
Master Thesis

Software Engineering

December 2011

School of Computing Blekinge Institute of Technology SE-371 79 Karlskrona Sweden

Automated Software Testing A Study of State of Practice

Dudekula Mohammad Rafi

Katam Reddy Kiran Moses


This thesis is submitted to the School of Engineering at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Software Engineering. The thesis is equivalent to 20 weeks of full time studies.

Contact Information:

Authors:
Rafi Mohammad Dudekula
Address: Lindblomsvägen 107, Lgh 1102, 372 33 Ronneby
E-mail: [email protected]

Kiran Moses Katam Reddy
Address: Lindblomsvägen 98, Lgh 1101, 372 33 Ronneby
E-mail: [email protected]

University advisor:
Dr. Kai Petersen
School of Engineering, Blekinge Institute of Technology
[email protected]
Ericsson AB, Karlskrona
[email protected]

School of Computing Blekinge Institute of Technology SE-371 79 Karlskrona Sweden

Internet: www.bth.se/com   Phone: +46 455 38 50 00   Fax: +46 455 38 50 57


ACKNOWLEDGMENT

We would like to express our profound gratitude to our thesis supervisor, Dr. Kai Petersen, for his guidance, supervision and support throughout this study. Our special thanks also go to the staff of BTH for their service and kind support. We would like to thank Blekinge Institute of Technology (BTH) for providing a free study opportunity and an international study environment, which will no doubt add value to our future careers and endeavours. We are indeed thankful to our families and friends for their endless support and encouragement throughout this study. It would not have been possible without all of you.


ABSTRACT

Context: Software testing is expensive, labor intensive and consumes a lot of time in the software development life cycle, so there has always been a need to decrease the testing time. This has led to a focus on Automated Software Testing (AST), because with automated testing and specific tools this effort can be dramatically reduced and the costs related to testing can decrease [11]. Manual Testing (MT) requires a lot of effort and hard work when measured in person-months [11]. Automated software testing helps to decrease this workload by handing some testing tasks over to computers. Computer systems are cheap and fast, do not get bored, and can work continuously over weekends. Because of these advantages, many researchers are working towards the automation of software testing, which can help to complete testing in less time [10].

Objectives: The main aims of this thesis are:

1.) To systematically classify contributions within AST.

2.) To identify the different benefits and challenges of AST.

3.) To identify whether the reported benefits and challenges found in the literature are prevalent in industry.

Methods: To fulfill our aims and objectives, we used the systematic mapping research methodology to systematically classify contributions within AST. We also used an SLR to identify the different benefits and challenges of AST. Finally, we performed a web-based survey to validate the findings of the SLR.

Results: From the systematic mapping, the main aspects identified within AST include the purpose of automation, levels of testing, technology used, the research types used, and the frequency of AST studies over time. From the systematic literature review, we found the benefits and challenges of AST. The benefits of AST include higher product quality, less testing time, reliability, increase in confidence, reusability, less human effort, reduction of cost, and increase in fault detection. The challenges include failure to achieve expected goals, difficulty in maintaining test automation, the time test automation needs to mature, false expectations, and a lack of people skilled in test automation tools.

From the web survey, we observed that almost all the benefits and challenges are prevalent in industry. The results for the benefits of fault detection and confidence are contrary to the results of the SLR. The challenge concerning an appropriate test automation strategy drew 24% disagreement and 30% uncertainty from the respondents; the reason is that the automation strategy depends entirely on the test manager of the project. When asked "Does automated software testing fully replace manual testing?", 80% disagreed.

Conclusion: The classification of the AST studies using systematic mapping gives an overview of the work done in the area of AST and also helps to determine the research coverage of the area. Researchers can use the gaps found in the mapping study to direct future work. The results of the SLR and the web survey clearly show that practitioners recognize the benefits and challenges of AST reported in the literature.

Keywords: Automated Software Testing (AST), Manual Testing (MT), automated test case generation & selection (ATCGS), automated test data generation & selection (ATDGS)


TABLE OF CONTENTS

1 INTRODUCTION
  1.1 Thesis Structure
  1.2 Terminology
2 BACKGROUND AND RELATED WORK
  2.1 Testing Types
    2.1.1 White Box Testing
    2.1.2 Black Box Testing
  2.2 Testing Levels
    2.2.1 Unit Testing
    2.2.2 Integration Testing
    2.2.3 System Testing
    2.2.4 Acceptance Testing
  2.3 Reasons to Automate Software Testing
  2.4 Benefits and Challenges of Automated Software Testing
  2.5 Related Work
    2.5.1 Choice of Research Methods
    2.5.2 Systematic Mapping
    2.5.3 Systematic Literature Review
3 RESEARCH DESIGN
  3.1 Systematic Mapping Design
    3.1.1 S.M Step 1.) Definition of Research Questions (Review Scope)
    3.1.2 S.M Step 2.) Conduct Search (All Studies)
    3.1.3 S.M Step 3.) Pilot Selection Procedure (Determine Level of Interpretation)
    3.1.4 S.M Step 4.) Screening of Studies (Relevant Papers)
    3.1.5 S.M Step 5.) Keywording Using Abstracts (Developing Classification Scheme)
    3.1.6 S.M Step 6.) Data Extraction & Mapping Process (Systematic Map)
  3.2 Systematic Literature Review Design
    3.2.1 S.L.R Step 1.) Defining Research Questions
    3.2.2 S.L.R Step 2.) Define Search Strategy
    3.2.3 S.L.R Step 3.) Pilot Selection Procedure
    3.2.4 S.L.R Step 4.) Study Selection Criteria
    3.2.5 S.L.R Step 5.) Study Quality Assessment
    3.2.6 S.L.R Step 6.) Data Extraction Strategy
    3.2.7 S.L.R Step 7.) Data Synthesis Strategy
4 RESULTS AND ANALYSIS
  4.1 Systematic Mapping
    4.1.1 Primary Studies Selection
    SM-RQ1: What types of contributions are presented in the selected studies?
    SM-RQ1.1: What are the different aspects of AST within the selected studies?
    4.1.2 SM-RQ1.2: What types of studies with respect to technology (programming language/platforms/interface) are discussed in the selected studies?
    4.1.3 SM-RQ1.3: What is the frequency of the selected studies over time?
    4.1.4 SM-RQ1.4: Which research types are used in the selected studies?
  4.2 Systematic Literature Review
    4.2.1 SLR Results Overview
    4.2.2 Data Analysis of SLR
5 SURVEY
  5.1 Questionnaire Design
  5.2 Pilot Survey
  5.3 Survey Results and Analysis
    5.3.1 Demographic Questions
    5.3.2 Results and Analysis of Questions Related to Benefits of AST
    5.3.3 Results and Analysis of Questions Related to Challenges of AST
    5.3.4 Comparative Analysis of the Benefits and Challenges Obtained from SLR and Survey
6 VALIDITY THREATS
7 CONCLUSION
  7.1 Major Findings in Systematic Mapping
  7.2 Major Findings in Systematic Literature Review
  7.3 Major Findings in Survey
  7.4 Future Work
8 REFERENCES
APPENDIX A: Kappa Analysis
APPENDIX B: Systematic Mapping Studies
APPENDIX C: Systematic Literature Review Studies
APPENDIX D: Survey Questionnaire
APPENDIX E: Research Types
APPENDIX F: Testing Levels


LIST OF TABLES

Table 1: Description of research questions for systematic mapping
Table 2: Description of population and intervention technique based on research questions for systematic mapping
Table 3: Final list of keywords for performing the systematic mapping study
Table 4: Detailed inclusion criteria for selecting primary studies
Table 5: Detailed exclusion criteria for selecting primary studies
Table 6: Data extraction categories for selected systematic mapping studies
Table 7: Research questions for SLR
Table 8: Inclusion criteria for systematic literature review
Table 9: Exclusion criteria for systematic literature review
Table 10: Quality assessment criteria
Table 11: Data extraction strategy
Table 12: Data synthesis strategy
Table 13: Execution of search queries on different databases
Table 14: Categorization of studies based on purpose of automation
Table 15: Categorization of studies based on the aspect technology used
Table 16: Categorization of studies based on research type
Table 17: Research methodology used and the number of articles
Table 18: Frequency of SLR studies over the period of time
Table 19: SLR data synthesis
Table 20: SLR studies related to benefits of Automated Software Testing
Table 21: SLR studies related to challenges of Automated Software Testing
Table 22: Number of survey respondents working in different application domains
Table 23: Respondents' working experience
Table 24: Comparison of the benefits obtained from SLR and survey
Table 25: Rating of the AST benefits based on the survey respondents
Table 26: Comparison of the challenges obtained from SLR and survey
Table 27: Rating of the AST challenges based on the survey respondents


LIST OF FIGURES

Figure 1: Structure of thesis
Figure 2: View of White Box Testing
Figure 3: View of Black Box Testing
Figure 4: V-Model [6]
Figure 5: Research design
Figure 6: Selection of primary studies for systematic mapping
Figure 7: Overall bubble plot for AST studies
Figure 8: Classification of AST
Figure 9: Frequency of papers based on the testing levels
Figure 10: Categorization of articles based on the technology
Figure 11: Research types over the years 1999-2011
Figure 12: Categorization of AST studies based on the research types
Figure 13: Description of systematic literature review
Figure 14: Pie chart showing the number of articles selected for conducting the SLR
Figure 15: Survey pilot process used
Figure 16: Percentage of respondents distributed based on role
Figure 17: Automated software testing provides more confidence in the quality of the product and increases the ability to meet schedules
Figure 18: Automated testing can improve product quality through better test coverage
Figure 19: High reusability of the tests makes automated testing productive
Figure 20: Complete automation reduces the cost of software testing dramatically and also facilitates continuous testing
Figure 21: Automated software testing saves time and cost, as tests can be re-run again and again much quicker than manual testing and with no additional cost
Figure 22: Automated software testing facilitates high fault detection
Figure 23: Automated software testing enables the repeatability of tests, which gives the possibility to do more tests in less time
Figure 24: Testers should have enough technical skills to build successful automation
Figure 25: Automated testing needs extra effort for designing and maintaining test scripts
Figure 26: Compared with manual testing, automated software testing
Figure 27: Automated software testing requires less effort on the developer's side, but cannot find complex bugs as manual software testing does
Figure 28: The investment in application-specific test infrastructure can significantly reduce the extra effort that test automation requires from testers
Figure 29: Compared with manual testing, the cost of automated testing is higher, especially at the beginning of the automation process; however, automated software testing can be productive after a period of time
Figure 30: Most of the automated testing tools available in the market are incompatible, do not provide what you need, or do not fit your environment
Figure 31: Results for the survey question "Does automated software testing fully replace manual testing?"
Figure 32: Percentage of respondents based on the testing approach
Figure 33: Percentage of respondents based on the software development method used
Figure 34: Cross analysis based on the test approach used for SQ 15
Figure 35: Cross analysis based on the software development approach used for SQ 15
Figure 36: Satisfaction level of respondents on AST
Figure 37: Cross analysis based on the test approach used for SQ 16
Figure 38: Cross analysis based on the software development approach used for SQ 16


1 INTRODUCTION

The concept of software testing has evolved since the 1970s as an important process in software development, because through it the quality of the software can be improved by checking for the errors and faults it contains. According to Burnstein [1], "Software testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software", or "Software testing can be described as a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes".

The development of software is a complex activity, and with software systems growing in size, development has become more complex still. At the same time, it becomes difficult to maintain quality as systems grow in size and complexity. As quality becomes an important concern, software testing is one area to focus on in order to improve it [2]. Software testing can be done manually or automatically. Manual testing requires human input, analysis and evaluation; because it relies on human intervention, it is naturally prone to errors, since humans often tire of repeating the same process [3]. According to Dustin et al. [4], "Automated Software Testing (AST) is a process in which the testing activities are automated, which includes development of test cases, execution and verification of the test scripts, and use of automated tools". In AST not all tests are automated, so it is important to determine which test cases should be automated first. The advantage of AST depends on how many times a given test can be repeated [2]. If automation is rushed in the initial stages, it can lead to poorly automated tests that are difficult to maintain and vulnerable to software changes. Most companies opt for AST to achieve benefits such as quality improvement, faster time to market and less human effort, but various studies show that achieving this is not easy [2]. Achieving efficient AST depends on performing tests within a shorter time and with less effort, and organizations experience problems with maintainability and the time-consuming development of automated testing tools [2]. There are plenty of tools available in industry to address this problem; different tools have been developed for different programming languages and testing methods, but the selected tool needs to support most of the organization's testing requirements.

The literature has paid a lot of attention to the field of AST, reporting different automation techniques, methods, approaches and tools. As many articles have been published in this area, there is a need to structure and evaluate the area of AST, which in turn requires systematically classifying its different aspects. Torkar [3] conducted a literature study and established a few definitions related to testing and AST, based on which he categorized software testing and AST. Regarding the categorization of AST tools, Sergey Uspenskiy [28] published a paper in which he developed an automated test model to classify tools so that a software tester can obtain the tool, or list of tools, most suitable for concrete tasks. Even though these articles are very valuable to the area of AST, they do not cover all of its aspects, such as testing levels and the interfaces used (languages and scripting languages), so there is a gap to fill. To achieve this goal, we employed the systematic mapping research methodology to systematically categorize the different aspects of AST; systematic mapping is the best-suited research method for this gap, as it allows a large set of papers to be classified in an efficient way.


Automated Software Testing (AST) is not a silver bullet; it has its limitations and cannot be applied to every test. Berner [9] found that the majority of new defects are detected by manual tasks rather than automated ones, since 60% of the bugs are found during an automated testing effort and 80% of these are found during the development of the tests. There are some empirical studies by practitioners [2, 8, 9, 11, 12] on the benefits and challenges of AST, such as consistency and repeatability of tests, reuse of tests, earlier time to market, the expectation that automated tests will find a lot of new defects, a false sense of security, and maintenance. There is still a need to explore the benefits and challenges of AST. To this end, we performed a systematic literature review to find the empirical evidence regarding the benefits and challenges of AST. Finally, a survey was conducted to find out whether the benefits and challenges of AST reported in the literature are prevalent in the software industry.

The aim of this thesis is to systematically classify the current research literature in the area of Automated Software Testing (AST) and to find empirical evidence regarding the AST benefits and challenges. This aim will be met through the following objectives:

To classify different contributions within Automated Software Testing (AST).
To identify the benefits of Automated Software Testing (AST) reported in the literature.
To identify the challenges of Automated Software Testing (AST) reported in the literature.
To identify whether the reported benefits and challenges found in the literature are prevalent in industry.

1.1 Thesis Structure

In the Introduction, we present a description of Automated Software Testing (AST), the main aim of the thesis and its objectives, and the terminology used in the thesis. In Section 2, we provide the background needed to understand the concept of AST and the need for automating software testing before continuing with the subsequent sections; this section also presents a detailed account of related work on AST classification, the benefits and challenges of AST, and the research gap addressed in this thesis. In Section 3, our research design is presented, explaining the research methodologies used to address the research questions. In Section 4, the results and a detailed analysis of the research are presented: Section 4.1 presents the detailed analysis of the results of the Systematic Mapping (SM), and Section 4.2 presents the detailed analysis of the results of the Systematic Literature Review (SLR). Section 5 deals with the survey conducted to validate the findings of the SLR; the questionnaire design and a detailed analysis of the results obtained from the survey are presented there. In Section 6, we present the validity threats to this research. Finally, all the work conducted in this thesis is summarized, the major findings for each research methodology are presented, and future work related to this thesis is outlined.


[Figure 1 is a block diagram of the thesis structure: Introduction (aims and objectives, thesis structure, terminology), Background (basic testing types, testing levels, reasons to automate AST, related work), Research Design (systematic mapping design, systematic literature review design), Results and Analysis (systematic mapping results and analysis, systematic literature review results and analysis), Survey (questionnaire design, pilot survey, survey results and analysis), Validity Threats, and Future Work & Conclusion (findings in SM, SLR and web survey, with conclusion).]

Figure 1: Structure of thesis


1.2 Terminology

AST Automated Software Testing

MT Manual Testing

SLR Systematic Literature Review

SM Systematic Mapping

ATDGS Automated test data generation & selection

ATCGS Automated test case generation & selection

GUI Graphical User Interface

WBT White Box Testing

BBT Black Box Testing

AAT Automated Acceptance Testing


2 BACKGROUND AND RELATED WORK

Software plays an important role in our lives, both economically and socially, so it is necessary to develop quality software [1]. As quality becomes an issue, software testing is the area to concentrate on in order to improve it [2]. Software testing is an art that evaluates a product, program or system to determine whether it achieves the desired results [4]. Software testing accounts for up to 50% of the total cost of software development [12]; in order to reduce the cost of manual software testing, researchers are working towards increasing the automation of software testing. Before going deeper into Automated Software Testing (AST), the basic testing terminology is described.

2.1 TESTING TYPES

2.1.1 White Box Testing

White Box Testing is also known as structural testing or glass-box testing. In White Box Testing, the software engineer derives test cases using knowledge of the internal structure of the software [3].

Figure 2: View of White Box Testing

2.1.2 Black Box Testing

In Black Box Testing, the software engineer views the external part (specification/interface) of the software instead of its internal structure. It is usually based on the specification of the program interface, such as procedure and function headers, and it also needs to specify the program input and the expected program output [3].

Figure 3: View of Black Box Testing

In the software development life cycle, software testing is an important phase, but problems arise when testing is considered late in the life cycle. John Watkins [6] presents a 'V-Model' in which every development activity has a corresponding testing activity. Figure 4 shows the four phases of development and the corresponding testing activities.


[Figure 4 shows the V-Model: the development phases Requirements, Functional Specification, Design and Coding on one branch of the software development life cycle, paired with the testing levels Acceptance testing, System testing, Integration testing and Unit testing on the other. Each development phase plans its corresponding tests (P1: plan acceptance tests, P2: plan integration tests, P3: plan system tests, P4: plan unit tests), which are then written and run at the matching testing level.]

Figure 4: V-Model [6]

2.2 TESTING LEVELS

Traditionally, the most common testing levels include unit testing, integration testing, system testing and acceptance testing [1][2].

2.2.1 Unit Testing

Unit testing is the most basic testing level: the software is divided into small units, and each unit is tested to find errors in the software program [6].
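As a minimal illustration of what an automated test at this level looks like (our own sketch; the function under test and the choice of Python's unittest framework are not from the thesis), a unit test exercises one small unit in isolation and checks its behavior mechanically:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """The unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertAlmostEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Once written, such tests can be re-run automatically on every change, which is precisely the repeatability argument made for AST throughout this thesis.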

2.2.2 Integration Testing

The objective of integration testing is to determine that the software modules interact together in a correct and stable manner [6].

2.2.3 System Testing

During this phase, developers test the system's functionality and stability as well as non-functional requirements such as performance and reliability [6].

2.2.4 Acceptance Testing

The defects found in system testing are corrected, and the system is then tested by the customer for acceptance. In acceptance testing the customer tests whether the system meets the requirements and works correctly [6].

Automated software testing plays a vital role in the field of software engineering. It is an excellent way to improve the quality of software according to the standards in an organization, and it also increases the efficiency of the product. Software test automation has proved to be the strongest weapon in the complex field of effective software testing [7].

2.3 Reasons to automate software testing

According to [9], test execution performed manually is considered inefficient and error prone, whereas automated test execution increases efficiency and reduces the testers' workload. Automated test case execution also helps to reduce cost because it decreases human involvement. There are benefits as well as challenges in automated software testing.

2.4 Benefits and challenges of automated software testing

According to Fewster [2], Kaner et al. [29], Rice [30] and Pettichord [14], the common benefits and challenges of Automated Software Testing (AST) include:

Benefits:
- Run more tests more often.
- Perform tests which would be difficult or impossible to do manually.
- Better use of resources.
- Consistency and repeatability of tests.
- Reuse of tests.
- Earlier time to market.
- Increased confidence.

Challenges:
- Unrealistic expectations: managers may believe that AST can solve their problems and improve quality.
- Poor testing practice.
- Expectation that automated tests will find a lot of new defects.
- False sense of security.
- Maintenance.
- Technical problems.
- Organizational issues.

2.5 Related Work

Regarding related work in this area, Torkar [3] performed a literature study classifying general aspects of testing and automated testing by establishing sub-areas such as:

- Test creation
- Test execution
- Result collection
- Result evaluation
- Test quality analysis

He used previous studies related to these sub-areas and explained what had been done previously in each of them. He also developed a model to classify the different aspects of AST; for this model the author proposed a few definitions, from which a model was developed that can be used to compare, classify or elaborate on automated software testing. Later, the validity of the model was checked by applying it to three randomly drawn samples and by comparing two similar techniques. According to the author, this model can be used to classify tools and techniques, which will help the research community to decrease the time spent on understanding the aspects of AST. The main point here is that we use the same classification as Torkar [3] to categorize the aspect 'purpose of automation'. This thesis also explores the benefits and challenges of AST. Below we present the related work regarding the purpose of AST.

Automated Software Testing (AST) is an alternative wherever adequate testing is needed: testing that takes hours to run manually can be reduced to minutes [2]. Fewster [2] describes that we do not have to automate everything to get benefits from test automation; if 10% of the tests are run 90% of the time, automating those tests covers most of the test executions. Fewster [2] recommends a few factors to take into account when deciding what to automate, including:

- The most important tests.
- A set of breadth tests (sampling each system area overall).
- Tests for the most important functions.
- Tests that are easiest to automate.
- Tests that will give the quickest payback.
- Tests that are run most often.
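To make the "quickest payback" factor concrete, here is a hedged back-of-the-envelope sketch (our own model, not a formula from Fewster [2]): automating a test starts to pay off once the accumulated saving over manual execution exceeds the one-off scripting cost.

```python
def runs_to_break_even(manual_cost, automation_cost, upkeep_per_run):
    """Runs after which automating a test becomes cheaper than manual execution.

    All costs in the same unit (e.g. person-hours). Returns None when automation
    never pays off, i.e. per-run upkeep eats the whole manual saving.
    """
    saving_per_run = manual_cost - upkeep_per_run
    if saving_per_run <= 0:
        return None
    return automation_cost / saving_per_run

# Hypothetical regression test run in every nightly build: 0.5 h manual,
# 8 h to script, 0.05 h average upkeep per run -> pays off after ~18 runs.
print(runs_to_break_even(0.5, 8.0, 0.05))
```

On this toy model, a test run nightly repays its automation cost within weeks, while a test run once per release may never repay it, which matches the "run most often" criterion in the list above.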

Automating the important tests first yields better results with greater payback. Which tests to automate depends on how important they are: some tests are more important than others, some should be run every time anything changes, and some only when a particular function changes [2]. Automating tests can reduce the test engineer's workload, improve software quality and shorten the testing time. There are many benefits to using Automated Software Testing (AST), but achieving them is not an easy task; several authors have reported challenges as well as benefits.

In our thesis, we used the systematic mapping research methodology to classify different contributions (such as model, method, tool, etc.) within AST. In addition, different aspects of AST in the selected literature, such as the technology used, research type, frequency of studies over time, and testing levels, are classified. Later, we performed a systematic literature review to find the empirical evidence regarding the benefits and challenges of AST. We also performed a web-based opinion survey to find out whether the AST benefits and challenges reported in the literature are prevalent in the software industry, which helped us to validate the results of the SLR.

2.5.1 Choice of research methods

In our thesis we use systematic mapping (SM), systematic literature review (SLR) and a survey as our research methodologies. To perform the SM and SLR, we followed Kitchenham's guidelines [15, 16], which provide a repeatable and systematic process for answering the research questions. These two methodologies are closely related: the early stages of a systematic mapping are very similar to those of a systematic review [22], so in some sense systematic mapping is a precursor to systematic review. According to Kitchenham et al. [15], "A systematic mapping study allows the evidence in a domain to be plotted at a high level of granularity. This allows for the identification of evidence clusters and evidence deserts to direct the focus of future systematic reviews and to identify areas for more primary studies to be conducted", and "A systematic review is a means of identifying, evaluating and interpreting all the evidence from research related to a selected topic of interest".

In this thesis, systematic mapping and systematic literature review are considered the best research methodologies to get the desired results. The reason is that AST is a mature area


and a lot of work has already been done in it. AST has a vast literature, and our aim was to systematically classify the contributions within it. Another important aspect is repeatability, which an ordinary literature review does not provide, since it is often not well documented what was done; in SM, each step taken to identify the primary studies is clearly documented [21]. This can also be considered a good step towards a systematic review, and in this thesis the systematic literature review complements the systematic mapping. Looking at the literature on both systematic mapping and systematic review, we found the required steps to follow in our research. According to [25], the process of systematic mapping consists of five essential steps: (i) definition of research questions (research scope), (ii) conducting the search (all studies), (iii) screening of papers (relevant papers), (iv) keywording of abstracts (classification scheme), and (v) data extraction and mapping (systematic map). In our research, the systematic mapping is followed by the systematic review. According to [16], a systematic review includes seven steps: (i) research questions, (ii) conducting the search strategy, (iii) inclusion/exclusion criteria, (iv) pilot selection, (v) data quality assessment, (vi) data extraction strategy, and (vii) data synthesis. Steps (ii), (iii) and (iv) are similar in the two methods, so the primary studies selected for the systematic mapping can be reused to obtain the desired results in the systematic review.

The SLR is used to summarize the empirical evidence regarding the benefits and challenges of AST. According to [16], an SLR is a repeatable process with a predefined search strategy to comprehensively aggregate the published literature. Since we need to find empirical evidence regarding the benefits and challenges of AST, we consider the SLR the best research method to employ here: its predefined strategy helps to find the primary studies through an approach that allows evidence to be synthesized systematically [16].

Finally, we conducted a survey to find out whether the benefits and challenges of AST reported in the literature are prevalent in industry. We found the survey to be the best option for getting the desired results compared to other research methods such as case studies and experiments. Experiments are not appropriate for our study, as they are controlled studies [59]. Case studies and surveys are well-known research methods for conducting sampling studies; case studies are limited to the detailed analysis of a particular situation, whereas a survey allows us to collect a large number of samples [58]. We therefore selected a web survey, given that we wanted to identify whether the challenges and benefits of AST are common in the software industry at large.

2.5.2 Systematic mapping

The main purpose of a systematic mapping is to provide a broad overview of a research area by clearly investigating the available research in the field of interest and showing the frequency of studies in that field [15, 16, 17, 18]. Systematic mapping is a methodology frequently used in medical research but neglected in the area of software engineering. Recently, a few systematic mapping studies have been performed. Naseer et al. [19] performed a systematic mapping to classify the contributions within Value Based Software Engineering (VBSE) and a systematic review to empirically investigate and determine the practical usability and usefulness of VBSE studies. Bailey et al. [20] performed a systematic mapping on software design methods to determine to what extent the empirical evidence supports them. Mujtaba et al. [21] performed a systematic mapping to classify and map the studies related to software product lines. David et al. [22] performed a systematic mapping of mapping studies themselves, to specify the challenges and usefulness of systematic mapping studies. Dybå and Dingsøyr [23] performed a systematic mapping to find the empirical studies related to agile software development. Jalali and Wohlin [24] performed a systematic mapping on agile practices in global software development, to find out under which circumstances agile practices have been applied efficiently in global software development and to classify the studies on aspects such as research type and distribution. In addition, Afzal et al. [26] performed


a systematic mapping to classify and review studies based on the application of search-based optimization techniques to non-functional testing.

According to [25], the main goals of a systematic mapping study are to plot the frequency of studies, to get an overview of the studies over time, and to categorize the studies based on research type. According to [21, 25], a systematic mapping study consists of the following steps:

1) Definition of research questions (review scope). The research questions are formulated in such a way that they satisfy the goals of the systematic mapping study [21, 25].

2) Conduct search (all studies). The search queries are intended to identify the primary studies; the search string can be formulated using the population, intervention, comparison and outcome technique [16, 21, 25]. The search strings are applied to databases such as IEEE, Engineering Village, Scopus, ACM and Google Scholar.

3) Pilot selection procedure (determine level of interpretation). As there are two researchers in this study, the pilot selection procedure helps them develop a common understanding [62]. Samples of research papers are taken from the databases, and Cohen's kappa value is calculated to determine the level of agreement between the researchers (a small computational sketch follows this list).

4) Screening of studies (relevant papers). Relevant papers are obtained by individually applying the inclusion criteria, and non-relevant papers are excluded by individually applying the exclusion criteria. This step excludes the studies that are not relevant for answering the research questions [16].

5) Keywording using abstracts (developing the classification scheme). According to [16], keywording is done to reduce the time needed for developing the classification scheme, and it can be done in two ways: first, by going through the abstracts of the relevant papers, identifying the context of the research area, and defining categories by combining sets of keywords [16]; second, if the abstract does not give enough information about the context, by reviewing the introduction and conclusion to prepare the final set of keywords [21, 25].

6) Data extraction and mapping process (systematic map). After developing the classification scheme, the studies identified in step 4 (relevant papers) are categorized based on the classification scheme [21, 25]; later, a bubble graph is drawn to show the systematic map of the studies [21, 25].
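To make the agreement check in step 3 concrete, here is a minimal sketch of the Cohen's kappa computation (our own illustration in Python; the thesis reports kappa values in Appendix A but does not prescribe an implementation):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired decisions (e.g. include/exclude)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of papers both researchers judged identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's own label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical pilot: two researchers screen ten candidate papers.
a = ["include", "exclude", "include", "include", "exclude",
     "include", "exclude", "exclude", "include", "include"]
b = ["include", "exclude", "include", "exclude", "exclude",
     "include", "exclude", "include", "include", "include"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.58, i.e. moderate agreement
```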

2.5.3 Systematic Literature Review

Systematic reviews have been broadly used in research areas such as the psychological sciences, statistical sciences, education, industrial/organizational psychology, medicine, the health sciences, and software engineering [27]. Kitchenham [15] proposed guidelines for systematic reviews appropriate for software engineering researchers. Based on these


guidelines, many systematic literature reviews have been conducted in the field of software engineering. The results of a systematic mapping reveal research areas appropriate for performing a systematic review. The main motivation for conducting a systematic review is to summarize and aggregate the evidence presented in primary research studies [15]. It presents a clear picture of a specified area by identifying, interpreting and evaluating all the available studies [15]. A systematic review is a more suitable method for summarizing evidence in automated software testing and for identifying research gaps than classical reviews, which are often not repeatable or extendable due to lack of rigor [15, 16]. There are many research studies in the field of AST; the aim of our systematic literature review is to find the challenges and benefits of AST. The systematic review consists of the following steps [15, 16]:

1) Defining research questions. A systematic review starts with defining research questions that reflect the goals of the systematic literature review [15].

2) Define search strategy. The search strategy helps to identify and formulate the search strings used to identify the primary studies; the search string can be formulated using the population, intervention, comparison and outcome technique [21, 25].

3) Study selection criteria and procedure. This step determines the studies to be included in or excluded from the systematic literature review [15].

4) Pilot selection procedure. As there are two researchers in this study, the pilot selection procedure helps them develop a common understanding [15]. Here the authors apply the inclusion/exclusion criteria to a set of studies taken from the databases.

5) Study quality assessment. The purpose of the study quality assessment is to assess the quality of the selected papers [15].

6) Data extraction strategy. The data extraction strategy is used to extract all the relevant information necessary to address the research questions and for data synthesis [15].

7) Data synthesis strategy. In this step the data is synthesized by collecting and summarizing the results obtained from the primary studies [15].


3 RESEARCH DESIGN

[Figure 5 is a flow chart of the research design. A shared first step, definition of research questions, branches into the systematic mapping questions (S.M step 1) and the SLR questions (S.L.R step 1). The systematic mapping track continues with: conduct search over all studies (step 2), pilot selection procedure to determine the level of understanding between the researchers (step 3), screening of studies with detailed inclusion and exclusion criteria (step 4), keywording using the abstracts to develop a classification scheme (step 5), and data extraction and mapping based on various aspects of AST (step 6). The SLR track continues with: define search strategy (step 2), pilot selection procedure (step 3), study selection criteria with detailed inclusion and exclusion criteria (step 4), study quality assessment checking the quality of the empirical evidence (step 5), and data extraction and synthesis, in which all the gathered data is extracted, analyzed and summarized (steps 6 and 7). Both tracks feed into the systematic mapping and systematic literature review results; the web survey (questionnaire design and pilot survey) is performed based on the input from the SLR results and yields the survey results and analysis.]

Figure 5: Research Design


3.1 Systematic mapping design

This section gives a detailed description of the systematic mapping design. Section 3.1.1 contains the research questions of this study. Section 3.1.2 presents the search string formation and the execution of the queries. Section 3.1.3 describes the pilot selection procedure used for this study, and Section 3.1.4 describes the screening of the AST studies. The formation of the classification scheme is shown in Section 3.1.5, and the data extraction procedure and the systematic mapping of the studies are shown in Section 3.1.6.

3.1.1 S.M Step 1.) Definition of Research Questions (Review Scope)

The research questions are formulated in such a way that they satisfy the goals of the systematic mapping study [21, 25].

Table 1: Description of research questions for systematic mapping

RQ 1: What types of contributions (model, method, tool, etc.) are presented in the selected studies?
Motivation: To know the frequencies of the contributions made, to identify research gaps, and to be able to classify the studies according to contribution.

RQ 1.1: What are the different aspects of automated software testing reported in the literature?
Motivation: To identify and classify the different aspects (purpose of automation, research types, etc.) of automated software testing reported in the literature.

RQ 1.2: What types of studies with respect to technology (programming language/platforms) are discussed in the selected studies?
Motivation: To identify and classify the automated software testing literature with respect to technology and to get an overview of the studies (programming language/platforms).

RQ 1.3: What is the frequency of the selected studies over time?
Motivation: To plot the frequencies of studies to get an overview of the studies over time.

RQ 1.4: What is the frequency of the research types over time?
Motivation: To see how the research emphasis on AST has evolved.

3.1.2 S.M Step 2.) Conduct search (All studies)

The search queries are mainly intended to identify the primary studies; the search string can be formulated using the population, intervention, comparison and outcome technique [16, 21, 25].


The search strings were applied to the following databases: IEEE Xplore, Engineering Village (Inspec and Compendex), Scopus, ACM and SpringerLink (see Table 13).

The search process was done using the following steps:

A. The main keywords were found by applying the population, intervention, comparison and outcome technique.
B. More keywords were collected by finding synonyms of the keywords recognized in the first step; further keywords were identified by going through the titles, abstracts and index terms of the AST articles we already had.
C. Finally, the search strings were framed using the Boolean operators "AND" and "OR".
D. A snowball sampling approach was also used to find relevant articles.

STEP A:

The main keywords were found by applying the population, intervention, comparison and outcome technique.

POPULATION: In this study the population is automated software testing. To search for the population we used the keywords "Automated software testing" and "Software test automation".

INTERVENTION: The intervention consists of the methods, techniques, approaches and tools in automated software testing.

CONTEXT: The context is industrial or empirical studies.

OUTCOME: The outcome is the different aspects related to AST, such as tools, techniques, methods and approaches. In this search we also used keywords like "case study", "industrial" and "practical".

Table 2: Population, intervention and outcome of the systematic mapping research questions

For all research questions the population is automated software testing and the intervention is AST approaches, techniques, tools and methods. The outcomes per question are:
RQ 1: a bubble graph showing the different contributions across the 227 articles.
RQ 1.1: a graph and table showing the different aspects of AST.
RQ 1.2: a graph and table showing the studies categorized by the technology used.
RQ 1.3: a graph and table showing the frequency of the studies over time.
RQ 1.4: a graph and table showing the frequency of the studies per research type.

STEP B:

More keywords were collected by finding synonyms of the keywords recognized in the first step; further keywords were identified by going through the titles, abstracts and index terms of the AST articles we already had.

The final list of keywords necessary for our research is:

Table 3: Final list of keywords for performing the systematic mapping study

SET A: Software, Application, Program, Develop*
SET B: Tool*, Automat*
SET C: Test*, Quality assurance, Validation, Verification
SET D: Empirical, Industrial, Practical, Case Study, Survey, Experience*, Experiment*

STEP C:

The following search string was developed from the above keywords:

(SET A) AND (SET B) AND (SET C) AND (SET D)
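For illustration, a minimal Python sketch of how such a Boolean string can be assembled from the keyword sets of Table 3; the exact quoting and wildcard syntax differs per database, so this produces only the generic form:

# Assemble the generic Boolean search string from the keyword
# sets of Table 3: synonyms within a set are joined with OR,
# and the four sets are joined with AND.

SET_A = ["software", "application", "program", "develop*"]
SET_B = ["tool*", "automat*"]
SET_C = ["test*", "quality assurance", "validation", "verification"]
SET_D = ["empirical", "industrial", "practical", "case study",
         "survey", "experience*", "experiment*"]

def or_group(terms):
    # Quote multi-word phrases so they are searched as phrases.
    quoted = ['"%s"' % t if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

search_string = " AND ".join(or_group(s) for s in (SET_A, SET_B, SET_C, SET_D))
print(search_string)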

STEP D:

Snowball sampling is used to find more articles by going through the references of the articles we already have.
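A minimal sketch of this backward snowballing loop; the paper identifiers and reference lists below are invented placeholders, since in practice the references come from the bibliographies of the selected articles:

# Backward snowball sampling: starting from the already-selected
# articles, follow their reference lists and queue every newly
# seen paper as a candidate for manual screening.

from collections import deque

references = {
    "SM1": ["P10", "P11"],   # hypothetical reference lists
    "P10": ["P12"],
    "P11": [],
    "P12": [],
}

def snowball(seeds, refs):
    seen, queue, candidates = set(seeds), deque(seeds), []
    while queue:
        paper = queue.popleft()
        for cited in refs.get(paper, []):
            if cited not in seen:
                seen.add(cited)
                candidates.append(cited)  # to be screened manually
                queue.append(cited)
    return candidates

print(snowball(["SM1"], references))  # ['P10', 'P11', 'P12']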


3.1.3 S.M Step 3.) Pilot selection procedure (Determine level of interpretation)

A pilot selection was performed on a set of 50 articles before the actual inclusion/exclusion procedure. The two researchers applied the developed inclusion/exclusion criteria to the 50 articles individually. During this process several conflicts arose between the researchers regarding the selection of articles; these were discussed and clarified based on suggestions from the supervisor. The selected studies were then distributed equally between the researchers, and Cohen's kappa was calculated to determine the level of agreement between them. The resulting inter-rater agreement is 0.605, which indicates good agreement between the researchers. The values calculated for the kappa analysis can be seen in Appendix A.
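For reference, a minimal sketch of how Cohen's kappa is computed for two raters; the decision vectors below are invented for illustration and are not the actual pilot data (the thesis reports kappa = 0.605 on its 50-article pilot):

# Cohen's kappa for two raters making include/exclude decisions
# on the same set of articles.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of articles rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal rates.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

rater_a = ["include", "exclude", "include", "include", "exclude"]
rater_b = ["include", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(rater_a, rater_b), 3))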

3.1.4 S.M Step 4.) Screening of studies (Relevant papers)

Relevant papers were obtained by individually applying the inclusion criteria, and non-relevant papers were excluded by individually applying the exclusion criteria. When the two researchers disagreed about a paper, it was read carefully and the selection was settled by discussion. This step excludes the studies that are not relevant for answering the research questions [16]. Tables 4 and 5 give a detailed description of the inclusion and exclusion criteria; a minimal sketch of the resulting screening filter follows the two tables.

Inclusion criteria:

Table 4: Detailed inclusion criteria for selecting primary studies

1. Main criteria overall:
- Written in English
- Date of publication between 1999 and 2011
- Published in journal, conference or workshop proceedings
- Full text available
- Non-duplicate
- The article relates to the automation of software testing

2. Title and abstract:
- Contains the search words
- Has an empirical background
- Focuses on the application of automation methods, tools, techniques and approaches of AST

3. Introduction and conclusion:
- Contains an empirical background
- Focuses on the application of automation methods, tools, techniques and approaches of AST

4. Full text:
- Presence of empirical data in the paper
- Studies that present any type of evidence or evaluation related to AST

Exclusion criteria:

Table 5: Detailed exclusion criteria for selecting primary studies

- Studies containing the keywords but not in the domain of software engineering are excluded.
- Any study available in a language other than English is excluded.
- Any study that does not reflect any research type is excluded.
- Studies not available in full text are excluded.
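A minimal sketch of the screening filter implied by the overall criteria above; the Paper fields are hypothetical stand-ins for the metadata exported by a reference manager, and the manual title/abstract/full-text judgments are reduced to a single flag:

# Screening filter for the overall criteria of Tables 4 and 5.
# Deduplication and the staged manual reading are not modeled.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    language: str
    year: int
    venue_type: str   # "journal", "conference" or "workshop"
    full_text: bool
    about_ast: bool   # judged manually from title/abstract

def passes_main_criteria(p: Paper) -> bool:
    return (p.language == "English"
            and 1999 <= p.year <= 2011
            and p.venue_type in {"journal", "conference", "workshop"}
            and p.full_text
            and p.about_ast)

papers = [
    Paper("Automated GUI testing at scale", "English", 2009,
          "conference", True, True),
    Paper("Ein Ansatz zur Testautomatisierung", "German", 2008,
          "journal", True, True),
]
selected = [p for p in papers if passes_main_criteria(p)]
print([p.title for p in selected])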

3.1.5 S.M Step 5.) Key wording using abstracts (Developing the classification scheme)

According to [16], keywording is done to reduce the time needed for developing the classification scheme, and it can be done in two ways. First, by going through the abstracts of the relevant papers, identifying the context of the research area and defining categories by combining sets of keywords [16]. Secondly, if the abstract does not give enough information about the context of the research area, the introduction and conclusion are reviewed to prepare the final set of keywords for the systematic mapping [21, 25].

STEP 1:

In step 1, a set of keywords from the seminal research papers related to AST was identified. The seminal research papers are [1, 2, 3, 4, 6, 7, 10, 13, 28, 29]. The above-referenced literature was used to prepare an initial classification scheme. It should also be noted that this initial scheme evolved as more and more articles were processed.

STEP 2:

In step 2, each researcher went through the abstract to select keywords and to determine whether the paper relates to AST. If the abstract did not give a sufficient overview of the study, the researcher read the introduction and conclusion. Based on the papers, different codes were extracted related to AST, which helped derive the different aspects of AST.
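The keywording step can be roughly approximated as tallying candidate category terms over the abstracts; a sketch with invented abstracts and an assumed initial code list (in the study itself the codes were extracted manually):

# Tally occurrences of candidate contribution terms across
# abstracts to seed the classification scheme.

from collections import Counter

abstracts = [
    "We present a tool for automated regression test execution.",
    "An approach to automated test case generation from UML models.",
]
initial_codes = ["tool", "approach", "method", "framework", "model"]

tally = Counter()
for abstract in abstracts:
    words = abstract.lower().split()
    for code in initial_codes:
        if code in words:
            tally[code] += 1

print(tally.most_common())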

3.1.6 S.M Step 6.) Data Extraction & Mapping Process (Systematic Map)

After developing the classification scheme, the studies identified in S.M step 4 (relevant papers) are categorized based on the classification scheme. A bubble graph is then drawn to show the systematic mapping of the studies [21, 25].

Table 6: Data extraction categories for the selected systematic mapping studies

1. Research type

Validation research: The techniques investigated are novel and have not yet been implemented in practice. Techniques used are, for example, experiments, i.e., work done in the lab [25, SM3, SM4].
Evaluation research: The techniques are implemented in practice and an evaluation of the technique is conducted. That means it is shown how the technique is implemented in practice (solution implementation) and what the consequences of the implementation are in terms of benefits and drawbacks (implementation evaluation). This also includes identifying problems in industry [25, SM1, SM2].
Solution proposal: A solution for a problem is proposed; the solution can be either novel or a significant extension of an existing technique. The potential benefits and the applicability of the solution are shown by a small example or a good line of argumentation [25].
Philosophical papers: These papers sketch a new way of looking at existing things by structuring the field in the form of a taxonomy or conceptual framework [25, SM27, SM37].
Opinion papers: These papers express someone's personal opinion on whether a certain technique is good or bad, or how things should be done. They do not rely on related work and research methodologies [25, SM37, SM47].
Experience papers: Experience papers explain what has been done in practice and how; it has to be the personal experience of the author [25, SM09, SM16].

2. Purpose of automation

We divided this category into six types based on the article by Torkar [3].

1. Test generation and selection (TCS): This category has two subareas, automated test data generation and selection and automated test case generation and selection. Test data generation and selection covers the studies containing techniques and approaches to generate and manipulate test data automatically; test case generation and selection covers the studies containing techniques and approaches to generate test cases automatically.
2. Test execution and result collection (TERC): The papers that concentrate on the execution of test cases (test data) and the collection of test results are considered in this category. Here we come across tools, techniques and frameworks related to the execution of test cases.
3. Result evaluation and test quality analysis (REQA): The papers that concentrate on the evaluation of test results and the analysis of test data quality are considered in this category. Here we come across tools and techniques for examining test quality with respect to test coverage, software test data adequacy criteria and fault detection techniques.
4. TCS, TERC and REQA: the combination of categories 1, 2 and 3.
5. TCS and TERC: the combination of categories 1 and 2.
6. TERC and REQA: the combination of categories 2 and 3.

3. Testing level

The testing levels are based on the testing types found in the studies.
Unit testing: Unit testing is the basic testing level for finding errors in a software program; the software is divided into small units in order to find the errors [6].
System testing: During this phase, developers test the system's functionality and stability as well as non-functional requirements such as performance and reliability [6].
Regression testing: Regression testing means re-executing a subset of tests that has already been run, after the application's code has been modified, in order to verify that it still functions correctly [39].
Performance testing: Performance testing is aimed at verifying that the software meets the specified performance requirements [66].
Functional testing: Functional testing focuses on the correct implementation of the functional requirements. It is commonly referred to as black-box testing, meaning that it does not require knowledge of the underlying implementation [40].

4. Technology used

This category is mainly related to the different programming languages and interfaces. We found 21 different languages and interfaces: Java, Ada, C++, TTCN, C, UNIX, scripting languages, UML, SQL, XML, Smalltalk, .NET, Fortran, COBOL, Sulu, Petri net, Perl, Lustre, LOTOS, Python and IF language.

4.1 Language and platform used

C and Java: Studies that focus on the C and Java platforms for developing an AST technique or tool are considered in this category [SM41, SM21].
Ada: Ada is a structured programming language. It is used to generate the code that tests the correctness and performance of the implemented application [6].
TTCN: TTCN (Testing and Test Control Notation) is a programming language with different versions such as TTCN-1, TTCN-2 and TTCN-3; test suites written in TTCN consist of many test cases [3].
UML: UML (Unified Modeling Language) is a semi-formal language consisting of different models and use case scenarios. The testing techniques are applied to these models and scenarios [12].
XML: XML (Extensible Markup Language) consists of scripts that are easy to edit. These scripts are written to exercise and verify the functionality as required [100].
SQL: SQL (Structured Query Language) is used to manage database systems. In our articles, tests are generated for SQL queries where each generated test covers a program path [71].
The other technologies included in this category are Smalltalk, UNIX, .NET, Fortran, COBOL, Sulu, LOTOS, Python, IF language, Petri net and Perl. These interfaces are used for different purposes.

5. Contributions in AST

Process/approach: the description of activities, roles, responsibilities, actions and their workflow in a systematic way [21].
Method/technique: the description of rules and regulations for how tasks should be done [21].
Model: a description of the facts, omitting details, associated with a high level of formality [21].
Tool: a software tool that is used to solve specific problems in a process or sub-process area [21].
Framework: the structuring, planning, managing and controlling of the processes used to develop a subsystem or system, for example [SM19, SM72].
Recommendations: guidelines, practices, factors, criteria, theory and suggestions with respect to the comparison of factors, metrics, models, methods, etc., for example [SM05, SM09].

3.2 Systematic Literature Review design

This section gives a detailed description of the SLR design. The research questions are presented in Section 3.2.1. Section 3.2.2 describes the search strategy of this review. The pilot selection procedure and the selection of articles are presented in Sections 3.2.3 and 3.2.4. The quality assessment procedure, the data extraction strategy and the data synthesis strategy are presented in Sections 3.2.5, 3.2.6 and 3.2.7. Many research studies have been conducted in the field of AST; the aim of our systematic literature review is to find the challenges and benefits of AST. The systematic review consists of the following steps [15, 16].


3.2.1 S.L.R step 1.) Defining Research Questions

The research questions for the SLR are presented in the table below.

Table 7: Research questions for the SLR

RQ 1: What are the reported benefits of automated software testing?
Motivation: To identify the benefits of automated software testing reported in the literature.

RQ 2: What are the reported challenges of automated software testing?
Motivation: To identify the challenges of automated software testing reported in the literature.

3.2.2 S.L.R step 2.) Define Search Strategy

The search strategy helps to identify and formulate the search strings for finding the primary studies; the search string can be formulated using the population, intervention, comparison and outcome technique [21, 25]. The search for identifying studies related to AST was already done for the systematic mapping, so this step is not repeated.

3.2.3 S.L.R step 3.) Pilot selection procedure

The pilot selection procedure was already performed during the systematic mapping, but another pilot selection round was conducted to determine the level of agreement between the researchers before the actual selection. This pilot selection draws on the studies already selected for the systematic mapping. We performed a kappa analysis on a sample of 50 articles and obtained a value of 0.651, which indicates good agreement between the researchers.

3.2.4 S.L.R step 4.) Study selection criteria

The study selection for the systematic literature review is based on the studies selected in the systematic mapping, which extracted the studies related to automated software testing. We need to find the state of evidence regarding the benefits and challenges of automated software testing. As we already have the articles related to AST, it is easier to find the studies on the benefits and challenges of AST within the studies selected for the systematic mapping. The study selection criteria for the SLR are the inclusion/exclusion criteria in Tables 8 and 9.


Table 8: Inclusion criteria for the systematic literature review

- Peer-reviewed papers, including case studies, surveys and experience reports from practitioners about automated software testing.
- The article has an empirical background.

Table 9: Exclusion criteria for the systematic literature review

- Any study that does not reflect a research type with an empirical background.
- Articles that do not contain information related to the benefits and challenges of AST.

3.2.5 S.L.R step 5.) Study Quality Assessment

The quality is assessed using Table 10 to check the empirical evidence; a sketch of one possible way to score the checklist follows the table.

Table 10: Quality assessment criteria (answered Yes/No/Partial per study)

- Is the aim of the study clearly stated?
- Is the research methodology appropriate for the problem under concern?
- Are the challenges or benefits in relation to AST discussed in the paper?
- Are the findings of the study clearly stated?
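One common convention, assumed here and not prescribed by the thesis, is to score Yes as 1, Partial as 0.5 and No as 0 and sum over the criteria:

# Score the quality checklist of Table 10. The numeric mapping
# is an assumed convention; the thesis records only the answers.

SCORES = {"Yes": 1.0, "Partial": 0.5, "No": 0.0}

CRITERIA = [
    "Is the aim of the study clearly stated?",
    "Is the research methodology appropriate for the problem?",
    "Are challenges or benefits of AST discussed?",
    "Are the findings clearly stated?",
]

def quality_score(answers):
    # answers: one of "Yes"/"Partial"/"No" per criterion.
    assert len(answers) == len(CRITERIA)
    return sum(SCORES[a] for a in answers)

print(quality_score(["Yes", "Yes", "Partial", "No"]))  # 2.5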


3.2.6 S.L.R step 6.) Data Extraction Strategy

The data extraction strategy is applied to extract all the relevant information necessary to address the research questions and to support the data synthesis [15]. For the data extraction we performed thematic analysis in combination with narrative summaries [69]. The analysis of the data used for the systematic literature review is described in the results section. Table 11 is used to extract the relevant information; a sketch of the corresponding extraction record follows the table.

Table 11: Data extraction strategy

Title: title of the published paper.
Author: names of the authors.
Year: research articles published between 1999 and 2011.
Subject of investigation: whether the empirical study was industrial or used students as subjects.
Research methodology used in the primary study: experiment, case study, experience report or survey.
Relevant area of the research study: challenges of automated software testing; benefits of automated software testing.
Identified problems/risks/challenges: the AST benefits and challenges stated in the paper.
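A minimal sketch of the per-study extraction record implied by Table 11; the field names paraphrase the table's categories and the example values mirror the SLR 5 row of Table 19:

# Per-study extraction record modeled on Table 11.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    title: str
    authors: List[str]
    year: int                      # 1999-2011
    subjects: str                  # "industrial" or "students"
    methodology: str               # experiment, case study, ...
    benefits: List[str] = field(default_factory=list)
    challenges: List[str] = field(default_factory=list)

record = ExtractionRecord(
    title="Empirical Observations on Software Testing Automation",
    authors=["K. Karhu"],
    year=2009,
    subjects="industrial",
    methodology="case study",
    benefits=["better test coverage", "reduces testing time"],
    challenges=["difficulty in maintenance of test automation"],
)
print(record.methodology, record.benefits)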

3.2.7 S.L.R step 7.) Data Synthesis Strategy

In this step the data is synthesized by collecting and summarizing the results obtained from the primary studies [15]. All the data extracted from the studies is gathered and analyzed to answer the research questions of the SLR. After summarizing the results of the selected studies, they are presented in the form of charts and graphs, and a narrative summary of the benefits and challenges of AST is also given [23]. Table 12 is used to summarize the data from the selected studies.

Table 12: Data synthesis strategy (one row per selected study, SLR 1, SLR 2, SLR 3, SLR 4 and so on, covering 1999-2011; columns: article name, article reference, research methodology, year, and the challenges/benefits related to AST)


4 RESULTS AND ANALYSIS

4.1 SYSTEMATIC MAPPING

The sections below describe the results obtained by performing the systematic mapping, together with an analysis of those results.

4.1.1 Primary studies selection

In order to find the primary studies, the search string was executed in the different databases, taking the formatting requirements of each database into consideration. The search string used for each database and the number of results obtained are given in Table 13.

Table 13: Execution of the search queries on the different databases

IEEE (10,200 hits):
(("software development" OR "software engineering" OR "software" OR "application" OR "program" OR "develop*") AND ("automat*" OR "tool") AND ("test*" OR "verification" OR "validation" OR "quality assurance") AND ("empirical" OR "practical" OR "experiment*" OR "case study" OR "industrial" OR "survey" OR "experience*"))

Inspec (3,668 hits):
(({software development} OR {software engineering} OR {software} OR {application} OR {program} OR {develop*}) AND ({automat*} OR {tool}) AND ({test*} OR {verification} OR {validation} OR {quality assurance}) AND ({empirical} OR {practical} OR {experiment*} OR {case study} OR {industrial} OR {survey} OR {experience*}) wn AB)

Compendex (2,610 hits): same string as Inspec.

Scopus (1,100 hits):
TITLE-ABS-KEY((({software development} OR {software engineering} OR {software} OR {application} OR {program} OR {develop*}) AND ({automat*} OR {tool}) AND ({test*} OR {verification} OR {validation} OR {quality assurance}) AND ({empirical} OR {practical} OR {experiment*} OR {case study} OR {industrial} OR {survey} OR {experience*}))) AND (LIMIT-TO(PUBYEAR, 1999) OR LIMIT-TO(PUBYEAR, 2000) OR ... OR LIMIT-TO(PUBYEAR, 2011)) AND (LIMIT-TO(LANGUAGE, "English"))

ACM (6,259 hits):
(Abstract:(software) and (automat*) and (test*)); (Abstract:(software) and (tool*) and (verification or validation))

SpringerLink (869 hits):
ab:((automat or tool) and (test or "quality assurance") and (software)); ab:((automat or tool) and ("verification" or "validation") and ("software"))

Total number of articles: 24,706

The selection of the primary studies for the systematic mapping is given below. We obtained 24,706 studies in total by executing the search strings in the different databases. 4,786 of these were duplicates and were excluded; further studies were then eliminated based on the titles and the abstracts (see Figure 6). After the exclusion process we obtained 227 articles as primary studies; Figure 6 gives an overview of the selection of the papers for the SM. The results of the systematic mapping are presented per SM research question, together with a brief analysis. In addition, a bubble graph is drawn based on the different contributions in automated software testing (AST).


[Figure 6 shows the selection flow: 24,706 studies retrieved from IEEE, Engineering Village, Springer, ACM and Scopus; duplicate exclusion (4,786 duplicates removed) resulted in 19,920 articles; pre-selection based on titles resulted in 9,456 articles; pre-selection based on abstracts resulted in 1,470 articles (7,986 and 1,236 articles removed in these steps per the figure); the final selection based on the introduction and conclusion of the articles resulted in 227 articles.]

Figure 6: Selection of primary studies for Systematic mapping

SM-RQ1: What types of contributions are presented in the selected studies?

To answer this question, we plotted bubble graphs and tables to categorize the different contributions within AST. The bubble graphs help to find the research gaps with respect to the various contributions within AST. Figure 7 is a bubble graph with the research method used and the publication years on the x-axis and the contributions in AST on the y-axis.
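Such a bubble plot can be reproduced with any plotting library; a minimal matplotlib sketch with illustrative counts (not the thesis data), where the bubble area is proportional to the number of studies:

# Bubble plot: contribution categories on the y-axis, research
# types on the x-axis, bubble area proportional to study counts.

import matplotlib.pyplot as plt

contributions = ["Tool", "Method", "Process", "Model"]
research_types = ["Evaluation", "Validation", "Solution"]
counts = [[23, 24, 18],   # Tool        (illustrative values)
          [17, 17, 11],   # Method
          [ 5,  2,  4],   # Process
          [ 3,  5,  1]]   # Model

xs, ys, sizes = [], [], []
for i, _ in enumerate(contributions):
    for j, _ in enumerate(research_types):
        xs.append(j)
        ys.append(i)
        sizes.append(counts[i][j] * 40)  # scale area for visibility

plt.scatter(xs, ys, s=sizes, alpha=0.5)
plt.xticks(range(len(research_types)), research_types)
plt.yticks(range(len(contributions)), contributions)
plt.title("Contributions vs. research type (bubble area = no. of studies)")
plt.tight_layout()
plt.show()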


Figure 7 clearly shows that most AST studies are concerned with tools, followed by process/approach and method, whereas there are not many studies focusing on recommendation/practice, framework and model. Regarding research types, more studies come from validation research (33.93%), evaluation research (24.23%) and solution research (17.18%), with far fewer philosophical papers, opinion reports and experience papers. Since automated software testing is a mature area, validation, evaluation and solution research dominate. Regarding the publication years, the highest number of studies is recorded in 2009 with 35 articles, and the lowest in 2000 and 2011 with 3 and 4 articles respectively. Research gaps can be seen in the years 2000, 2004 and 2011 with respect to the various contributions in AST. Regarding research type, gaps are observed for experience papers, and few studies are recorded for opinion reports and philosophical research.

[Figure 7: bubble plot with the contributions (process, framework, tool, method, model, recommendation/practice) on the y-axis and the publication years 1999-2011 and research types (evaluation, validation, solution, philosophical, opinion report, experience paper) on the x-axis; bubble sizes give the number of studies per combination.]

Figure 7: Overall Bubble plot for AST studies


SM-RQ1.1: What are the different aspects of AST within the selected studies?

In total we classified four different aspects of AST within the selected studies: purpose of automation, technology used, testing levels and research type. Figure 8 gives an overview of this classification. In the following paragraphs we explain the different aspects of AST with the help of graphs and tables.

[Figure 8: classification of AST into four aspects: testing levels (unit, system, regression, performance, functional, integration and acceptance testing, plus ALL), technology used (21 sub-aspects, mostly programming languages), research type (evaluation research, validation research, solution research, philosophical research, opinion report, experience paper) and purpose of automation (TCS, TERC, REQA and their combinations).]

Figure 8: Classification of AST

Purpose of automation: For this aspect we used Torkar's [3] paper to categorize the sub-aspects of the purpose of automation:

a) CATEGORY A: Test generation and selection (TCS). This category has two subareas, automated test data generation and selection and automated test case generation and selection. Test data generation and selection covers the studies containing techniques and approaches to generate and manipulate test data automatically; test case generation and selection covers the studies containing techniques and approaches to generate test cases automatically.


b) CATEGORY B: Test execution and result collection (TERC). The papers that concentrate on the execution of test cases (test data) and the collection of test results are considered in this category. Here we come across tools, techniques and frameworks related to the execution of test cases.

c) CATEGORY C: Result evaluation and test quality analysis (REQA). The papers that concentrate on the evaluation of test results and the analysis of test data quality are considered in this category. Here we come across tools and techniques for examining test quality with respect to test coverage, software test data adequacy criteria and fault detection techniques.

d) CATEGORY D: the combination of categories A, B and C (TCS, TERC and REQA).

e) CATEGORY E: the combination of categories A and B (TCS and TERC).

f) CATEGORY F: the combination of categories B and C (TERC and REQA).

From Table 14 we can conclude that the majority of the papers in this aspect are related to test generation and selection (TCS) with 102 papers, accounting for nearly half of the papers in this aspect. Result evaluation and test quality analysis (REQA) follows with 46 papers, then CATEGORY D (TCS, TERC and REQA) with 34 papers and CATEGORY E (TCS and TERC) with 25 papers, with very few papers in CATEGORIES B and F. The distribution of papers shows the researchers' interest in the test generation and selection aspect.

Table 14: Categorization of studies based on purpose of automation

CATEGORY A: Test generation and selection (TCS), 102 papers:
SM2, SM4, SM5, SM6, SM7, SM11, SM13, SM16, SM18, SM26, SM28, SM37, SM40, SM41, SM45, SM46, SM48, SM50, SM51, SM52, SM53, SM54, SM56, SM59, SM60, SM61, SM62, SM63, SM64, SM65, SM66, SM68, SM71, SM74, SM77, SM78, SM79, SM81, SM84, SM85, SM87, SM89, SM90, SM93, SM94, SM95, SM96, SM99, SM100, SM101, SM222, SM224, SM225, SM227, SM105, SM110, SM111, SM115, SM117, SM118, SM119, SM121, SM122, SM126, SM128, SM129, SM141, SM142, SM144, SM147, SM148, SM149, SM154, SM155, SM161, SM162, SM164, SM166, SM171, SM172, SM173, SM185, SM189, SM190, SM191, SM193, SM194, SM195, SM197, SM198, SM199, SM201, SM204, SM210, SM211, SM212, SM213, SM214, SM215, SM216, SM219, SM220.

CATEGORY B: Test execution and result collection (TERC), 17 papers:
SM15, SM21, SM25, SM27, SM35, SM36, SM39, SM75, SM102, SM103, SM106, SM139, SM146, SM153, SM187, SM205, SM208.

CATEGORY C: Result evaluation and test quality analysis (REQA), 46 papers:
SM1, SM8, SM12, SM14, SM22, SM32, SM42, SM43, SM47, SM55, SM57, SM70, SM80, SM83, SM86, SM88, SM98, SM104, SM107, SM123, SM127, SM131, SM132, SM133, SM134, SM136, SM137, SM143, SM145, SM160, SM176, SM177, SM178, SM179, SM180, SM181, SM183, SM196, SM203, SM206, SM207, SM209, SM218, SM221, SM223, SM226.

CATEGORY D: TCS, TERC and REQA, 34 papers:
SM3, SM10, SM23, SM24, SM30, SM31, SM33, SM38, SM44, SM58, SM82, SM91, SM92, SM97, SM109, SM121, SM124, SM130, SM135, SM140, SM151, SM152, SM158, SM159, SM168, SM169, SM174, SM175, SM186, SM187, SM200, SM202, SM217.

CATEGORY E: TCS and TERC, 25 papers:
SM9, SM17, SM19, SM20, SM49, SM67, SM72, SM73, SM76, SM108, SM112, SM113, SM114, SM116, SM120, SM125, SM138, SM150, SM156, SM163, SM165, SM170, SM184, SM188, SM192.

CATEGORY F: TERC and REQA, 3 papers:
SM29, SM34, SM157.

Testing levels: Among the 227 studies selected for the SM, we found eight categories for the testing-level aspect: unit testing, system testing, regression testing, performance testing, functional testing, integration testing, acceptance testing, and the category 'ALL' for papers that relate to all testing types. The majority of the papers address the unit and system testing levels: 99 articles relate to unit testing and 54 to system testing, with very few studies on acceptance and integration testing. Considerable numbers of papers are recorded for the functional, performance and regression testing levels. Figure 9 clearly shows that unit testing dominates, and we can conclude that most researchers are more interested in the unit testing level than in the other levels; there is a need to fill this gap with more studies on the other testing levels. A table giving details about the papers per testing level is presented in APPENDIX F.


[Figure 9 is a bar chart of the number of research papers per testing level: unit testing 99, system level 54, regression testing 13, performance testing 14, functional testing 21, integration testing 7, acceptance testing 2, ALL 11.]

Figure 9: Frequency of papers based on the testing levels

Technology used: This aspect covers the different languages, platforms and interfaces used in the selected AST papers. We identified 21 sub-aspects in this area, each a language or interface used to automate the testing process: Java, C, C++, scripting languages, UML and TTCN, with a few articles on Ada, Unix, SQL, XML, Smalltalk, .NET, Fortran, COBOL, Sulu, Petri net, Perl, Lustre, LOTOS, Python and IF language. Table 15 gives an overview of these 21 sub-aspects. Java dominates with 75 papers, followed by the C programming language with 36 papers. A considerable number of papers use C++, scripting languages and UML, and very few papers are recorded for the remaining 16 sub-aspects.


Table 15: Categorization of studies based on the aspect technology used

Java (75): SM2, SM3, SM5, SM7, SM9, SM11, SM14, SM15, SM17, SM22, SM23, SM26, SM27, SM29, SM30, SM33, SM41, SM44, SM45, SM47, SM48, SM49, SM51, SM54, SM55, SM57, SM60, SM62, SM64, SM66, SM76, SM82, SM83, SM84, SM85, SM86, SM87, SM89, SM97, SM98, SM99, SM106, SM108, SM112, SM113, SM115, SM120, SM125, SM126, SM127, SM128, SM129, SM137, SM142, SM146, SM158, SM161, SM167, SM173, SM180, SM188, SM189, SM191, SM192, SM193, SM194, SM196, SM197, SM198, SM201, SM202, SM211, SM218, SM224, SM226
Ada (2): SM6, SM181
TTCN (16): SM12, SM20, SM59, SM74, SM78, SM92, SM94, SM136, SM140, SM155, SM178, SM179, SM182, SM183, SM187, SM208
C (36): SM13, SM16, SM18, SM19, SM34, SM46, SM61, SM68, SM73, SM79, SM101, SM102, SM103, SM105, SM107, SM109, SM122, SM124, SM130, SM133, SM139, SM143, SM149, SM157, SM160, SM165, SM166, SM172, SM174, SM177, SM184, SM213, SM215, SM217, SM227
C++ (21): SM8, SM21, SM24, SM28, SM36, SM40, SM77, SM104, SM145, SM152, SM175, SM186, SM190, SM199, SM200, SM203, SM205, SM207, SM214, SM216, SM219
Unix (3): SM25, SM31, SM135
Scripting (21): SM32, SM35, SM38, SM39, SM42, SM43, SM53, SM56, SM69, SM72, SM75, SM81, SM91, SM93, SM131, SM132, SM159, SM168, SM176, SM209, SM221
UML (22): SM4, SM10, SM37, SM52, SM58, SM65, SM95, SM110, SM114, SM121, SM134, SM135, SM138, SM144, SM147, SM154, SM162, SM185, SM195, SM210, SM223, SM225
SQL (3): SM71, SM150, SM171
XML (9): SM1, SM50, SM100, SM111, SM116, SM119, SM212, SM220, SM222
Smalltalk (1): SM151
.Net (2): SM148, SM163
Fortran (2): SM67, SM164
COBOL (1): SM70
Sulu (1): SM88
Petri net (1): SM156
Perl (3): SM90, SM153, SM169
Lustre (5): SM63, SM80, SM96, SM170, SM204
LOTOS (1): SM206
Python (1): SM123
IF language (1): SM141


Research types: This categorization of the studies is based on six research types, inspired by the studies [21, 25]. As Table 16 shows, the majority of the studies are validation research, followed by evaluation research; a considerable number of studies are solution research and philosophical research, with very few opinion reports.

Table 16: Categorization of studies based on research type

Evaluation research (55): SM1, SM2, SM5, SM12, SM20, SM32, SM33, SM35, SM39, SM62, SM69, SM70, SM74, SM75, SM90, SM100, SM101, SM108, SM114, SM115, SM116, SM117, SM136, SM145, SM148, SM149, SM150, SM155, SM157, SM160, SM178, SM179, SM181, SM182, SM193, SM195, SM198, SM199, SM205, SM206, SM207, SM210, SM211, SM217, SM220, SM223, SM224, SM227, SM76, SM78, SM107, SM124, SM143, SM177, SM215
Validation research (77): SM3, SM4, SM6, SM7, SM8, SM9, SM10, SM14, SM15, SM17, SM18, SM19, SM21, SM22, SM23, SM24, SM25, SM26, SM28, SM29, SM30, SM31, SM34, SM36, SM38, SM42, SM43, SM44, SM45, SM49, SM50, SM56, SM57, SM60, SM61, SM64, SM66, SM67, SM68, SM72, SM82, SM85, SM86, SM87, SM88, SM91, SM96, SM105, SM109, SM111, SM112, SM118, SM121, SM123, SM127, SM128, SM129, SM130, SM134, SM137, SM140, SM141, SM142, SM144, SM147, SM152, SM154, SM156, SM158, SM161, SM162, SM164, SM166, SM168, SM169, SM172, SM173, SM186, SM189, SM197
Solution research (39): SM27, SM37, SM40, SM41, SM46, SM51, SM52, SM53, SM55, SM58, SM59, SM92, SM104, SM110, SM120, SM122, SM132, SM133, SM139, SM142, SM146, SM153, SM159, SM167, SM179, SM183, SM184, SM185, SM191, SM202, SM209, SM213, SM214, SM216, SM218, SM222, SM226, SM77, SM89
Philosophical research (23): SM11, SM81, SM84, SM95, SM98, SM99, SM102, SM103, SM106, SM112, SM119, SM125, SM131, SM163, SM165, SM175, SM187, SM194, SM196, SM201, SM204, SM208, SM212
Opinion report (16): SM13, SM47, SM54, SM63, SM71, SM73, SM113, SM138, SM190, SM192, SM200, SM203, SM225, SM79, SM83, SM215
Experience papers (17): SM09, SM16, SM48, SM65, SM97, SM116, SM126, SM135, SM151, SM174, SM176, SM188, SM219, SM221, SM93, SM94, SM80

4.1.2 SM-RQ1.2: What types of studies with respect to technology (programming language/platform/interface) are discussed in the selected studies?

In this aspect we focused mainly on the languages and interfaces used. In the 227 studies selected for the SM we found 21 different languages and interfaces, such as C, C++, Java, Ada and Fortran. Figure 10 clearly shows that Java dominates with a large number of articles, followed by C, C++, scripting languages, UML and TTCN; a few articles use Ada, Unix, SQL, XML, Smalltalk, .NET, Fortran, COBOL, Sulu, Petri net, Perl, Lustre, LOTOS and Python. This categorization helped us to find the research gaps in this aspect, in the form of a lack of studies in several of these sub-aspects, while the spread of papers over 21 different languages and interfaces shows the maturity of the AST area.

Figure 10: Categorization of articles based on the technology

4.1.3 SM-RQ1.3: What is the frequency of the selected studies over time?

Figure 11 shows the frequency of the studies over the period 1999-2011. The graph can be used to read off the number of articles per year. Most of the studies related to AST appear in 2007, 2008, 2009 and 2010, and very few in 1999-2004 and 2011. The highest number of studies is recorded in 2009 with 35 articles, and the lowest in 2000 and 2011 with 3 and 4 articles respectively. From Figure 11 we can observe that the number of AST studies increased year by year, except for the drop in 2011; since 2011 was not yet over and we selected studies only until July, the count for that year may still increase.



[Figure 11 is a bar chart of the number of articles per year: 1999: 13, 2000: 3, 2001: 10, 2002: 7, 2003: 9, 2004: 11, 2005: 19, 2006: 27, 2007: 30, 2008: 28, 2009: 35, 2010: 31, 2011: 4.]

Figure 11: Frequency of the selected studies over the years 1999-2011

4.1.4 SM-RQ1.4: Which research types are used in the selected studies?

The categorization of the research types and their frequency over time are presented in Figure 12. The results show that the majority of the studies are validation research and evaluation research, which indicates the maturity of the area: most researchers have focused on validating different AST approaches, tools, methods and techniques, which allows further improvements in the field. A considerable number of papers are philosophical and solution research, which are very useful for the AST area, and very few studies are experience or opinion reports. This distribution shows the need for more philosophical and opinion work, conducted with rigorous research methods, to establish a good foundation in the area of AST. More details about the distribution of the studies are presented in APPENDIX E.



[Figure 12: bubble plot of the research types (experience paper, opinion report, philosophical research, solution research, validation research, evaluation research) over the years 1999-2011; bubble sizes give the number of studies per year and research type.]

Figure 12: Categorization of AST studies based on the Research types


4.2 Systematic Literature Review

[Figure 13 shows the SLR steps applied to the studies: definition of the research questions, definition of the search strategy, study selection criteria and procedure, study quality assessment, data extraction strategy and data synthesis strategy.]

Figure 13: Description of Systematic literature review


In this chapter we present the results and analysis of the 26 papers finally selected for the SLR. After performing the SLR, it became clear that there are very few articles on the challenges and benefits of AST. While performing the SLR it was difficult to analyze the context of each article in order to extract the benefits and challenges; AST has many aspects, which makes it hard to pin down its real benefits. The results and analysis are documented in the following order: first the types of research studies selected for the SLR are presented, with a focus on the publication years; then the reported benefits and challenges of AST are presented, with a detailed analysis of the different types of challenges and benefits.

4.2.1 SLR results overview

As shown in Figure 14, a total of 26 articles were selected for the SLR. Figure 14 shows the categorization of the articles based on the type of research methodology: 11 articles are experiments, 7 are case studies and 8 are experience studies.

Figure 14: Pie chart showing the number of articles selected for conducting SLR
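The pie chart can be reproduced directly from the methodology counts reported in Table 17; a minimal matplotlib sketch:

# Pie chart of the research methodologies of the 26 SLR studies.

import matplotlib.pyplot as plt

labels = ["Experiment", "Case Study", "Experience"]
counts = [11, 7, 8]

plt.pie(counts, labels=labels, autopct="%1.0f%%", startangle=90)
plt.title("Research methodology of the 26 SLR studies")
plt.axis("equal")  # draw the pie as a circle
plt.show()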

Table 17: Research methodology used and the number of articles

Experiment (11): SLR 25, SLR 13, SLR 9, SLR 8, SLR 22, SLR 19, SLR 17, SLR 16, SLR 4, SLR 3, SLR 2
Case study (7): SLR 20, SLR 26, SLR 24, SLR 18, SLR 14, SLR 10, SLR 7, SLR 5
Experience (8): SLR 26, SLR 23, SLR 21, SLR 15, SLR 12, SLR 11, SLR 6, SLR 1

Table 17 clearly shows that the majority of the studies are experiments, followed by experience studies and case studies. Table 18 shows the frequency of the articles selected for the SLR over the period 1999-2011. The table shows that the studies are fairly evenly spread over the years, while 2001 reports no studies related to the benefits and challenges of AST.

Table 18: Frequency of SLR studies over the period of time

1999: 4, 2000: 1, 2001: 0, 2002: 2, 2003: 1, 2004: 2, 2005: 1, 2006: 2, 2007: 2, 2008: 5, 2009: 4, 2010: 2

4.2.2 Data analysis of the SLR

The data analysis used for the SLR is based on thematic analysis combined with narrative summaries. Given the nature of the SLR in our thesis, thematic analysis combined with narrative summaries lets us report the results qualitatively for each SLR study [69]; our main motivation for this form of analysis comes from the study [69]. The data was extracted from each of the 227 primary studies of the SM using the predefined data extraction form we developed, which captured the full details of the studies relevant to the SLR. We gathered all the data obtained through the extraction form, individually copied the introduction, aim, conclusion and discussion of each paper selected for review into a spreadsheet, and then reviewed the data of each article; any article containing information related to the benefits and challenges of AST was considered. However, we found that some articles lack details on the benefits and challenges of AST. Finally, the studies with qualitative descriptions were narratively summarized, giving a clear explanation of each benefit and challenge extracted in the SLR. During the synthesis we found it difficult to understand some reported benefits and challenges, because the studies often lack a good description of their findings. The two researchers involved in the SLR process used the data extraction method and developed different codes to categorize the articles. Table 19 shows the data synthesis we used to extract the data from the SLR studies.
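The tallying step behind the benefit and challenge tables can be sketched as counting the codes across the studies; the two example entries below mirror rows of Table 19, while the full coding covered all 26 studies:

# Count how often each coded benefit [B] or challenge [C]
# appears across the SLR studies.

from collections import Counter

codes_per_study = {
    "SLR 5":  ["B:higher product quality", "B:better test coverage",
               "B:reduces testing time", "B:reusability of tests",
               "C:maintenance of test automation"],
    "SLR 11": ["C:false expectations",
               "C:maintenance of test automation",
               "C:inappropriate test automation strategy"],
}

tally = Counter(code for codes in codes_per_study.values() for code in codes)
for code, n in tally.most_common():
    print(n, code)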


Table 19: SLR data synthesis

SLR 1: M. Fewster and D. Graham, "Software Test Automation: Effective Use of Test Execution Tools", 1999. Case study, 1999. [C] Failure to achieve expected goals.

SLR 2: Choi, K.C. and G.H. Lee, "Automatic test approach of web application for security", ICCSA 2006: International Conference on Computational Science and Its Applications, Glasgow, United Kingdom, Springer Verlag, 2006. Experiment, 2006. [C] Low human involvement.

SLR 3: Tom Wissink and Carlos Amaro, "Successful Test Automation for Software Maintenance", 22nd IEEE International Conference on Software Maintenance (ICSM), pp. 265-266. Experiment, 2006. [B] Reduces testing time.

SLR 4: Burnim, J. and K. Sen, "Heuristics for Scalable Dynamic Test Generation", Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering, IEEE Computer Society, pp. 443-446, 2008. Experiment, 2008. [B] Better test coverage.

SLR 5: K. Karhu et al., "Empirical Observations on Software Testing Automation", Lappeenranta University of Technology, International Conference on Software Testing Verification and Validation, 2009. Case study, 2009. [B] Higher product quality; [B] Better test coverage; [B] Reduces testing time; [B] Reusability of tests; [C] Difficulty in maintenance of test automation.

SLR 6: C. Persson and N. Yilmaztürk, "Establishment of Automated Regression Testing at ABB: Industrial Experience Report on 'Avoiding the Pitfalls'", 19th International Conference on Automated Software Engineering (ASE'04), IEEE Computer Society, 2004. Experience study, 2004. [C] Failure to achieve expected goals.

SLR 7: Lijun, S. and Z. Hong, "Generating structurally complex test cases by data mutation: a case study of testing an automated modeling tool", Computer Journal, 52, pp. 571-588, 2009. Case study, 2009. [B] Reduction of cost.

SLR 8: Alshraideh, M., "A complete automation of unit testing for JavaScript programs", 2008. Experiment, 2008. [B] Reduction of cost.

SLR 9: Roy Patrick Tan and Stephen Edwards, "Evaluating Automated Unit Testing in Sulu", 2008. Experiment, 2008. [B] Better test coverage.

SLR 10: Bashir, M. F. and S. H. K. Banuri, "Automated model based software test data generation system", 2008. Case study, 2008. [C] The process of test automation needs more time to mature.

SLR 11: Berner, S., R. Weber, et al., "Observations and lessons learned from automated testing", 2005. Experience and case study, 2005. [C] False expectations; [C] Difficulty in maintenance of test automation; [C] Inappropriate test automation strategy; [C] Less human effort.

SLR 12: Coelho, R., E. Cirilo, et al., "JAT: a test automation framework for multi-agent systems", 2007. Experience, 2007. [B] Increase in confidence; [B] High fault detection.

SLR 13: Dallal, J. A., "Automation of object-oriented framework application testing", 2009. Experiment, 2009. [B] Reusability of the tests.

SLR 14: Dan, H., Z. Lu, et al., "Test-data generation guided by static defect detection", 2009. Case study, 2009. [B] High fault detection.

SLR 15: Fecko, M. A. and C. M. Lott, "Lessons learned from automating tests for an operations support system", 2002. Experience, 2002. [B] High fault detection; [C] Lack of skilled people for test automation.

SLR 16: Kansomkeat, S. and W. Rivepiboon, "Automated-generating test case using UML state chart diagrams", 2003. Experiment, 2003. [B] High fault detection.

SLR 17: Leitner, A., H. Ciupa, et al., "Reconciling manual and automated testing: The AutoTest experience", 2007. Experiment, 2007. [C] Less human effort.

SLR 18: Liu, C., "Platform-independent and tool-neutral test descriptions for automated software testing", 2000. Case study, 2000. [C] Difficulty in maintenance of test automation.

SLR 19: Malekzadeh, M. and R. N. Ainon, "An automatic test case generator for testing safety-critical software systems", 2010. Experiment and case study, 2010. [B] Higher product quality.

SLR 20: M. Fewster and D. Graham, "Software Test Automation: Effective Use of Test Execution Tools", 1999. Case study, 1999. [C] Inappropriate test automation strategy; [C] Difficulty in maintenance of test automation; [C] False expectations; [B] Reusability of the tests; [B] Reliability; [B] Reduces testing time.

SLR 21: B. Pettichord, "Seven Steps to Test Automation Success", STAR West, San Jose, NV, USA, November 1999. Experience, 1999. [C] False expectations.

SLR 22: Bousquet, L. d. and N. Zuanon, "An Overview of Lutess: A Specification-Based Tool for Testing Synchronous Software", 1999. Experiment and case study, 1999. [B] Less human effort; [B] Reduces testing time.

SLR 23: P. Pocatilu, "Automated Software Testing Process", Economic Informatics Department, Academy of Economic Studies, Bucharest, 2002. Experience, 2002. [B] Less human effort.

SLR 24: Børge Haugset and Geir Kjetil Hanssen, "Automated Acceptance Testing: a Literature Review and an Industrial Case Study", 2008. Case study, 2008. [B] Increase in confidence.

SLR 25: Saglietti, F. and F. Pinte, "Automated unit and integration testing for component-based software systems", Proceedings of the International Workshop on Security and Dependability for Resource Constrained Embedded Systems, ACM, Vienna, Austria, pp. 1-6, 2010. Experiment, 2010. [B] High fault detection.

SLR 26: Rice, Randall, "Surviving the Top 10 Challenges of Software Test Automation", Chicago Quality Assurance Association, May 18, 2004. Experience and case study, 2004. [C] Lack of skilled people for test automation.

4.2.2.1 SLR-RQ1: What are the reported benefits of automated software testing?

Table 20: SLR studies related to benefits of Automated Software Testing

BENEFIT | ARTICLE NUMBER
Higher product quality | SLR 5, SLR 19
Better test coverage | SLR 4, SLR 5, SLR 8, SLR 9, SLR 19, SLR 25
Reduces testing time | SLR 3, SLR 5, SLR 20, SLR 22
Reliability | SLR 20
Increase in confidence | SLR 24, SLR 20
Reusability of tests | SLR 5, SLR 13
Less human effort | SLR 11, SLR 13, SLR 17, SLR 22, SLR 23
Reduction of cost | SLR 8, SLR 13, SLR 7
Fault detection | SLR 12, SLR 14, SLR 15, SLR 16, SLR 25, SLR 7

Benefits:

Reduces testing time:

An important benefit of AST is its ability to run tests faster, reducing overall testing time [2]. This benefit has been cited in [43, 12, 2, 44]. Tom et al. [43] showed that the keyword-based approach to AST is far better than the capture/playback approach: the keyword-based approach requires far fewer programming skills and less maintenance of test scripts, which automatically reduces time and cost. K. Karhu et al. [12] performed case studies, and one of their observations is that the main benefits of AST include improvement of quality through better test coverage and the ability to do more testing in less time. According to Fewster et al. [2], certain tasks can be done more efficiently with AST than with manual testing; they also list many benefits, such as running existing (regression) tests and running more tests more often.
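To make the keyword-based approach concrete, the following is a minimal sketch in Python; the keywords, the implementation functions and the login scenario are invented for illustration and are not taken from [43]:

    # Minimal keyword-driven test runner (illustrative sketch).
    # Test steps are (keyword, arguments) tuples; the runner maps each
    # keyword to an implementation function, so test authors need little
    # programming skill and maintenance is localized in one table.

    def open_login_page(ctx):
        ctx["page"] = "login"              # stand-in for driving a real UI

    def enter_credentials(ctx, user, password):
        ctx["user"], ctx["password"] = user, password

    def verify_logged_in(ctx):
        assert ctx.get("user") == "alice", "login failed"

    KEYWORDS = {
        "Open Login Page": open_login_page,
        "Enter Credentials": enter_credentials,
        "Verify Logged In": verify_logged_in,
    }

    def run(steps):
        ctx = {}                           # shared state between steps
        for keyword, *args in steps:
            KEYWORDS[keyword](ctx, *args)

    run([("Open Login Page",),
         ("Enter Credentials", "alice", "secret"),
         ("Verify Logged In",)])

Adding or changing a test step then means editing a data table rather than a script, which is where the reported time and maintenance savings come from.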

Higher product quality:

Software testing is the process of assessing the correctness, completeness, security and quality of the developed software [40]. AST applies the same process to the software under test, but automates parts of it. K. Karhu et al. [12] and Malekzadeh et al. [45] showed that higher product quality can be achieved through AST. K. Karhu et al. [12] performed case studies and observed that the quality of the product can be improved through better test coverage. Malekzadeh et al. [45] developed an easy-to-use automated test case generator tool (ATCG), aimed especially at safety-critical software systems, which is capable of producing high-quality test cases while discarding redundant ones.


Better test coverage:

Test coverage measures how well test cases cover the program according to some fixed coverage criterion, such as statement coverage, branch coverage or path coverage [42]. Burnim et al. [46] presented search heuristic strategies for the program under test and implemented them in the open-source test tool CREST; the strategies achieved considerable results with greater test coverage. Patrick et al. [47] used the Sulu language, which facilitates the integration of automated unit testing tools for programs written in it. Patrick et al. [47] also conducted an experiment using a test case generation algorithm for a family of test suites; the results showed high code coverage, including 90% statement coverage, and high mutation coverage for the most comprehensive test suite generated.
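As a small worked example of the difference between two of the criteria named above (the function and the inputs are invented for illustration), a single input can execute every statement of a function while still leaving a branch outcome untested:

    # f(2) executes every statement below (100% statement coverage) but
    # exercises only the True outcome of the 'if', so branch coverage is
    # 50%; a second input such as f(-2) covers the missing False branch.

    def f(x):
        result = x
        if x > 0:
            result = x * 2
        return result

    assert f(2) == 4       # statement coverage complete, branches 50%
    assert f(-2) == -2     # adds the remaining branch outcome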

Reliability:

According to Fewster et al. [2], manual testing is more flexible: based on the changes made, the tester can vary the comparisons each time the tests are performed. Automation, by contrast, only compares the actual outcome to the expected outcome, which may itself not be the correct outcome; nevertheless, automated comparison is more reliable and faster than manual comparison.

Increase in Confidence:

An important benefit of AST is the ability to run more tests in less time, which automatically allows them to be run more often; this leads to building confidence in the system [2]. This benefit is cited in [48, 49, 2]. Haugset et al. [49] presented an industrial case study on the use of Automated Acceptance Testing (AAT) to show that some of its proposed benefits are realistic but need some improvement before their full value is obtained. The authors also observed that automated acceptance testing helps to increase the level of confidence among the developers.

High Fault detection:

This benefit is cited in [49, 48, 50, 51, 52, 53]. Lijun, S. et al. [49] presented an approach called data mutation, inspired by mutation testing, for generating a large number of test data from a few seed test cases. Using this approach, the authors conducted a case study and an experiment, the results of which demonstrate that the proposed approach is cost effective and able to detect a large portion of faults. Coelho et al. [48] proposed a framework called JAT for building and running tests for Multi-Agent Systems (MAS); they applied JAT to three different MAS and observed that it can be used to build test scenarios that achieve high fault detection. Dan et al. [51] proposed a test-data generation technique guided by static defect detection and performed a case study to evaluate the effectiveness of the approach. They also compared it with another test generation tool, JUnit Factory; the results show that the proposed approach generates fewer test data than JUnit Factory while retaining a high fault-revealing ability. Fecko et al. [52] presented experience gained in automating tests for an operations support system, together with observations about the benefits and pitfalls of test automation and recommendations for maximizing return on investment. According to Fecko et al. [52], the investment in GUI-based facilities leads to test scripts that are reusable, maintainable and portable, which can improve customer satisfaction by detecting more defects than is possible through manual testing. Kansomkeat et al. [53] proposed an automated testing technique that generates test cases from UML statechart diagrams created with the Rational Rose tool; the test cases are evaluated based on their fault detection capability. The authors conducted sample experiments, and the results show high effectiveness of the generated test cases, which means high fault detection. Saglietti et al. [54] proposed automated techniques supporting the verification and validation of component-based software systems by optimizing test case generation for both unit and integration testing; an experiment conducted with these techniques produced results of significant value, supporting high fault detection and better test coverage.
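The core idea behind statechart-based generation, as in [53], can be sketched roughly as follows; the state machine, its events and the transition-coverage criterion used here are simplified assumptions for illustration, not the technique's actual algorithm:

    from collections import deque

    # A toy state machine: state -> list of (event, next_state) pairs.
    TRANSITIONS = {
        "Idle": [("insert_card", "Auth")],
        "Auth": [("valid_pin", "Menu"), ("invalid_pin", "Idle")],
        "Menu": [("withdraw", "Idle")],
    }

    def path_to(start, target):
        """Shortest event sequence from start to target (breadth-first)."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == target:
                return path
            for event, nxt in TRANSITIONS.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [event]))
        return None

    def generate_tests(start="Idle"):
        """One test per transition: reach its source state, fire its event."""
        tests = []
        for state, edges in TRANSITIONS.items():
            for event, _ in edges:
                prefix = path_to(start, state)
                if prefix is not None:
                    tests.append(prefix + [event])
        return tests

    for test in generate_tests():
        print(" -> ".join(test))

Each printed event sequence is one abstract test case; together they cover every transition of the model at least once, and fault detection then depends on checking the implementation against the expected target states.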

Reusability of the tests:

An effective test strategy specifies the way in which test design is approached. When tests are designed for maintainability and reusability, defects can be identified repeatedly [13]. K. Karhu et al. [12] conducted a case study and presented observations on software test automation; one of their important observations concerns reusability in test automation. The authors observed that high reusability facilitates, and low reusability hinders, test automation. In their case study, the test manager said: "It is always expensive to set up an automated system. The price may be tenfold compared to one test. ... But later, if there is more repetition, the unit cost per test decreases quite significantly at some point."
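The test manager's remark translates into a simple break-even calculation; the figures below are invented purely to illustrate the "tenfold" setup cost being amortized by repetition:

    # Illustrative cost model: automation setup is 'tenfold' one manual run.
    manual_cost_per_run = 1.0       # one manual execution of the test suite
    automation_setup = 10.0         # one-off cost of setting up automation
    automated_cost_per_run = 0.1    # assumed small per-run execution cost

    break_even = automation_setup / (manual_cost_per_run - automated_cost_per_run)
    print(round(break_even, 1))     # about 11.1 repetitions

    for runs in (1, 5, 12, 50):
        manual = runs * manual_cost_per_run
        automated = automation_setup + runs * automated_cost_per_run
        print(runs, "runs: manual =", manual, "automated =", round(automated, 1))

Under these assumed figures, automation is more expensive for roughly the first eleven repetitions and cheaper from then on, matching the observation that the unit cost per test decreases with repetition.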

Less Human Effort:

Leitner et al. [55] argue that automated and manual approaches are complementary: if both are used, the full benefit of both can be obtained. They proposed a tool called AutoTest that makes it possible to combine the benefits of both approaches. Leitner et al. [55] also presented the differences between the two approaches, and in one of their observations concluded that automated testing requires less effort on the developer's side, but cannot fully replace manual unit testing, because developers are good at setting up complex input data. Bousquet et al. [44] proposed Lutess, an automated testing tool, and illustrated its use with a practical example. They showed that a great deal of human effort can be saved by using Lutess, and observed that this effort can then be spent on defect prevention tasks rather than on the classical testing chores (profitably selecting the data, determining the validity of results). Pocatilu [55] discussed the automation of software testing, the different types of tools used for automated software testing, and the costs implied by AST. He notes that manual testing involves a lot of effort, measured in person-months; using automated testing with specific tools, this effort can be dramatically reduced and the costs related to testing can decrease.

Reduction of Cost:

Alshraideh [56] presented an automatic test data generation tool that aims to completely automate the unit testing of JavaScript functions. The author conducted an experiment using the proposed tool and obtained better branch coverage. By enabling complete automation, the tool reduces the cost of software testing.
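A toy version of such a test data generator can be sketched as follows (in Python rather than JavaScript); the function under test, the input range and the search budget are assumptions made up for this example:

    import random

    def classify(x):                # toy function under test
        if x < 0:
            return "negative"
        if x == 0:
            return "zero"
        return "positive"

    def generate_test_data(budget=1000):
        """Randomly search for inputs until every branch outcome is hit."""
        needed = {"negative", "zero", "positive"}
        chosen = {}
        for _ in range(budget):
            x = random.randint(-10, 10)
            outcome = classify(x)
            if outcome in needed:
                needed.discard(outcome)
                chosen[outcome] = x     # keep one input per covered branch
            if not needed:
                break
        return chosen

    print(generate_test_data())        # e.g. {'negative': -3, 'zero': 0, ...}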


4.2.2.2 SLR-RQ2: What are the reported challenges of automated software testing?

Table 21: SLR studies related to challenges of Automated Software Testing

CHALLENGE | ARTICLE NUMBER
Automated software testing cannot fully replace manual testing | SLR 5, SLR 11
Failure to achieve expected goals | SLR 1, SLR 6
Difficulty in maintenance of test automation | SLR 5, SLR 11, SLR 18, SLR 20
Process of test automation needs more time to mature | SLR 10
False expectations | SLR 11, SLR 20, SLR 21
Inappropriate test automation strategy | SLR 11, SLR 20
Lack of skilled people for test automation tools | SLR 15

Automated software testing cannot fully replace manual testing:

K. Karhu et al. [12] performed case studies and observed that there is a need for human involvement: some testing tasks are difficult to automate, especially when they require extensive domain knowledge. Berner et al. [9] presented observations and lessons learned from five sample projects; they observed that automated software testing cannot fully replace manual testing, extensively in three of the projects and partially in the other two.

Failure to achieve expected goals:

Fewster et al. [2] presented their experiences regarding the challenges of AST. According to them, the promise of performing tests in a fraction of the time has led organizations to adopt AST. They observed that many organizations have attempted test automation without a clear understanding of all that is involved; consequently, many attempts have failed to achieve real or lasting benefits. Persson et al. [67] observed that it is not uncommon to come across real-life stories about failed test automation attempts in the literature; in fact, it is often cited that failure rates for test automation projects are as high as, if not higher than, those of other software development projects.

Difficulty in maintenance of test automation:

K. Karhu et al. [12] performed case studies and observed that, due to constant changes in technology, the changes must be recorded, which makes maintenance of test automation difficult. Berner et al. [9] presented observations and lessons learned from five sample projects: in three of the five they observed that maintenance of test automation is very hard, while in the other two they did not observe this. Fewster et al. [2] listed the following challenges in the automation of testing:


- Failure to achieve expected goals.
- Poor testing practice.
- False expectations.
- False sense of security.
- Maintenance of automated tests.
- Technical problems.

Process of test automation needs more time to mature:

Bashir et al. [57] surveyed code-based test data generation techniques and their weaknesses, and also performed a case study. In the case study, the authors observed that automating the testing process can reduce its time and cost, but that the process of automating test data generation needs more time to mature.

False expectations:

Berner et al. [9] collected observations and lessons learned from five sample projects. They observed that many organizations have impractical expectations of AST: since AST is intended to save as much cost as possible, especially on 'unproductive' testing activities, organizations expect a return on investment from their very first automation effort, and if they do not get it, they stop the automation. Fewster et al. [2] listed several challenges and benefits regarding AST; among the challenges, they noted that management often has false expectations of benefits and success. According to the authors, if a test has already run and passed, there is very little scope to find new defects by running the same test again [2].

Inappropriate test automation strategy:

Berner et al. [9], in one of their observations from five projects, noted that the implemented test strategy is often not appropriate. According to Berner et al., "A good automation strategy therefore combines different approaches for test automation: system test automation, integration test automation, and unit test automation, which is usually most effective. In many cases, test code implemented for one approach can be reused for other test types or approaches, thus making a combined strategy even more effective." According to Fewster et al. [2], if the expectations of AST are unrealistic or inappropriate, then no matter how effective the tool might be, it will never meet them.

Lack of skilled people for test automation tools:

According to Fecko et al. [52], "Test automation requires a mix of expertise: the ability to use test tools, some software design and development skills, and knowledge of the system under test." For an organization to achieve the best results with AST, the employees working on testing should have enough technical skills to achieve the automation goals.


5 SURVEY

We conducted a survey to investigate whether the benefits and challenges reported in the literature are prevalent in the software industry.

5.1 QUESTIONNAIRE DESIGN

Before starting the survey, we analyzed the results of the systematic literature review thoroughly, as it is the main basis for the survey. The main aim of the survey is to find out whether the challenges and benefits reported in the systematic literature review are prevalent in the software industry. We selected www.surveymonkey.com as our tool for conducting the survey; it allowed us to develop the questionnaire in a user-friendly manner with many options, and it generated an online link for accessing the questionnaire. We prepared the questionnaire in a way that gives good grounds for analyzing the challenges and benefits of AST. As it was an online survey, we tried to reach as many professionals as possible: the survey link was distributed to different AST forums, Yahoo groups, Google groups, LinkedIn, and to employees in different organizations through e-mail, which resulted in a total of 164 respondents. The survey questionnaire is divided into two parts:
- Demographic questions, to find details about the respondent.
- Questions related to the benefits and challenges of AST.

The demographic information obtained from respondents is used to identify the valid responses for the survey: a respondent should have at least one year of experience for the response to count as valid, and other aspects such as role and project type are also considered. The opinion-based questions are used to extract the practitioners' points of view, to see whether they realize the benefits and challenges reported in the SLR. In addition, the questions help to analyze the similarities between researchers and practitioners. The questionnaire used for this survey is presented in the appendix.

5.2 PILOT SURVEY

As discussed earlier, the questionnaire was prepared taking the input from the SLR. The questionnaire was first sent to 10 respondents familiar with the field of AST; based on their suggestions, the format of the questions was changed so that they could be clearly understood by respondents. Furthermore, our supervisor reviewed the questionnaire and the necessary modifications were made.


Figure 15: Survey pilot process used (input from the systematic literature review; designing the survey questionnaire; questionnaire modification based on input from the pilot survey and supervisor; final questionnaire)


5.3 SURVEY RESULTS AND ANALYSIS

5.3.1 DEMOGRAPHIC QUESTIONS

This section presents the survey's demographic questions and analyzes their results. In total we received 164 responses from all over the world. The web survey was active from September 20th to October 11th, 2011. After analyzing the 164 responses, we accepted 115 (70.12%) as valid; only completed responses are collected and analyzed in the following sections, and every analysis made on the survey is based on these 115 valid responses. Figure 16 gives details about the number of respondents based on their role. We can observe that half of the respondents have a role related to quality assurance (tester), so the data we received is mostly based on testers' experience. However, it is also important to have input from respondents with other roles: the next largest group of respondents is programmers, with very few from other roles, as can be seen in Figure 16.

Figure 16: Percentage of respondents distributed based on their role

The application domains in which the majority of the survey respondents' teams work are web applications (26.31%), followed by finance (14.66%), healthcare (12.40%), defense (11.65%), ERP applications (10.15%) and telecommunication (7.51%). The percentages of respondents across the different software application domains are presented in Table 22.


Table 22: Number of respondents for survey working on different application domains

Application Domain | Number of respondents | Percentage of respondents
Embedded System | 13 | 4.88%
ERP | 27 | 10.15%
Finance | 39 | 14.66%
Healthcare | 33 | 12.40%
Mobile | 23 | 8.64%
Telecommunication | 20 | 7.51%
Web | 70 | 26.31%
Games & Entertainment | 7 | 2.63%
Defense (Military) | 31 | 11.65%
Others | 3 | 1.12%

Out of the 115 respondents, 60 have less than five years of experience related to AST, 33 have between 5 and 9 years, 18 have between 10 and 15 years, and four have more than 16 years of experience. This information is shown in Table 23.

Table 23: Respondents' working experience

Criteria for experience | Number of respondents
Less than five years | 60
Between 5 and 9 years | 33
Between 10 and 15 years | 18
More than 16 years | 4

5.3.2 Results and Analysis of questions related to benefits of AST

This section presents the results and analysis of the survey questions on the benefits of AST. As discussed earlier, all the survey questions are prepared based on input from the SLR. The questions used in the survey are labeled SQ1, SQ2, and so on in this document for explanation purposes. It should be noted that every question not only offers Likert-scale options but also a free-text field, so that qualitative input could be collected from respondents for each question.

SQ 3 Automated Software Testing provides more confidence in the quality of the product and increases the ability to meet schedules.

Nearly half of the respondents (44%) agree with the benefit that Automated Software Testing provides more confidence in the quality of the product and increases the ability to meet schedules, and few (14%) disagree. The results show that 17% are uncertain about this benefit, which suggests that a considerable portion of respondents may not yet have achieved it. However, 66% in total completely agree or agree, making this a prevalent benefit in the software industry. Figure 17 gives the details of the survey result. Earlier, in the literature review, Haugset et al. [49] conducted a case study and observed that automated acceptance testing helps in building confidence among the developers. The following are some comments from the respondents. According to three of them, the confidence to meet the schedule depends entirely on the way the automation is planned and executed:

“It can, but only if done well”

"Requires a lot of preparation and appropriate usage"

“It depends on how good the automated software is”

We can also see that achieving this confidence requires good planning and that everything must go according to plan.

Figure 17: Automated Software Testing provides more confidence in the quality of the product and increases the ability to meet schedules

SQ 4 Automated testing can improve the product quality by better test coverage

75% of the respondents chose the completely agree or agree options for the benefit that automated testing can improve the product quality by better test coverage, and very few disagree. It is also observed that 18% are uncertain about this benefit. The results show that the practitioners are well aware of the benefit. Earlier in the SLR, K. Karhu et al. [12] performed case studies and observed that the quality of the product can be improved by better test coverage. According to one respondent, quality is improved by sufficient coverage rather than better coverage:

"At times I want to drop most of my automated tests when they became a too heavy burden. They don't improve the quality by better coverage, but by sufficient coverage, I get the experience that I'm working on the right track".

Another respondent pointed out that he agrees only to a certain level:


"I agree to a certain level. Over time you may get more coverage of the system, but that is a long term objective. You want automation to help you with the tasks/tests that give the most benefit and in a repeatable fashion".

Another respondent pointed out that the coverage criteria must be predetermined:

"If you don't predetermine what you need to cover you can write 1000 automated scripts and still only cover 10%. While maybe 20 proper automated tests can achieve the same coverage".

Figure 18: Automated testing can improve the product quality by better test coverage

SQ 5 High reusability of the tests makes automated testing productive

86% of the respondents chose the completely agree or agree options for the benefit that high reusability of the tests makes automated testing productive, and very few disagree. With such a large majority in favor, we can consider this the most widely achieved benefit; 11% are uncertain and only 3% disagree. In the qualitative free-text answers, explanations and recommendations were provided; we received one comment from a respondent, according to whom reusability can lead to gains, but only if it is planned well from the start:

"Reusability of code and the subsequent tests is a goal. It can lead to productivity gains for the team and project. But if it is initially not done right then it won't allow for increased productivity."

As the majority of the respondents agree with this benefit, we can conclude that reusability can deliver productivity gains.


Figure 19: High reusability of the tests makes automated testing productive

SQ 7 By having a complete automation it reduces the cost of software testing dramatically and also facilitates continuous testing.

64% of the respondents chose the agree or completely agree options, making this a well-known benefit. It is also observed that 15% are uncertain about this benefit; the reason may be that some practitioners have not achieved it. According to one respondent, complete automation is not possible and manual testing can never be replaced:

"First off, you will never have "complete" automation. Also, there can never be a replacement for manual testing. It can reduce the cost of software testing by catching bugs earlier, but testing should and most commonly does still take place".

Another respondent gave a similar explanation:

"Automation will always have an associated cost, how it is managed is what is important. Also, you cannot automate everything (complete). Automate what makes sense and yes, if properly implemented automation (different types like Unit, Smoke) can be part of continuous integration/testing. But the automated tests need to be designed and managed (organized) to work within that capacity."


Figure 20: By having a complete automation it reduces the cost of software testing dramatically and also facilitates continuous testing

SQ 9 Automated software testing saves time and cost as it can be re-run again and again, and it is much quicker than manual testing.

72% of the respondents are well aware of this benefit, whereas 13% disagree and 13% are uncertain; only a small share of respondents is unaware of this benefit or disagrees with it. Earlier in the SLR, K. Karhu et al. [12] performed case studies, and one of their observations is that the main benefits of AST include doing more testing in less time. According to Fewster et al. [2], certain tasks can be done more efficiently with AST than with manual testing. According to one respondent, merely re-running tests will not save time and cost, because the same bugs are not introduced every time, and it can even increase time and cost:

"If you're dependent purely on re-running the exact same tests every time, then unless your developers carefully only introduce the exact same bugs, you may find that you unfortunately have more cost and time in production. You need a combination".

According to another respondent,

“Depends on the life span of the product and the focus of the testing”.

Another respondent pointed out that some automated tests take more time:

"Tests executed several times save time and cost, but some kinds of automated tests take much more time than manual (especially GUI tests)".


Figure 21: Automated software testing saves time and cost as it can be re-run again and again, and it is much quicker than manual testing with no additional cost

SQ 10 Automated software testing facilitates high fault detection.

The result for this benefit is contrary to the findings of the SLR: 33% agree and 30% are uncertain about this benefit, while 25% do not agree with it. This suggests that many respondents are not aware of this benefit or have not achieved it. Earlier in the SLR, according to Fecko et al. [52], the investment in GUI-based facilities leads to test scripts that are reusable, maintainable and portable, which can improve customer satisfaction by detecting more defects than is possible through manual testing. Kansomkeat et al. [53] proposed an automated testing technique that generates test cases from UML statechart diagrams created with the Rational Rose tool; the test cases are evaluated based on their fault detection capability, and their sample experiments showed high effectiveness of the generated test cases, which means high fault detection.

Below are some of the comments given by the respondents. According to one respondent, it depends entirely on how the tester creates the tests:

"The fact that the testing is automated does not increase the fault detection rate. It is the tester creating the tests which facilitates high fault detection. You can get high fault detection rates with manual testing too...and low detection rates with automated testing software....it depends on how it is used".

According to other respondents, manual testing is better at detecting faults:

“Comparing to manual it's less”.

“Depends on the focus and quality of your test automation”.

It should also be noted that for complex tests, manual testing gives more fruitful results, because the tester can analyze complex data to find defects.


Figure 22: Automated software testing facilitates the high fault detection

SQ 12 Automated software testing enables the repeatability of tests, which gives the possibility to do more tests in less time

In total, 86% agree with this benefit, only 4% disagree, and 8% are uncertain. With the highest share of respondents in favor, this is the most widely achieved and best-known benefit. Earlier in the SLR, according to Fewster et al. [2], certain tasks can be done more efficiently with AST than with manual testing; they also list benefits such as running existing (regression) tests and running more tests more often. According to one respondent, repeatability should not always be the goal:

"Repeatability is good, but doing more tests in less time shouldn't be a goal. Rather to do better testing in the time provided by the project stakeholders".

Figure 23: Automated software testing enables the repeatability of tests, which gives the possibility to do more tests in less time


5.3.3 Results and Analysis of questions related to challenges of AST

This section presents the results and analysis of the survey based on the challenges of AST.

SQ 1 Tester should have enough technical skills to build successful automation

The majority of the respondents (81%) chose the agree or completely agree options for the challenge that the tester should have enough technical skills to build successful automation, and very few disagree. The results show that the practitioners are well aware of this challenge related to AST. Figure 24 gives the details of the survey results. Earlier, in the systematic literature review, it was observed that for an organization to get the best out of AST, the employees working on testing should have sufficient skills. In the qualitative free-text answers, explanations and recommendations were provided by respondents; one person pointed out that, due to the importance of skills, organizations need to assess people and select them wisely:

"Not all testers have these technical skills... some may be able to learn them and apply, but that doesn't hold true for all. In any organization, skills need to be assessed and tasks for automation or testing need to be assigned to those who have the skills... in other words, not every tester can be an automater".

Another respondent pointed out that there can be scenarios where very good testers lack automation skills, which can be compensated by other testers in the team with a variety of skills:

"Completely agree, and completely disagree. I have met outstanding testers who did not have automation skills. You do not need each tester on your team to be a clone. You want a variety of skills in your testers, and the ability to collaborate with others when they need a skill they are weaker in. So, I would say the test team should have enough skills to build successful automation. But that is a different thing to asking every tester to have the same skill set. Maybe your tester without automation skills has spent their time on developing powerful systems thinking skills instead."

Another respondent holds a similar opinion to the first:

"In some companies this is compensated by programmers and testers working closely together, or by introducing special test automators, who have a programming background, and automate the tests from other testers."

From the practitioners' comments it is clear that this challenge can be mitigated by collaboration among team members, as each member may have a skill set that can be drawn on as required. We can also conclude that a tester should have some skill set related to AST.


Figure 24: Tester should have enough technical skills to build successful automation

SQ 2 Automated testing needs extra effort for designing and maintaining test scripts.

More than half of the respondents (56%) agree with the challenge that automated testing needs extra effort for designing and maintaining test scripts, and very few disagree: only 5% disagree and very few completely disagree. This shows that practitioners have experienced the challenge reported by researchers. Figure 25 gives the details of the survey results. Earlier, in the systematic literature review, K. Karhu et al. [12] performed case studies and observed that, due to constant changes in technology, the changes must be recorded, which makes maintenance of test automation difficult. Berner et al. [9] presented observations and lessons learned from five sample projects, in three of which they observed that maintenance of test automation is very hard. Below, we present some important comments from survey respondents on this AST challenge. According to two respondents, test automation requires proper design and maintenance, just like software development:

"It is software development into itself, without proper design and maintenance it becomes shelf ware very quickly".

"Test automation needs at least as much maintenance as the developed software with regards to the Technical Debt".

Another respondent pointed out that proper planning is required very early to overcome the maintenance problem:

"If you plan early you can create manual test scripts that can also be used for automated testing, hence less time required for maintenance and/or conversion".

From the survey results, we can clearly observe that maintenance effort is definitely required to achieve good results from AST; however, it is also observed that proper planning is needed.


Figure 25: Automated testing needs extra effort for designing and maintaining test scripts

SQ 6 Compared with manual testing, automated software testing requires a high investment to buy the tools and train the staff to use the tools.

75% of the respondents chose the completely agree or agree options for this challenge and only 15% disagree. The results show that the practitioners are very well aware of this challenge. The following are some comments given by the respondents. According to one respondent, manual testing is also expensive, but the costs associated with automation are more visible:

"I would say completely agree, except that the investment for purchasing automation software and training is more visible than that for manual. Manual training is still expensive in terms of time spent by mentors, which is harder to track, and less visible".

Another respondent pointed out that AST may be expensive at the beginning, but over time the cost is compensated by increased quality and efficiency:

"It will increase cost in the short term and in the up front, but by creating proper automation up front, those costs are mitigated by increased quality of the end product and increased efficiency of testing staff where they will not have to invest manual time for tasks that are automated (such as regression, repetitive data entry work, accurate data entry work, etc)".


Figure 26: Compared with manual testing, automated software testing requires a high investment to buy the tools and train the staff to use the tools

SQ 8 Automated software testing requires less effort on the developer's side, but cannot find complex bugs as manual software testing does.

59% of the respondents are well aware of this challenge, whereas 21% disagree and 17% are uncertain; a considerable number of respondents are thus not aware of this challenge or do not agree with it. Earlier in the SLR, Bousquet et al. [44] proposed Lutess, an automated testing tool, and illustrated its use with an example, showing that a great deal of human effort can be saved by using it. Pocatilu [55] noted that manual testing involves a lot of effort, measured in person-months; using automated testing with specific tools, this effort can be dramatically reduced and the costs related to testing can decrease. According to two respondents, automation can find complex bugs if it is designed correctly:

“In fact proper automated testing can find much more complex bugs”.

"I think it is possible, if designed correctly, for automated testing to find just as many complicated bugs. Again, it all depends upon the implementation".

Figure 27: Automated software testing requires less effort on the developer's side, but cannot find complex bugs as manual software testing does


SQ 11 The investment in application-specific test infrastructure can significantly reduce the extra effort that test automation requires from testers

66% of the respondents are well aware of this challenge, whereas 10% disagree and 24% are uncertain. The uncertainty level for this challenge is higher than the disagreement level: about a quarter of the respondents are not aware of this challenge, and 10% may not have faced it. Earlier in the SLR, Leitner et al. [55] presented the differences between the two approaches, and in one of their observations concluded that automated testing requires less effort on the developer's side, but cannot fully replace manual unit testing because developers are good at setting up complex input data.

Figure 28: The investment in application-specific test infrastructure can significantly reduce the extra effort that test automation requires from testers

SQ 13 Compared with manual testing, the cost of automated testing is higher, especially at the beginning of the automation process; however, automated software testing can be productive after a period of time

Almost all the respondents agree with this statement. The result clearly shows that the practitioners are very well aware of this challenge and have dealt with it successfully. Earlier in the SLR, Bashir et al. [58] conducted a case study and observed that automating the process of testing can reduce its time and cost, but automating the process of test data generation needs more time to mature. According to one respondent, AST becomes rewarding only after some time:

"Manual testing, if repeated over and over, is a very large waste of money. Automated testing is an investment, and requires only time to be far more rewarding than manual regression tests".


Figure 29: Compared with manual testing, the cost of automated testing is higher, especially at the beginning of the automation process; however, automated software testing can be productive after a period of time

SQ 14 Most of the automated testing tools available in the market are incompatible and do not provide what you need or fit in your environment.

The results for this challenge are contrary to the findings of the SLR: 35% agree, but 21% disagree and 26% are uncertain. The respondents are either not particularly aware of this challenge, or a considerable number of them have not faced it. Earlier in the SLR, according to Fewster et al. [2], if the expectations of AST are unrealistic or inappropriate, then no matter how effective the automation tool may be, it will never meet them. According to two respondents, there is no single right tool for all requirements:

"There are multiple tools out there, yes. But you need the right one for your situation. There isn't, and never will be a 'one size fits all' tool".

"There are a large number of tools available for different cost levels, pricing, etc., as well as different targeted feature sets and capabilities. No tool is going to be good for everyone but there are many good tools on the market, both free and for cost. A good test manager and testing group will spend the time to research and select the right tool for the environment".

According to another respondent, an automated tool is made valuable by the skill of the tester:

"Most "automated test tools" are not as useful as having a skilled tester. It's the application of the tester's skills that makes a tool valuable, and if the tool prevents the tester from doing what they want, that tool is less valuable".

Based on the input from respondents, we can say that AST tools are made valuable with the help of skilled testers; even then, if a tool does not support the environment, it must be considered a tool that does not fit the requirements.


Figure 30: Most of the automated testing tools available in the market are incompatible and do not provide what you need or fit in your environment

5.3.4 Comparative analysis of the benefits and challenges obtained from SLR and SURVEY

According to [68], the main aim of comparative analysis is to find the differences and similarities in a research study. Comparative analysis is a qualitative method, carried out here to gain in-depth knowledge about the benefits and challenges of Automated Software Testing (AST). The main aim of performing comparative analysis in this research is to check whether the benefits and challenges found in the literature are prevalent in industry. The comparative analysis compares the results of the systematic literature review and the survey: the benefits and challenges found in the SLR are set against the survey results to check whether practitioners realize the benefits and challenges reported by researchers in the literature. In this section we present the detailed analysis of whether the benefits and challenges obtained from the SLR are prevalent in industry; this section also answers research question RQ4.

5.3.4.1 RQ4: Which of the reported benefits and challenges found in the literature are prevalent in industry?

Tables 24 and 26 contain the information about the benefits and challenges of AST obtained from the SLR and the survey. Our main goal is to find out whether the challenges and benefits reported in the literature are prevalent in the software industry. Table 24 presents the benefits obtained from the SLR, together with the related survey question and the survey result for each benefit. The survey questions are labeled SQ1, SQ2, and so on. Tables 24 and 26 give rich information for comparing the results obtained from the SLR and the survey.

By comparing the results obtained from the survey and the SLR, we can observe that most of the benefits reported in the literature are prevalent in the software industry. In Table 24, the last column shows the survey results as percentages on the Likert scale, from which we can observe whether the benefits extracted from the SLR are realized by practitioners. For almost every benefit, more than 65-70% of respondents chose the agree or completely agree options, which clearly shows that practitioners realize the benefits reported in the literature. Two benefits, related to fault detection and confidence, are contrary to the results of the SLR. For the fault detection benefit, 47% are in favor, but 35% are unsure and 29% disagree; this suggests that practitioners may not have achieved this benefit or are not aware of it. For the benefit related to gaining confidence, 40% are in favor, 19% are uncertain, and 20% are not in favor; a considerable number of respondents are unsure about this benefit or may not have achieved it.

Another important observation from the survey is that, for every benefit, 15-25% of the respondents chose the uncertain option. One reason is that achieving any of these benefits requires a good strategy, planning and implementation: the test manager should analyze the situation and plan the automation accordingly, so uncertainty among respondents may reflect the fact that each benefit depends entirely on the test automation strategy implemented. A second reason may be that respondents have not actually achieved the benefit themselves, but have heard or read that it is achievable.

Table 24: Comparison of the benefits obtained from SLR and SURVEY

BENEFIT EXTRACTED FROM SLR | ARTICLE NUMBER | SURVEY QUESTION | RESULT OBTAINED FROM SURVEY
Higher product quality | SLR 5, SLR 19 | SQ4 | Completely agree 24%, Agree 51%, Uncertain 18%, Disagree 6%, Completely disagree 1%
Reduces testing time | SLR 3, SLR 5, SLR 20, SLR 22 | SQ9 | Completely agree 25%, Agree 47%, Uncertain 13%, Disagree 13%, Completely disagree 2%
Increase in confidence | SLR 24, SLR 20 | SQ3 | Completely agree 22%, Agree 44%, Uncertain 17%, Disagree 14%, Completely disagree 3%
Reusability of tests | SLR 5, SLR 13 | SQ5 | Completely agree 40%, Agree 46%, Uncertain 11%, Disagree 3%, Completely disagree 0%
Less human effort | SLR 13, SLR 22, SLR 23 | SQ8 | Completely agree 16%, Agree 43%, Uncertain 17%, Disagree 21%, Completely disagree 3%
Reduction of cost | SLR 8 | SQ7 | Completely agree 19%, Agree 45%, Uncertain 15%, Disagree 17%, Completely disagree 4%
Fault detection | SLR 12, SLR 14, SLR 15, SLR 16, SLR 25, SLR 7 | SQ10 | Completely agree 33%, Agree 30%, Uncertain 25%, Disagree 4%, Completely disagree 8%
Better test coverage | SLR 4, SLR 5, SLR 8, SLR 9, SLR 19, SLR 25 | SQ4 | Completely agree 24%, Agree 51%, Uncertain 18%, Disagree 6%, Completely disagree 1%

Table 25 ranks the benefits based on the mean Likert score calculated from the respondents' answers (coding the five-point scale from 1 = completely disagree to 5 = completely agree).

Table 25: Rating of the AST benefits based on the respondents from the survey

Benefit | Survey question | Mean score | Ranking
Higher product quality | SQ4 | 3.92 | 2
Reduces testing time | SQ9 | 3.73 | 3
Increase in confidence | SQ3 | 3.66 | 5
Reusability of tests | SQ5 | 4.23 | 1
Less human effort | SQ8 | 3.47 | 6
Reduction of cost | SQ7 | 3.69 | 4
Fault detection | SQ10 | 3.16 | 7
Better test coverage | SQ4 | 3.92 | 2
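The scores in Tables 25 and 27 appear to be mean values obtained by coding the five-point scale from completely disagree = 1 to completely agree = 5 and averaging; as a check, the SQ5 score can be recomputed exactly from the percentages in Table 24:

    # Recomputing the SQ5 score from the Table 24 percentages.
    responses = {"completely agree": 40, "agree": 46, "uncertain": 11,
                 "disagree": 3, "completely disagree": 0}
    weights = {"completely agree": 5, "agree": 4, "uncertain": 3,
               "disagree": 2, "completely disagree": 1}

    score = sum(weights[k] * share for k, share in responses.items()) / 100
    print(score)   # 4.23, matching the reusability entry in Table 25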

The earlier paragraphs discussed only the benefits obtained from the SLR and the survey; we now present the analysis of the challenges. By comparing the results obtained from the survey and the SLR, we can say that most of the challenges reported in the literature are prevalent in the software industry: for almost every challenge, 70-90% of the respondents agree with the challenge reported in the SLR, so we can assume that practitioners are well aware of these challenges and have faced them. There are, however, two challenges with which respondents disagree. The challenge regarding the test automation strategy drew 24% disagreement and 30% uncertainty from the respondents; the reason is that the automation strategy depends entirely on the project's test manager. And when asked "Does automated software testing fully replace manual testing?", 80% of respondents disagree. Table 26 gives the respondents' opinions as percentages; an analysis of this challenge is also presented in the coming sections.

Table 26: Comparison of the challenges obtained from the SLR and SURVEY

CHALLENGE EXTRACTED FROM SLR | ARTICLE NUMBER | SURVEY QUESTION | RESULT OBTAINED FROM SURVEY
Automated software testing cannot fully replace manual testing | SLR 5, SLR 11 | SQ15 | Completely agree 1%, Agree 5%, Uncertain 14%, Disagree 43%, Completely disagree 37%
Failure to achieve expected goals | SLR 1, SLR 6 | SQ8 | Completely agree 16%, Agree 43%, Uncertain 17%, Disagree 21%, Completely disagree 3%
Difficulty in maintenance of test automation | SLR 5, SLR 11, SLR 20 | SQ2 | Completely agree 32%, Agree 56%, Uncertain 6%, Disagree 5%, Completely disagree 1%
Process of test automation needs more time to mature | SLR 10 | SQ13 | Completely agree 37%, Agree 52%, Uncertain 7%, Disagree 3%, Completely disagree 1%
Inappropriate test automation strategy | SLR 11, SLR 20 | SQ14 | Completely agree 9%, Agree 35%, Uncertain 26%, Disagree 21%, Completely disagree 9%
Lack of skilled people for test automation tools | SLR 15 | SQ1 | Completely agree 35%, Agree 46%, Uncertain 10%, Disagree 6%, Completely disagree 3%

Here we present the analysis of the final two survey questions, SQ 15 and SQ 16, together with a cross analysis based on the test development approach and the software development method used.

Table 27: Rating of the AST challenges based on the respondents from the survey

Challenge | Survey question | Mean score | Ranking
Automated software testing cannot fully replace manual testing | SQ15 | 1.53 | 6
Failure to achieve expected goals | SQ8 | 3.47 | 4
Difficulty in maintenance of test automation | SQ2 | 4.13 | 2
Process of test automation needs more time to mature | SQ13 | 4.20 | 1
Inappropriate test automation strategy | SQ14 | 3.15 | 5
Lack of skilled people for test automation tools | SQ1 | 4.04 | 3

SQ 15 Does automated software testing fully replace manual testing?

Almost all the respondents disagree or completely disagree that AST completely replaces manual testing; very few are uncertain. Earlier in the SLR, Berner et al. [9] presented observations and lessons learned from five sample projects, in all five of which they observed that automated software testing cannot fully replace manual testing. Based on the input obtained from the respondents, it can be said that AST can never replace manual testing. According to one respondent, complex problems can be dealt with only through manual testing:


"A large amount of manual testing is required to deal with new development and to uncover complex problems".

According to another respondent, both testing approaches, AST and MT, should be employed to get effective results:

"Both methods need to be employed. Automated testing is excellent for regression testing but doesn't replace manual testing for new features".

According to other respondents, AST can be used as a complement to MT:

“Automated software testing compliments manual testing”.

"NO! It is a tool only and should be used as a complement to manual testing. You automate to allow your manual testing to work on high value tasks, and not be stuck in the mud with re-running other mundane tasks. Think of automation as a way to get efficiency gains for your overall testing effort".

The result clearly shows that the both approaches (MT and AST) need to be employed to get full benefits.

Figure 31: Results for the survey question, “Does automated software testing fully replace manual testing?”

Earlier in the survey, in the demographic part, respondents were asked which testing approach and which software development method they used in their projects. Based on these answers we present a cross analysis. We selected only SQ15 and SQ16 for the cross analysis: SQ15 has the highest disagreement rate (80% of respondents disagree or completely disagree), and SQ16, the last question of the survey, concerns respondent satisfaction, which is important for the survey results. Figure 32 shows that the majority of respondents (49%) chose black box testing as their testing approach, followed by white box testing (27%), object-oriented (OO) testing (14%) and other approaches (10%). In the others category, most respondents mentioned gray box testing, black box testing with some database visibility, specification by example, context-driven testing, an exploratory approach, and whatever techniques are necessary.


Figure 32: Percentage of respondents per testing approach

Figure 33 shows that the majority of respondents (62%) chose Agile as their software development method, followed by the waterfall model (25%), others (10%) and Lean (3%). In the others category, respondents mentioned the V-model, the spiral model, and combinations of waterfall and agile as their software development methodologies.

Figure 33: Percentage of respondents per software development method used

Figure 35 shows the cross analysis of SQ15 based on the software development method used by the respondents. It indicates which option respondents chose depending on their development method. In the 100% stacked bar chart, the segments for Agile run from 0-8% (completely agree), 8-15% (agree), 15-23% (uncertain), 23-64% (disagree) and 64-100% (completely disagree). For the waterfall model the segments are 0-10% (agree), 10-18% (uncertain), 18-42% (disagree) and 42-100% (completely disagree). For Lean they are 0-25% (uncertain) and 25-100% (disagree), and for the others category 0-15% (agree), 15-18% (uncertain) and 18-100% (completely disagree). According to the chart, the respondents who completely agreed come from the others category. Overall, the answer distributions differ clearly between the development methods.


Figure 34: Cross analysis based on the test approach used for SQ 15.

Figure 34 shows the cross analysis of SQ15 based on the testing approach used by the respondents. It indicates which option respondents chose depending on their testing approach. In the 100% stacked bar chart, the segments for WB run from 0-8% (completely agree), 8-15% (agree), 15-23% (uncertain), 23-64% (disagree) and 64-100% (completely disagree). For BB the segments are 0-10% (agree), 10-18% (uncertain), 18-62% (disagree) and 62-100% (completely disagree). For OO they are 0-17% (uncertain), 17-50% (disagree) and 50-100% (completely disagree), and for the others category 0-15% (agree), 15-20% (uncertain), 20-65% (disagree) and 65-100% (completely disagree). Overall, the answer distributions differ clearly between the testing approaches.
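Stacked bar charts such as Figures 34 and 35 can be reproduced from the cumulative segment boundaries described above. The sketch below is our reconstruction, not the original figure; it plots Figure 34 with matplotlib, converting the cumulative edges read from the chart into segment widths:

    import matplotlib.pyplot as plt

    options = ["Completely agree", "Agree", "Uncertain",
               "Disagree", "Completely disagree"]
    # Cumulative segment edges per testing approach (percent), as described
    # in the text; a leading 0 is repeated where an option has no segment.
    edges = {
        "WB":     [0, 8, 15, 23, 64, 100],
        "BB":     [0, 0, 10, 18, 62, 100],
        "OO":     [0, 0, 0, 17, 50, 100],
        "Others": [0, 0, 15, 20, 65, 100],
    }

    fig, ax = plt.subplots()
    for approach, cum in edges.items():
        for i, option in enumerate(options):
            width = cum[i + 1] - cum[i]      # segment width in percent
            ax.barh(approach, width, left=cum[i], label=option, color=f"C{i}")

    # Repeated bar calls duplicate legend entries; keep one per option.
    handles, labels = ax.get_legend_handles_labels()
    unique = dict(zip(labels, handles))
    ax.legend(unique.values(), unique.keys(), fontsize=8, loc="lower right")
    ax.set_xlabel("Percentage of respondents")
    plt.tight_layout()
    plt.show()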


Figure 35: Cross analysis based on the software development approach used for SQ 15

SQ 16 What is your level of satisfaction with software test automation?

Finally, when respondents were asked “What is your level of satisfaction with software test automation?”, 39% were highly satisfied, 45% just satisfied, 15% had yet to see real major benefits of automation, and only 1% were not at all satisfied. The two largest groups are just satisfied (45%) and highly satisfied (39%), which shows that AST is successful in industry in achieving the expected benefits; only 1% are not at all satisfied and 15% have not yet seen any benefits. However, as discussed earlier, the outcome of test automation depends on how it is planned and executed. To give more detail, the next section presents the cross analysis of the SQ16 results based on the test approach and the software development method used.

Figure 36: Satisfaction level of respondents on AST


Figure 37 shows the cross analysis of the SQ16 results based on the testing approach used by the respondents. It indicates which satisfaction level respondents chose depending on their testing approach. In the 100% stacked bar chart, the segments for WB run from 0-45% (highly satisfied), 45-85% (just satisfied) and 85-100% (yet to see real major benefits of test automation). For BB the segments are 0-35% (highly satisfied), 35-81% (just satisfied) and 81-100% (yet to see real major benefits). For OO they are 0-42% (highly satisfied), 42-82% (just satisfied) and 82-100% (yet to see real major benefits), and for the others category 0-40% (highly satisfied), 40-80% (just satisfied), 80-90% (yet to see real major benefits) and 90-100% (not at all satisfied). Overall, the satisfaction distributions differ clearly between the testing approaches.

Figure 37: Cross analysis based on the test approach used for SQ 16

Figure 38 shows the cross analysis of the SQ16 results based on the software development method used by the respondents. It indicates which satisfaction level respondents chose depending on their development method. The majority of the Agile respondents are highly satisfied or just satisfied, with a smaller group yet to see real major benefits of AST. In the 100% stacked bar chart, the segments for Agile run from 0-45% (highly satisfied), 45-85% (just satisfied) and 85-100% (yet to see real major benefits of test automation). For the waterfall model the segments are 0-58% (highly satisfied), 58-90% (just satisfied) and 90-100% (yet to see real major benefits). For Lean they are 0-64% (just satisfied) and 64-100% (yet to see real major benefits), and for the others category 0-35% (highly satisfied), 35-56% (just satisfied), 56-85% (yet to see real major benefits) and 85-100% (not at all satisfied). Overall, the satisfaction distributions differ clearly between the development methods.


Figure 38: Cross analysis based on the software development approach used for SQ 16


6 VALIDITY THREATS

The validity threats related to the systematic mapping, the systematic literature review and the survey are discussed below.

Construct validity

Construct validity assesses whether accurate definitions and measures are used for the variables [59]. For the systematic mapping and the systematic literature review, the construct validity threat is missing an important study because of an improperly formed search string. To mitigate this threat, we first reviewed the seminal literature on AST [1,2,3,4,6,7,10,13,28,29] and carefully prepared the keywords used to search the databases. We took the utmost care when finalizing the keywords, and our supervisor was also consulted when forming the search string. For the survey, the main construct validity threat is selection bias: some individuals or groups may have had a higher chance of participating than others. Since the survey targeted the testing community, we mitigated this threat by posting the web survey link in several different forums, giving individuals within each group an equal chance to respond. Another threat in this category is receiving answers from respondents without relevant experience; we therefore considered a response valid only if the respondent had at least one year of experience. Finally, respondents may not give true or honest answers; to minimize this threat we guaranteed anonymity so that respondents could answer freely.

External validity

According to Wohlin [60], external validity concerns the generalization of the results of a specific study. We consider the main external validity threat for the systematic mapping and the systematic review to be reliability. To mitigate it, both researchers were involved in the study, and the review protocol was piloted by the two researchers. With regard to the systematic review, generalization is limited because the case studies focusing on benefits and challenges are bound to their specific contexts, which further underlines the need for this survey. For the survey, the answers came from many different domains and companies, as shown by the demographics, which aids the generalization of the results. However, there were very few responses from the defense, embedded systems, and games and entertainment domains, so generalizability to these domains is limited. The survey we conducted is unbiased with respect to any particular group of people.


Internal Validity Threats

Internal validity threats mainly concern issues related to the design and execution of the study that could introduce systematic errors [16, 59]. The internal validity threats related to this study are presented below. Automated software testing is a mature area and much work has already been done in it. The main challenge regarding the systematic mapping was to find the primary studies most relevant to automated software testing. To minimize this threat we developed systematic mapping protocols and followed them in a systematic way; the protocols were also verified by the supervisor. We found that AST has many aspects and it was difficult to categorize all of them. To overcome this, the two researchers went through the abstracts of the relevant papers, identified the context of the research area, and defined categories by combining sets of keywords. If an abstract did not give enough information about the context, the introduction and conclusion were reviewed before preparing the final set of keywords for the systematic mapping. The main threat regarding the systematic literature review is the selection of articles, because the articles selected for the SLR are based entirely on the studies selected for the systematic mapping; no separate search was performed for the SLR. This threat is reduced because the articles selected for the systematic mapping are all related to AST: 24,706 studies were initially retrieved and 227 were finalized, so it is unlikely that studies relevant to the SLR are missing from those 227. Another major threat for the SLR is that very few articles were selected and many were rejected, since it was difficult to find studies on the benefits and challenges of AST. To mitigate this, the two authors discussed the inclusion and exclusion criteria to avoid misunderstandings, and any doubt regarding the selection of an article was resolved through discussion. The internal validity threat related to the survey is the type of questions used. We divided the questionnaire into two parts: a demographic part and a main part on the benefits and challenges of AST. The demographic part collects personal information about the respondents, but some fields were made optional because respondents may hesitate to give information about their projects for security reasons. In the main part, all questions are closed and based on a Likert scale, which makes it easy for respondents to complete the survey.

Reliability

It is concerned with the replicability of the results. There is a chance of bias in the selection and interpretation of studies when selecting primary studies for the systematic mapping and the systematic review. To minimize this threat we performed a kappa analysis between the two researchers to determine the level of shared interpretation and understanding. We obtained a kappa value of 0.605, which indicates substantial agreement between the researchers. The threat regarding the survey is formulating the questions clearly enough to avoid misinterpretation of the questionnaire. To minimize this threat, we performed a pilot survey by sending the questionnaire to 10 respondents who are well acquainted with AST. We checked whether they understood the questions correctly, used their input to modify the questionnaire, and also consulted the supervisor to check the questionnaire.


7 CONCLUSION

The systematic mapping research methodology was used to find the different contributions within AST. The results are classified and presented with the help of bubble graphs, bar graphs and tables. The benefits and challenges of AST were explored, and empirical evidence for them was found, using the systematic literature review methodology. Finally, a web survey was conducted to determine whether the benefits and challenges reported in the literature are prevalent in industry.

7.1 Major findings in the systematic mapping

The SM resulted in the selection of 227 studies related to AST. These 227 studies were used to extract the different contributions within AST. The contributions we found after performing the systematic mapping include the purpose of automation, testing levels, technology used, research type and tool type. The major findings for each category are presented below.

Most AST studies are concerned with the unit testing level, followed by system testing and functional testing, whereas few studies focus on acceptance testing and regression testing.

Regarding the purpose of automation with respect to research types, we divided the aspect into six categories based on purpose:

CATEGORY A: Test generation and selection (TCS)
CATEGORY B: Test execution and result collection (TERC)
CATEGORY C: Result evaluation and test quality analysis (REQA)
CATEGORY D: the combination of categories A, B and C
CATEGORY E: the combination of categories A and B
CATEGORY F: the combination of categories B and C

In the purpose-of-automation aspect, most papers relate to Category A (test generation and selection), followed by Category C (result evaluation and test quality analysis); a clear research gap can be observed in categories B, D, E and F.

Regarding the technology-used aspect, we identified 21 sub-aspects: the different languages or interfaces used in automating the testing process. They include Java, C, C++, scripting languages, UML and TTCN, with a few articles on Ada, Unix, SQL, XML, Smalltalk, .NET, FORTRAN, COBOL, Sulu, Petri nets, Perl, Lustre, Lotus, Python and the IF language.

We observed that Java dominates (75 papers), followed by the C programming language (36 papers). A considerable number of papers fall under sub-aspects such as C++, scripting languages and UML, while very few papers are recorded in the remaining 16 sub-aspects.

Regarding research types, the majority of the AST studies are evaluation and validation research, which shows that most studies are empirically grounded. There are very few experience and opinion reports, with a considerable number of solution proposals and philosophical papers.

7.2 Major findings in the systematic literature review

The SLR resulted in the selection of 26 primary studies. These studies were used to extract the empirical evidence regarding the benefits and challenges of AST. The challenges found in the SLR include:

- Automated software testing cannot fully replace manual testing
- Failure to achieve expected goals
- Difficulty in maintenance of test automation
- Process of test automation needs more time to mature
- False expectations
- Inappropriate test automation strategy
- Lack of skilled people for test automation tools


The benefits found in the SLR include:

- Higher product quality
- Better test coverage
- Reduced testing time
- Reliability
- Increase in confidence
- Reusability of tests
- Less human effort
- Reduction of cost
- Fault detection

7.3 Major findings in the survey

The results obtained from the SLR were used as input for preparing the questionnaire. The aim of the survey was to check whether the benefits and challenges found in the SLR are prevalent in the software industry. We received a total of 164 responses and, after validating the survey results, accepted 115 (70.12%) as valid. The web survey results show that almost all the benefits and challenges are prevalent in industry. Only two benefits contradict the SLR results: fault detection and increase in confidence both have disagreement levels above 50%. The challenge concerning an appropriate test automation strategy drew 24% disagreement and 30% uncertainty from the respondents; the reason is that the automation strategy depends entirely on the test manager of the project. We conclude from the survey results that practitioners are achieving the benefits and facing the challenges reported in the literature.

7.4 Future work

The results of the systematic mapping show a lack of studies in several areas of AST, and the research gaps revealed by the systematic map give future researchers directions for further work in AST. AST is among the most active areas of technology in this decade. Below we present some areas that can help future researchers. There is a need for better empirical research, because survey-based studies are lacking across AST. More work is also needed on testing levels, as very few studies address levels other than unit testing.


The AST area also lacks studies on web-based systems; since both are mature and active areas, future research at their intersection could produce very good results. The systematic literature review was performed to find empirical evidence regarding the benefits and challenges of AST, but few such studies exist, so more studies are needed to gather further empirical evidence. In this thesis we succeeded in mapping the coverage of the AST research area and categorized its different aspects. In one of the categories we also tried to categorize AST tools, but that categorization did not help us find more studies on AST. We would like to focus further on the categorization of AST tools and develop a framework that can help organizations choose tools based on their own criteria.


8 REFERENCES

[1] I. Burnstein, Practical Software Testing: A Process-Oriented Approach, Springer Professional Computing, 2003.
[2] M. Fewster and D. Graham, Software Test Automation: Effective Use of Test Execution Tools, ACM Press/Addison-Wesley Publishing Co., 1999.
[3] R. Torkar, "Towards Automated Software Testing: Techniques, Classifications and Frameworks", Blekinge Institute of Technology, 2006.
[4] W. C. Hetzel, The Complete Guide to Software Testing, 2nd ed., Wellesley, MA: QED Information Sciences, 1988.
[5] E. Dijkstra, "Notes on Structured Programming", Technical Report 70-WSK-03, Dept. of Mathematics, Technological University of Eindhoven, Netherlands, April 1970.
[6] J. Watkins, Testing IT, Cambridge University Press, 2001.
[7] E. Miller, "Advanced Methods in Automated Software Test", Software Maintenance, 1990.
[8] B. Marick, "When Should a Test Be Automated?", Software Testing Analysis & Review Conference (STAR EAST), Orlando, FL, 1999.
[9] S. Berner, R. Weber, and R. K. Keller, "Observations and Lessons Learned from Automated Testing", in Proceedings of the 27th International Conference on Software Engineering, 2005.
[10] K. M. Mustafa et al., "Classification of Software Testing Tools Based on the Software Testing Methods", Second International Conference on Computer and Electrical Engineering, Al-Zaytoonah University of Jordan, 2009.
[11] R. Ramler and K. Wolfmaier, "Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost", AST '06, Shanghai, China, May 23, 2006.
[12] K. Karhu et al., "Empirical Observations on Software Testing Automation", Lappeenranta University of Technology, International Conference on Software Testing, Verification and Validation, 2009.
[13] E. Dustin, Effective Software Testing: 50 Specific Ways to Improve Your Testing, Addison-Wesley Pub. Co., Inc., USA, 2002.
[14] R. K. Keller, R. Weber, and S. Berner, "Observations and Lessons Learned from Automated Testing", in Proceedings of the 27th International Conference on Software Engineering, pp. 571-579.
[15] B. Kitchenham, "Procedures for Performing Systematic Reviews", Tech. report, Software Engineering Group, Department of Computer Science, Keele University, 2004.


[16] B. A. Kitchenham and S. Charters, "Guidelines for Performing Systematic Literature Reviews in Software Engineering", EBSE Technical Report, Keele and Durham University, 2007.
[17] K. Petersen et al., "Systematic Mapping Studies in Software Engineering", 12th International Conference on Evaluation and Assessment in Software Engineering (EASE), pp. 71-80, 2008.
[18] N. Condori-Fernandez et al., "A Systematic Mapping Study on Empirical Evaluation of Software Requirements Specifications Techniques", Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement (ESEM), IEEE Computer Society, Washington, DC, pp. 502-505, 2009.
[19] N. Jan and M. Ibrar, "Systematic Mapping of Value-based Software Engineering - A Systematic Review of Value-based Requirements Engineering", Blekinge Institute of Technology, 2006.
[20] J. Bailey, D. Budgen, M. Turner, B. Kitchenham, P. Brereton, and S. Linkman, "Evidence Relating to Object-Oriented Software Design: A Survey", ESEM, pp. 482-484, 2007.
[21] S. Mujtaba, K. Petersen, R. Feldt, and M. Mattsson, "Software Product Line Variability: A Systematic Mapping Study", Blekinge Institute of Technology, 2008.
[22] D. Budgen, M. Turner, P. Brereton, and B. Kitchenham, "Using Mapping Studies in Software Engineering", in Proceedings of PPIG 2008, Lancaster University, pp. 195-204, 2008.
[23] T. Dybå and T. Dingsøyr, "Empirical Studies of Agile Software Development: A Systematic Review", Journal of Information and Software Technology, 2008.
[24] S. Jalali and C. Wohlin, "Agile Practices in Global Software Engineering - A Systematic Map", Global Software Engineering (ICGSE), 5th IEEE International Conference, 2010.
[25] K. Petersen, R. Feldt, S. Mujtaba, and M. Mattsson, "Systematic Mapping Studies in Software Engineering", in Proceedings of the Annual Research Conference on Evaluation and Assessment in Software Engineering, 2008.
[26] W. Afzal, R. Torkar, and R. Feldt, "A Systematic Review of Search-Based Testing for Non-Functional System Properties", Inf. Softw. Technol., vol. 51, no. 6, June 2009.
[27] J. Biolchini et al., "Systematic Review in Software Engineering", Systems Engineering and Computer Science Department, PESC, Rio de Janeiro, May 2005.
[28] S. Uspenskiy, "A Survey and Classification of Software Testing Tools", Lappeenranta University of Technology, 2010.
[29] C. Kaner, "Pitfalls and Strategies in Automated Software Testing", vol. 30, issue 4, April 1997.


[30] R. W. Rice, "Surviving the Top Ten Challenges of Software Test Automation", Software Testing, Analysis & Review Conference (STAR) East, Software Quality Engineering, 2003.
[31] C. Wohlin, Experimentation in Software Engineering: An Introduction, Springer Netherlands, 2000.
[32] R. K. Yin, Case Study Research: Design and Methods, Sage Publications, Inc., 2009.
[33] M. Fewster, "Common Mistakes in Test Automation", Grove Consultants, 2001.
[34] L. Chi Keen, T. Y. Chen, et al., "Automated Test Case Generation for BDI Agents", Autonomous Agents and Multi-Agent Systems, vol. 2, no. 4, 1999.
[35] L. du Bousquet, F. Ouabdesselam, et al., "Lutess: A Specification-Driven Testing Environment for Synchronous Software", ICSE '99: Proceedings of the 21st International Conference on Software Engineering, 1999.
[36] I. Parissis et al., "Strategies for Automated Specification-Based Testing of Synchronous Software", in Conference on Automated Software Engineering, IEEE Computer Society, 2001.
[37] M. V. Zelkowitz and D. Wallace, "Experimental Validation in Software Engineering", Information and Software Technology, vol. 39, pp. 735-743, 1997.
[38] J. Fleiss, "Measuring Nominal Scale Agreement Among Many Raters", Psychological Bulletin, vol. 76, no. 5, pp. 378-382, 1971.
[39] R. S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, NY, 2000.
[40] ISO/IEC, ISO/IEC 9126-1, Software Engineering - Product Quality - Part 1: Quality Model, 2001.
[41] N. Condori-Fernandez et al., "A Systematic Mapping Study on Empirical Evaluation of Software Requirements Specifications Techniques", Universidad Politecnica de Valencia, 2009.
[42] H. Zhu, P. A. V. Hall, and J. H. R. May, "Software Unit Test Coverage and Adequacy", ACM Computing Surveys, 29(4), pp. 365-427, 1997.
[43] T. Wissink and C. Amaro, "Successful Test Automation for Software Maintenance", ICSM, pp. 265-266, 22nd IEEE International Conference on Software Maintenance (ICSM '06), 2006.
[44] L. du Bousquet and N. Zuanon, "An Overview of Lutess: A Specification-Based Tool for Testing Synchronous Software", 14th IEEE International Conference on Automated Software Engineering, 1999.
[45] M. Malekzadeh and R. N. Ainon, "An Automatic Test Case Generator for Testing Safety-Critical Software Systems", The 2nd International Conference on Computer and Automation Engineering (ICCAE), 2010.


[46] J. Burnim and K. Sen, "Heuristics for Scalable Dynamic Test Generation", 23rd IEEE/ACM International Conference on Automated Software Engineering, IEEE Computer Society, pp. 443-446, 2008.
[47] R. P. Tan and S. Edwards, "Evaluating Automated Unit Testing in Sulu", 1st International Conference on Software Testing, Verification, and Validation, 2008.
[48] R. Coelho, E. Cirilo, et al., "JAT: A Test Automation Framework for Multi-Agent Systems", ICSM, IEEE International Conference on Software Maintenance, 2007.
[49] B. Haugset and G. K. Hanssen, "Automated Acceptance Testing: A Literature Review and an Industrial Case Study", AGILE '08 Conference, 2008.
[50] S. Lijun and Z. Hong, "Generating Structurally Complex Test Cases by Data Mutation: A Case Study of Testing an Automated Modelling Tool", The Computer Journal, The Institution of Engineering and Technology, pp. 571-588, 2009.
[51] H. Dan, Z. Lu, et al., "Test-Data Generation Guided by Static Defect Detection", Journal of Computer Science and Technology, vol. 24, issue 2, March 2009.
[52] M. A. Fecko and C. M. Lott, "Lessons Learned from Automating Tests for an Operations Support System", Software - Practice & Experience, vol. 32, issue 15, December 2002.
[53] S. Kansomkeat and W. Rivepiboon, "Automated-Generating Test Case Using UML Statechart Diagrams", 2003.
[54] F. Saglietti and F. Pinte, "Automated Unit and Integration Testing for Component-Based Software Systems", Proceedings of the International Workshop on Security and Dependability for Resource Constrained Embedded Systems, ACM, Vienna, Austria, 2010.
[55] A. Leitner, I. Ciupa, et al., "Reconciling Manual and Automated Testing: The AutoTest Experience", HICSS '07: Proceedings of the 40th Annual Hawaii International Conference on System Sciences, IEEE Computer Society, Washington, 2007.
[56] P. Pocatilu, "Automated Software Testing Process", Economic Informatics Department, Academy of Economic Studies, Bucharest, 2002.
[57] M. Alshraideh, "A Complete Automation of Unit Testing for JavaScript Programs", Journal of Computer Science, vol. 4, issue 12, 2008.
[58] M. F. Bashir and S. H. K. Banuri, "Automated Model Based Software Test Data Generation System", 4th International Conference on Emerging Technologies, ICET 2008, 2008.
[59] J. W. Creswell, Research Design: Qualitative and Quantitative Approaches, Sage Publications, 1994.
[60] C. Wohlin et al., Experimentation in Software Engineering: An Introduction, Kluwer Academic Publishers, Dordrecht, the Netherlands, 2000.


[61] A. Avritzer and E. J. Weyuker, "The Automatic Generation of Load Test Suites and the Assessment of the Resulting Software", IEEE Transactions on Software Engineering, 21(9), pp. 705-716, 1995.
[62] B. Baudry et al., "Genes and Bacteria for Automatic Test Cases Optimization in the .NET Environment", Proceedings of the 13th International Symposium on Software Reliability Engineering, pp. 195-206, IEEE Computer Society, 2002.
[63] P. Lutsky, "Information Extraction from Documents for Automating Software Testing", Artificial Intelligence in Engineering, 2000.
[64] A. J. Offutt and S. Liu, "Generating Test Data from SOFL Specifications", The Journal of Systems and Software, 49(1), pp. 49-62, 1999.
[65] J.-C. Lin and P.-L. Yeh, "Automatic Test Data Generation for Path Testing Using GAs", Information Sciences: An International Journal, 131(1-4), pp. 47-64, 2001.
[66] SWEBOK, IEEE Guide to the Software Engineering Body of Knowledge, 2004.
[67] C. Persson and N. Yilmaztürk, "Establishment of Automated Regression Testing at ABB: Industrial Experience Report on 'Avoiding the Pitfalls'", in 19th International Conference on Automated Software Engineering (ASE '04), IEEE Computer Society, 2004.
[68] L. M. Given, SAGE Encyclopedia of Qualitative Research Methods, USA: SAGE, 2008.
[69] T. Dybå, T. Dingsøyr, and G. K. Hanssen, "Applying Systematic Reviews to Diverse Study Types: An Experience Report", First International Symposium on Empirical Software Engineering and Measurement.


APPENDIX A Kappa analysis

"The kappa method was used to calculate nominal scale agreement between a fixed pair of researchers [37]." It is a statistical measure for evaluating the level of agreement between a fixed number of researchers, where each category, item or subject is rated on a nominal scale by the same number of researchers/raters. The kappa statistic helps the researchers assess the reliability of the ratings [37]. The kappa value k can be stated as

k = (p - pe) / (1 - pe)

where p is the observed proportion of agreement and pe is the proportion of agreement expected by chance. The quantity 1 - pe measures the agreement achievable over and above what would be expected by chance, and p - pe measures the agreement actually attained in excess of chance [37]. The value of k denotes the agreement level between the researchers/raters: k = 1 indicates full agreement, while k <= 0 indicates no agreement beyond chance. The following table by Landis and Koch [38] interprets the different k values.

Table: Landis and Koch kappa values

Kappa statistic      Strength of agreement

< 0.00 Poor

0.00-0.20 Slight

0.21-0.40 Fair

0.41-0.60 Moderate

0.61-0.80 Substantial

0.81-1.00 Almost Perfect
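For convenience, this interpretation table can be expressed as a small lookup function. The sketch below is ours, for illustration only:

    def strength_of_agreement(kappa):
        """Landis and Koch interpretation of a kappa value."""
        if kappa < 0.00:
            return "Poor"
        if kappa <= 0.20:
            return "Slight"
        if kappa <= 0.40:
            return "Fair"
        if kappa <= 0.60:
            return "Moderate"
        if kappa <= 0.80:
            return "Substantial"
        return "Almost Perfect"

    print(strength_of_agreement(0.605))  # Substantial
    print(strength_of_agreement(0.651))  # Substantial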

Table: Kappa agreement for the systematic mapping studies

                          Researcher 2
                    INCLUDE   EXCLUDE   Total
Researcher 1
  INCLUDE               8         4        12
  EXCLUDE               3        35        38
Total                  11        39        50

The above table shows the agreement between the researchers. The kappa value for our analysis is computed below.


N denotes the total number of selected research papers (50), n the number of researchers/raters (2), and k the number of categories. The kappa value obtained after calculation is 0.605 (substantial agreement between the researchers).

Table: Kappa agreement for the SLR studies

                          Researcher 2
                    INCLUDE   EXCLUDE   Total
Researcher 1
  INCLUDE               4         2         6
  EXCLUDE               1        18        19
Total                   5        20        25

The kappa value obtained after calculation is 0.651 (substantial agreement between the researchers).
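Both kappa values reported above can be reproduced directly from the contingency tables. The following sketch is ours, not part of the original analysis; it computes Cohen's kappa for two raters from a 2x2 include/exclude table:

    def cohen_kappa(table):
        """Cohen's kappa for two raters from a 2x2 contingency table.

        table[i][j] = number of studies rated i by researcher 1 and j by
        researcher 2, with index 0 = include and 1 = exclude.
        """
        n = sum(sum(row) for row in table)
        # Observed proportion of agreement p: the diagonal cells.
        p = (table[0][0] + table[1][1]) / n
        # Marginal totals for each researcher.
        r1 = [sum(row) for row in table]         # researcher 1
        r2 = [sum(col) for col in zip(*table)]   # researcher 2
        # Proportion of agreement expected by chance, from the marginals.
        pe = sum(a * b for a, b in zip(r1, r2)) / n**2
        return (p - pe) / (1 - pe)

    print(round(cohen_kappa([[8, 4], [3, 35]]), 3))  # 0.605, systematic mapping
    print(round(cohen_kappa([[4, 2], [1, 18]]), 3))  # 0.651, SLR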

APPENDIX B Systematic mapping studies

REFERENCE NUMBER    ARTICLE NAME

SM 1 Takahashi,J., An automated oracle for verifying GUI objects. SIGSOFT Softw. Eng. Notes, 2001. 26(4): p. 83-88

SM 2 Hana.S, et al., Automated testing of stochastic systems: a statistically grounded approach, in Proceedings of the 2006 international symposium on Software testing and analysis. 2006, ACM: Portland, Maine, USA. p. 215-224.

SM 3 Al Dallal, J. and P. Sorenson. System testing for object-oriented frameworks using hook technology. in Proceedings ASE 2002. 17th IEEE International Conference on Automated Software Engineering, 23-27 Sept. 2002. 2002. Los Alamitos, CA, USA: IEEE Comput. Soc.

SM 4 Al Shaar, H. and R. Haraty, Modeling and automated blackbox regression testing of Web applications. Journal of Applied Information Technology, 2005. 4: p. 1182-98.

SM 5 Ali, S., et al., A Systematic Review of the Application and Empirical Investigation of Search-Based Test Case Generation. Software Engineering, IEEE Transactions on, 2010. 36(6): p. 742-762.

SM 6 Alonso, D., et al., Automatic Ada Code Generation Using a Model-Driven Engineering Approach, in Reliable Software Technologies – Ada Europe 2007, N. Abdennahder and F. Kordon, Editors. 2007, Springer Berlin / Heidelberg. p. 168-179.

SM 7 Andrews, J.H., F.C.H. Li, and T. Menzies, Nighthawk: a two-level genetic-random unit test data generator, in Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering. 2007, ACM: Atlanta, Georgia, USA. p. 144-153.

SM 8 Arcuri, A., On the automation of fixing software bugs, in Companion of the 30th international conference on Software engineering. 2008, ACM: Leipzig, Germany. p. 1003-1006.

SM 9 Artho, C. and A. Biere, Advanced unit testing: how to scale up a unit test framework, in Proceedings of the 2006 international workshop on Automation of software test. 2006, ACM: Shanghai, China. p. 92-98.

SM 10 Auguston, M., J.B. Michael, and M.-T. Shing, Environment behavior models for scenario generation and testing automation. SIGSOFT Softw. Eng. Notes, 2005. 30(4): p. 1-6.

SM 11 Awedikian, Z., K. Ayari, and G. Antoniol, MC/DC automatic test input data generation, in Proceedings of the 11th Annual conference on Genetic and evolutionary computation. 2009, ACM: Montreal, Qubec, Canada. p. 1657-1664.

SM 12 Baker, P. and C. Jervis. Testing UML2.0 models using TTCN-3 and the UML2.0 testing profile. in SDL 2007: Design for Dependable Systems. 13th International SDL Forum, 18-21 Sept. 2007. 2007. Berlin, Germany: Springer-Verlag.

SM 13 Berndt, D.J. and A. Watkins, Investigating the performance of genetic algorithm-based software test case generation, in Proceedings of the Eighth IEEE international conference on High assurance systems engineering. 2004, IEEE Computer Society: Tampa, Florida. p. 261-262.

SM 14 Bierbaum, A., P. Hartling, and C. Cruz-Neira, Automated testing of virtual reality application interfaces, in Proceedings of the workshop on Virtual environments 2003. 2003, ACM: Zurich, Switzerland. p. 107-114.

SM 15 Tom Wissink, Carlos Amaro, "Successful Test Automation for Software Maintenance," icsm, pp.265-266, 22nd IEEE International Conference on Software Maintenance (ICSM'06), 2006.


SM 16 Botella, B., et al. Automating structural testing of c programs: Experience with pathcrawler. 2009.

SM 17 Brooks, P.A. and A.M. Memon, Automated gui testing guided by usage profiles, in Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering. 2007, ACM: Atlanta, Georgia, USA. p. 333-342.

SM 18 Burnim, J. and K. Sen, Heuristics for Scalable Dynamic Test Generation, in Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering. 2008, IEEE Computer Society. p. 443-446.

SM 19 Cai, L.-Z., et al., Test automation for kernel code and disk arrays with virtual devices, in Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering. 2007, ACM: Atlanta, Georgia, USA. p. 505-508.

SM 20 Cavalli, A., et al. Application of two test generation tools to an industrial case study. in 18th IFIP TC 6/WG 6.1 International Conference on Testing of Communicating Systems, TestCom 2006, May 16, 2006 - May 18, 2006. 2006. New York, NY, United states: Springer Verlag.

SM 21 Chengying, M. and L. Yansheng. CppTest: a prototype tool for testing C/C++ programs. in 2007 2nd International Conference on Availability, Reliability and Security, 10-13 April 2007. 2007. Los Alamitos, CA, USA: IEEE Comput. Soc.

SM 22 Choi, K.C. and G.H. Lee. Automatic test approach of web application for security (AutoInspect). in ICCSA 2006: International Conference on Computational Science and Its Applications, May 8, 2006 - May 11, 2006. 2006. Glasgow, United kingdom: Springer Verlag.

SM 23 Chorng-Shiuh, K., et al. Supporting Tool for Embedded Software Testing. in 2010 10th International Conference on Quality Software (QSIC 2010), 14-15 July 2010. 2010. Los Alamitos, CA, USA: IEEE Computer Society.

SM 24 Kaner, C., J. Bach, and B. Pettichord, Lessons Learned in Software Testing: A Context-Driven Approach, Wiley Computer Publishing, 2002.

SM 25 Ciortea, L., et al., Cloud9: a software testing service. SIGOPS Oper. Syst. Rev., 2010. 43(4): p. 5-10.

SM 26 Ciupa, I., et al., ARTOO: adaptive random testing for object-oriented software, in Proceedings of the 30th international conference on Software engineering. 2008, ACM: Leipzig, Germany. p. 71-80.

SM 27 Coons, K.E., S. Burckhardt, and M. Musuvathi, GAMBIT: effective unit testing for concurrency libraries, in Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. 2010, ACM: Bangalore, India. p. 15-24.

SM 28 Coppit, D. and J. Lian, yagg: an easy-to-use generator for structured test inputs, in Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering. 2005, ACM: Long Beach, CA, USA. p. 356-359.

SM 29 Daniel, B., et al., ReAssert: Suggesting Repairs for Broken Unit Tests, in Proceedings of the 2009 IEEE/ACM International Conference on Automated Software Engineering. 2009, IEEE Computer Society. p. 433-444.

SM 30 Davidsson, M., et al. GERT: an empirical reliability estimation and testing feedback tool. in 15th International Symposium on Software Reliability Engineering, 2-5 Nov. 2004. 2004. Los Alamitos, CA, USA: IEEE Comput. Soc.

SM 31 Bereza-Jarocinski, B., Automated Testing in Daily Build, Swedish Engineering Industries, 2000.

SM 32 M. Fewster, "Common Mistakes in Test Automation", Grove Consultants, 2001.

SM 33 Edwards, A., S. Tucker, and B. Demsky, AFID: An automated approach to collecting software faults. Automated Software Engineering, 2010. 17: p. 347-372.

SM 34 Eunkyoung, J., et al. Automated Test Coverage Measurement for Reactor Protection System Software Implemented in Function Block Diagram. in Computer Safety, Reliability, and Security. 29th International Conference, SAFECOMP 2010, 14-17 Sept. 2010. 2010. Berlin, Germany: Springer Verlag.

SM 35 K. Karhu et al., "Empirical Observations on Software Testing Automation", Lappeenranta University of Technology, International Conference on Software Testing, Verification and Validation, 2009.

SM 36 Giroux, O. and R. Martin P, Detecting increases in feature coupling using regression tests, in Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering. 2006, ACM: Portland, Oregon, USA. p. 163-174.

SM 37 Gutierrez, J., et al., Derivation of Test Objectives Automatically, in Advances in Information Systems Development, W. Wojtkowski, et al., Editors. 2007, Springer US. p. 435-446.

SM 38 C. Persson and N. Yilmaztürk, "Establishment of Automated Regression Testing at ABB: Industrial Experience Report on ‘Avoiding the Pitfalls’," in 19th International Conference on Automated Software Engineering (ASE’04): IEEE Computer Society, 2004.

SM 39 Hu, M. and J. Wang. Application of Automated Testing Tool in GIS Modeling in Software Engineering, 2009. WCSE '09. WRI World Congress on. 2009.

SM 40 Huima, A., Implementing Conformiq Qtronic, in Testing of Software and Communicating Systems, 19th IFIP TC6/WG6.1 International Conference, TestCom 2007, 26-29 June 2007. 2007. Berlin, Germany: Springer-Verlag.

SM 41 Ihantola, P., Test data generation for programming exercises with symbolic execution in Java Path Finder, in Proceedings of the 6th Baltic Sea conference on computing education research: Koli Calling 2006. 2006, ACM: Uppsala, Sweden. p. 87-94.

SM 42 Jiang, B., X. Long, and X. Gao. MobileTest: A tool supporting automatic black box test for software on smart mobile devices. in 29th International Conference on Software Engineering, ICSE'07 - 2nd International Workshop on Automation of Software Test, AST'07, May 20, 2007 - May 26, 2007. 2007. Minneapolis, MN, United states: Inst. of Elec. and Elec. Eng. Computer Society.

SM 43 Jiang, G. and S. Jiang. A quick testing model of web performance based on testing flow and its application. in 2009 6th Web Information Systems and Applications Conference, WISA 2009, September 18, 2009 - September 20, 2009. 2009. Xuzhou, Jiangsu, China: IEEE Computer Society.

SM 44 Koopman, P. and R. Plasmeijer, Fully Automatic Testing with Functions as Specifications, in Central European Functional Programming School, Z. Horváth, Editor. 2006, Springer Berlin / Heidelberg. p. 35-61.

SM 45 Lamari, M., Towards an automated test generation for the verification of model transformations, in Proceedings of the 2007 ACM symposium on Applied computing. 2007, ACM: Seoul, Korea. p. 998-1005.

SM 46 Lammermann, F. and S. Wappler, Benefits of software measures for evolutionary white-box testing, in Proceedings of the 2005 conference on Genetic and evolutionary computation. 2005, ACM: Washington DC, USA. p. 1083-1084.

SM 47 Lanubile, F. and T. Mallardo, Inspecting automated test code: a preliminary study, in Proceedings of the 8th international conference on Agile processes in software engineering and extreme programming. 2007, Springer-Verlag: Como, Italy. p. 115-122.

SM 48 Lappalainen, V., et al., ComTest: a tool to impart TDD and unit testing to introductory level programming, in Proceedings of the fifteenth annual conference on Innovation and technology in computer science education. 2010, ACM: Bilkent, Ankara, Turkey. p. 63-67

SM 49 Leitner, A., et al., Contract driven development = test driven development - writing test cases, in Proceedings of the the 6th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering. 2007, ACM: Dubrovnik, Croatia. p. 425-434.

SM 50 Li, D., et al., The Research on Automatic Generation of Testing Data for Web Service, in 2nd International Conference on Information Science and Engineering (ICISE 2010), 4-6 Dec. 2010. 2010. Piscataway, NJ, USA: IEEE.

SM 51 Li, N., et al. Reggae: Automated test generation for programs using complex regular expressions. 2009.

SM 52 Lijun, S. and Z. Hong, Generating structurally complex test cases by data mutation: a case study of testing an automated modelling tool. Computer Journal, 2009. 52: p. 571-88.

SM 53 Lindsay, W., et al. Automatic test programs generation driven by internal performance counters. in Proceedings. 5th International Workshop on Microprocessor Test and Verification. Common Challenges and Solutions, 9-10 Sept. 2004. 2004. Los Alamitos, CA, USA: IEEE Comput. Soc.

SM 54 Liu, H. and H.B.K. Tan, Automated verification and test case generation for input validation, in Proceedings of the 2006 international workshop on Automation of software test. 2006, ACM: Shanghai, China. p. 29-35.

SM 55 Madeyski, L. and N. Radyk, Judy - A mutation testing tool for Java. IET Software, 2010. 4: p. 32-42.

SM 56 McGill, M., R. Stirewalt, and L. Dillon, Automated Test Input Generation for Software That Consumes ORM Models, in On the Move to Meaningful Internet Systems: OTM 2009 Workshops, R. Meersman, P. Herrero, and T. Dillon, Editors. 2009, Springer Berlin / Heidelberg. p. 704-713.

SM 57 Memon, A.M., Automatically repairing event sequence-based GUI test suites for regression testing. ACM Trans. Softw. Eng. Methodol., 2008. 18(2): p. 1-36.

SM 58 Naslavsky, L. and D.J. Richardson, Using traceability to support model-based regression testing, in Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering. 2007, ACM: Atlanta, Georgia, USA. p. 567-570.

SM 59 Nebut, C., et al. Automated requirements-based generation of test cases for product families. in Proceedings 18th IEEE International Conference on Automated Software Engineering, 6-10 Oct. 2003. 2003. Los Alamitos, CA, USA: IEEE Comput. Soc.

SM 60 Nunes, P., S. Hanazumi, and A. de Melo, OConGraX – Automatically Generating Data-Flow Test Cases for Fault-Tolerant Systems, in Testing of Software and Communication Systems, M. Núñez, P. Baker, and M. Merayo, Editors. 2009, Springer Berlin / Heidelberg. p. 229-234.

SM 61 Oddos, Y., et al., MYGEN: automata-based on-line test generator for assertion-based verification, in Proceedings of the 19th ACM Great Lakes symposium on VLSI. 2009, ACM: Boston Area, MA, USA. p. 75-80.

SM 62 Pacheco, C. and M. Ernst, Eclat: Automatic Generation and Classification of Test Inputs, in ECOOP 2005 - Object-Oriented Programming, A. Black, Editor. 2005, Springer Berlin / Heidelberg. p. 734-734.

SM 63 Parissis, I., et al., Strategies for Automated Specification-Based Testing of Synchronous Software, in Proceedings of the 16th IEEE international conference on Automated software engineering. 2001, IEEE Computer Society. p. 364.

SM 64 Pasternak, B., S. Tyszberowicz, and A. Yehudai, GenUTest: A Unit Test and Mock Aspect Generation Tool, in Hardware and Software: Verification and Testing, K. Yorav, Editor. 2008, Springer Berlin / Heidelberg. p. 252-266.

SM 65 Dalal, S. R., A. Jain, et al. (1999). Model-based testing in practice.

SM 66 Pinte, F., N. Oster, and F. Saglietti, Techniques and tools for the automatic generation of optimal test data at code, model and interface level, in Companion of the 30th international conference on Software engineering. 2008, ACM: Leipzig, Germany. p. 927-928.

SM 67 Qiuming, T., et al. An Automatic Testing Approach for Compiler Based on Metamorphic Testing Technique. in 2010 17th Asia Pacific Software Engineering Conference (APSEC 2010). Software for Improving Quality of Life, 30 Nov.-3 Dec. 2010. Los Alamitos, CA, USA: IEEE Computer Society.

SM 68 Sanchez, E., et al., Automatic generation of test sets for SBST of microprocessor IP cores, in Proceedings of the 18th Annual Symposium on Integrated Circuits and System Design. 2005, ACM: Florianópolis, Brazil. p. 74-79.

SM 69 Santhanam, U., Automating software module testing for FAA certification, in Proceedings of the 2001 annual ACM SIGAda international conference on Ada. 2001, ACM: Bloomington, MN. p. 31-38.

SM 70 Sze, S.K.S. and M.R. Lyu. ATACOBOL-a COBOL test coverage analysis tool and its applications. in Software Reliability Engineering, 2000. ISSRE 2000. Proceedings. 11th International Symposium on. 2000.

SM 71 Taneja, K., Y. Zhang, and T. Xie, MODA: automated test generation for database applications via mock objects, in Proceedings of the IEEE/ACM international conference on Automated software engineering. 2010, ACM: Antwerp, Belgium. p. 289-292.

SM 72 Zhenya Huang and Lisa Carter, "Automated solutions: Improving the effectiveness of software testing", 2003.

SM 73 Vergilio, S., J. Maldonado, and M. Jino, Infeasible paths in the context of data flow based testing criteria: Identification, classification and prediction. Journal of the Brazilian Computer Society, 2006. 12(1): p. 73-88.

SM 74 Wang, Z., et al. TTCN-3 based conformance testing of mobile broadcast business management system in 3G networks. in 21st IFIP International Conference on Testing of Communicating Systems, TESTCOM 2009, and 9th International Workshop on Formal Approaches to Testing of Software, FATES 2009, November 2-4, 2009. Eindhoven, Netherlands: Springer Verlag.

SM 75 Wissink, T. and C. Amaro. Successful test automation for software maintenance. in ICSM 2006: 22nd IEEE International Conference on Software Maintenance, September 24, 2006 - September 27, 2006. 2006. Philadelphia, PA, United states: IEEE Computer Society.

SM 76 Xie, Q. and A.M. Memon, Using a pilot study to derive a GUI model for automated testing. ACM Trans. Softw. Eng. Methodol., 2008. 18(2): p. 1-35.

SM 77 Burdonov, I., A. Kossatchev, et al. (1999), KVEST: automated generation of test suites from formal specifications.

SM 78 Koch, B., J. Grabowski, et al. (1999), Autolink-a tool for automatic test generation from SDL specifications.

SM 79 Chi Keen, L., T. Y. Chen, et al. (1999), Automated test case generation for BDI agents.

SM 80 du Bousquet, L., F. Ouabdesselam, et al. 1999, Lutess: a specification-driven testing environment for synchronous software

SM 81 Aichernig, B. K., H. Brandl, et al., 2011.

SM 82 Al Dallal, J. and P. Sorenson, System testing for object-oriented frameworks using hook technology.

SM 83 Al Dallal, J. and P. Sorenson, Testing software assets of framework-based product families during application engineering stage, 2008.

SM 84 Albert, E., M. Gómez-Zamalloa, et al. (2010). PET: a partial evaluation-based test case generation tool for Java bytecode.

SM 85 Alshraideh, M. (2008). A complete automation of unit testing for JavaScript programs.

SM 86 Andrews, J. H., S. Haldar, et al. (2006), Tool support for randomized unit testing.

SM 87 Wang, S. (2008), Comparison of Unit-Level Automated Test Generation Tools.

SM 88 Roy Patrick Tan and Stephen Edwards (2008), Evaluating Automated Unit Testing in Sulu.

SM 89 Arcuri, A. and Y. Xin (2007), On Test Data Generation of Object-Oriented Software.

SM 90 Arunkumar, B. and N. K. Anand (2009), Development of an automated testing software for real time systems.

SM 91 Auguston, M., J. B. Michael, et al. (2005), Environment behavior models for scenario generation and testing automation.

SM 92 Baker, P., D. Evans, et al. (2006), TRex - the refactoring and metrics tool for TTCN-3 test specifications.

SM 93 Baker, P. and C. Jervis (2007). Early UML Model Testing using TTCN-3 and the UML Testing Profile.

SM 94 Baker, P. and C. Jervis (2007). Testing UML2.0 models using TTCN-3 and the UML2.0 testing profile.

SM 95 Bashir, M. F. and S. H. K. Banuri (2008), Automated model based software Test Data Generation System.

SM 96 Belinfante, A., L. Frantzen, et al. (2005), Tools for test case generation.

SM 97 Berner, S., R. Weber, et al. (2005), Observations and lessons learned from automated testing.

SM 99 Bertolino, A., G. D. Angelis, et al. (2008). VCR: Virtual Capture and Replay for Performance Testing.

SM 100 Bin, W., Z. Chunhua, et al. (2010), MDA-based automated generation.

SM 101 Bird, C. and A. Sermon (2001), "An XML-based approach to automated software testing."

SM 102 Bokil, P., P. Darke, et al. (2009), Automatic test data generation for C programs.

SM 103 Bucur, S., V. Ureche, et al. (2011), Parallel symbolic execution for automated real-world software testing.

SM 104 Catelani, M., L. Ciani, et al. (2008), A Novel Approach To Automated Testing To Increase Software Reliability.

SM 105 Chakrabarti, A. and P. Godefroid (2006), Software partitioning for effective automated unit testing.

SM 106 Chang, J.-R. and C.-Y. Huang (2007), A study of enhanced MC/DC coverage criterion for software testing.

SM 107 Changhyun, B., J. Joongsoon, et al. (2007), A case study of black-box testing for embedded software using test automation tool.

SM 108 Cheon, Y. and G. Leavens (2006), A Simple and Practical Approach to Unit Testing: The JML and JUnit Way.

SM 109 Chetali, B. and Q.-H. Nguyen (2009), An automated testing experiment for layered embedded C code.

SM 110 Chevalley, P. and P. Thevenod-Fosse (2001), Automated generation of statistical test cases from UML state diagrams.

SM 111 Cho, Y. and J. Choi (2008). An Embedded Software Testing Tool Supporting Multi-paradigm Views.

SM 112 Coelho, R., E. Cirilo, et al. (2007). JAT: a test automation framework for multi-agent systems.

SM 113 Csallner, C. and Y. Smaragdakis (2004), JCrasher: an automatic robustness tester for Java.

SM 114 Cunha, M., A. C. R. Paiva, et al. (2010), PETTool: a pattern-based GUI testing tool.

SM 115 Dallal, J. A. (2009), Automation of object-oriented framework application testing.

SM 116 Damm, L.-O., L. Lundberg, et al. (2005), Introducing test automation and test-driven development: An experience report.

SM 117 Dan, H., Z. Lu, et al. (2009). Test-data generation guided by static defect detection.

SM 118 Daniel, B. and M. Boshernitsan (2008). Predicting Effectiveness of Automatic Testing Tools.

SM 119 De Matos, E. and T. Sousa (2010). From formal requirements to automated web testing and prototyping.

SM 120 Degrave, F. (2008). Development of an Automatic Testing Environment for Mercury.

SM 121 Dias Neto, A. C., R. Subramanyan, et al. (2007), A survey on model-based testing approaches.

SM 122 Dillon, E. and C. Meudec (2004), Automatic test data generation from embedded C code.

SM 123 Dunning, S. and D. Sawyer (2011), A little language for rapidly constructing automated performance tests.

SM 124 D. Mosley and B. Posey. Just Enough Software Test Automation. Prentice Hall PTR, 2002.

SM 125 Eddins, S. L. (2009), Automated Software Testing for MATLAB.

SM 126 Fecko, M. A. and C. M. Lott (2002), Lessons learned from automating tests for an operations support system.

SM 127 Ferrari, F. C., E. Y. Nakagawa, et al. (2010), Automating the mutation testing of aspect-oriented Java programs.

SM 128 Galler, S. J., C. Zehentner, et al. (2010), AIana: an AI planning system for test data generation.

SM 129 Gladisch, C., S. Tyszberowicz, et al. (2010), Generating Regression Unit Tests Using a Combination of Verification and Capture.

SM 130 Godefroid, P., N. Klarlund, et al. (2005), DART: directed automated random testing.

SM 131 Grechanik, M., X. Qing, et al. (2009), Experimental assessment of manual versus tool-based maintenance of GUI-directed test scripts.

SM 132 Grechanik, M., X. Qing, et al. (2009), Maintaining and evolving GUI-directed test scripts.

SM 133 Gross, H., P. M. Kruse, et al. (2009), Evolutionary white-box software test with the EvoTest Framework, a progress report.

SM 134 Guangzhu, J. and J. Shujuan (2009), A quick testing model of web performance based on testing flow and its application.

SM 135 Guelfi, N. and B. Ries (2008), Selection, evaluation and generation of test cases in an industrial setting: a process and a tool.

SM 136 Guiotto, A., B. Acquaroli, et al. (2003), MaTeLo: Automated Testing Suite for Software Validation.

SM 137 Gittens, M., et al., Focused iterative testing: a test automation case study, in Proceedings of the 1st international workshop on Testing database systems. 2008, ACM: Vancouver, British Columbia, Canada. p. 1-6.

SM 138 Hartman, A., M. Katara, et al. (2007), Domain specific approaches to software test automation.

SM 139 Horwitz, S. (2002), Tool Support for Improving Test Coverage.

SM 140 Huiqun, Z., S. Jing, et al. (2009), Study of Methodology of Testing.

SM 141 Hwang, I., A. R. Cavalli, et al. (2011), Applying formal methods to PCEP: An industrial case study from modelling to test generation.

SM 142 Im, K., T. Im, et al. (2008). Automating test case definition using a domain specific language.

SM 143 Janssen, T., R. Abreu, et al. (2009), Zoltar: A Toolset for Automatic Fault Localization.

SM 144 Javed, A. Z., P. A. Strooper, et al. (2007), Automated generation of test cases using model-driven architecture.

SM 145 Jaygarl, H., S. Kim, et al. (2010), OCAT: Object Capture based Automated Testing.

SM 146 Joshi, S. and A. Orso (2007). SCARPE: A technique and tool for selective capture and replay of program executions.

SM 147 Kansomkeat, S. and W. Rivepiboon (2003), Automated-generating test case using UML state chart diagrams.

SM 148 Lakhotia, K., P. McMinn, et al. (2009), Automated test data generation for coverage: Haven't we solved this problem yet?

SM 149 Lakhotia, K., P. McMinn, et al. (2010). An empirical investigation into branch coverage for C programs using CUTE and AUSTIN.

SM 150 Last, M., M. Friedman, et al. (2003). The data mining approach to automated software testing.

SM 151 Leitner, A., H. Ciupa, et al. (2007). Reconciling manual and automated testing: The AutoTest experience.

SM 152 Leow, W. K., S. C. Khoo, et al. (2004), Automated Generation of Test Programs from Closed Specifications of Classes and Test Cases.

SM 153 Liu, C. (2000). Platform-independent and tool-neutral test descriptions for automated software testing.

SM 154 Lizhe, C. and L. Qiang (2010), Automated test case generation from use case: A model based approach.

SM 155 Machado, P. and A. Sampaio (2007). Automatic Test-Case Generation.

SM 156 Malekzadeh, M. and R. N. Ainon (2010), An automatic test case generator for testing safety-critical software systems.

SM 157 McGee, P. and C. Kaner (2004), Experiments with high volume test automation.

SM 158 Mustafa, K. M., R. E. Al-Qutaish, et al. (2009), Classification of Software Testing Tools Based on the Software Testing Methods.

SM 159 Nagowah, L. and P. Roopnah (2010), AsT - a simple automated system testing tool.

SM 160 Netkow, M. H. and D. Brylow (2010), Xest: an automated framework for regression testing of embedded software.

SM 161 Oriat, C. (2005). Jartege: a tool for random generation of unit tests for Java classes.

SM 162 Pérez Lamancha, B., et al. (2009), Automated model-based testing using the UML testing profile and QVT.

SM 163 Paiva, A., J. Faria, et al. (2005), A Model-to-Implementation Mapping Tool for Automated Model-Based GUI Testing.

SM 164 Qian, Y., J. J. Li, et al. (2009), A survey of coverage-based testing tools.

SM 165 Ramler, R. and K. Wolfmaier (2006), Economic perspectives in test automation: balancing automated and manual testing with opportunity cost.

SM 166 Beyer, D., A. J. Chlipala, et al. (2004), Generating Tests from Counterexamples.

SM 167 E. Dustin, J. Rashka and J. Paul, “Automated Software Testing: Introduction, Management and Performance”. Addison-Wesley Pub. Co., Inc., Boston, MA, USA, 1999.

SM 168 M. Fewster and D. Graham, “Software Test Automation: Effective Use of Test Execution Tools”. 1999.

SM 169 B. Pettichord, “Seven Steps to Test Automation Success”, STAR West, San Jose, CA, USA, November 1999.

SM 170 Bousquet, L. d. and N. Zuanon (1999). An Overview of Lutess: A Specification-Based Tool for Testing Synchronous Software.

SM 171 Krustev, D. N. (1999), Software Test Generation Using Refinement Types.

SM 172 Eugenia Diaz, Javier Tuya, and Raquel Blanco. Automated Software Testing Using a Metaheuristic Technique Based on Tabu Search. In Proceedings of the 18th IEEE International Conference on Automated Software Engineering (ASE 2003), pp. 310-313, 2003.

SM 173 Nguyen Tran Sy and Y. Deville. Automatic test data generation for programs with integer and float variables. In Proceedings of the 16th International Conference on Automated Software Engineering (ASE 2001), pp. 3-21, 2001.

SM 174 Okika, J.C., et al., Developing a TTCN-3 test harness for legacy software, in Proceedings of the 2006 international workshop on Automation of software test. 2006, ACM: Shanghai, China. p. 104-110.

SM 175 P. Pocatilu, “Automated Software Testing Process”, Economic Informatics Department, Academy of Economic Studies, Bucharest, 2002.

SM 176 Stephen H. Edwards. A framework for practical, automated black-box testing of component-based software. Software Testing, Verification and Reliability; 11:97–111, 2001.

SM 177 Burnstein, I., “Practical Software Testing”, Springer International Edition, 2003.

SM 178 Neukirchen, H., et al., Quality assurance for TTCN-3 test specifications. Software Testing, Verification and Reliability, 2008. 18: p. 71-97.

SM 179 Neukirchen, H., B. Zeiss, and J. Grabowski, An approach to quality engineering of TTCN-3 test specifications. International Journal on Software Tools for Technology Transfer, 2008. 10: p. 309-326.

SM 180 Lever, S., Eclipse Platform Integration of Jester – The JUnit Test Tester, in Extreme Programming and Agile Processes in Software Engineering, H. Baumeister, M. Marchesi, and M. Holcombe, Editors. 2005, Springer Berlin / Heidelberg. p. 1277-1281.

SM 181 Zoffmann, G., et al., A Classification Scheme for Software Verification Tools with Regard to RTCA/DO-178B, in Computer Safety, Reliability and Security, U. Voges, Editor. 2001, Springer Berlin / Heidelberg. p. 166-175.

SM 182 Zhiliang, W., et al. TTCN-3 Based Conformance Testing of Mobile Broadcast Business Management System in 3G Networks. in Testing of Software and Communication Systems. 21st IFIP WG 6.1 International Conference, TESTCOM 2009, 9th International Workshop, FATES 2009, 2-4 Nov. 2009. 2009. Berlin, Germany: Springer Verlag.

SM 183 Zeiss, B., et al. Refactoring and metrics for TTCN-3 test suites. in 5th International Workshop on System Analysis and Modeling: Language Profiles, SAM 2006, May 31, 2006 - June 2, 2006. 2006. Kaiserslautern, Germany: Springer Verlag.

SM 184 Yuan, X. and A.M. Memon, Using GUI Run-Time State as Feedback to Generate Test Cases, in Proceedings of the 29th international conference on Software Engineering. 2007, IEEE Computer Society. p. 396-405.

SM 185 Yuan, H. and T. Xie, Substra: a framework for automatic generation of integration tests, in Proceedings of the 2006 international workshop on Automation of software test. 2006, ACM: Shanghai, China. p. 64-70.

SM 186 Yang, Q., J.J. Li, and D. Weiss. A survey of coverage based testing tools. in 1st International Workshop on Automation of Software Test, AST'06, co-located with the 28th International Conference on Software Engineering, ICSE 2006, May 20, 2006 - May 28, 2006. 2006. Shanghai, China: IEEE Computer Society.

SM 187 Xuezhi, X. and J. Fan. GUI Test Case Definition with TTCN-3. in Computational Intelligence and Software Engineering, 2009. CiSE 2009. International Conference on. 2009.

SM 188 Xie, T. and J. Zhao, A framework and tool supports for generating test inputs of AspectJ programs, in Proceedings of the 5th international conference on Aspect-oriented software development. 2006, ACM: Bonn, Germany. p. 190-201.

SM 189 Xie, T. and D. Notkin, Tool-assisted unit-test generation and selection based on operational abstractions. Automated Software Engineering, 2006. 13(3): p. 345-371.

SM 190 Wu, H. and J. Gray, Automated generation of testing tools for domain-specific languages, in Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering. 2005, ACM: Long Beach, CA, USA. p. 436-439.

SM 191 Wu, H., Grammar-driven generation of domain-specific language testing tools, in Companion to the 20th annual ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications. 2005, ACM: San Diego, CA, USA. p. 210-211.

SM 192 Wloka, J., B.G. Ryder, and F. Tip, JUnitMX - A change-aware unit testing tool, in Proceedings of the 31st International Conference on Software Engineering. 2009, IEEE Computer Society. p. 567-570.

SM 193 Wiederseiner, C., et al., An Open-Source Tool for Automated Generation of Black-Box xUnit Test Code and Its Industrial Evaluation, in Testing – Practice and Research Techniques, L. Bottaci and G. Fraser, Editors. 2010, Springer Berlin / Heidelberg. p. 118-128.

SM 194 Wick, M., D. Stevenson, and P. Wagner, Using testing and JUnit across the curriculum. SIGCSE Bull., 2005. 37(1): p. 236-240.

SM 195 Vieira, M., et al., Automation of GUI testing using a model-driven approach, in Proceedings of the 2006 international workshop on Automation of software test. 2006, ACM: Shanghai, China. p. 9-14.

SM 196 Thomsen, C. and T. Pedersen, ETLDiff: A Semi-automatic Framework for Regression Test of ETL Software, in Data Warehousing and Knowledge Discovery, A. Tjoa and J. Trujillo, Editors. 2006, Springer Berlin / Heidelberg. p. 1-12.

SM 197 Janssen, T., R. Abreu, and A.J.C.v. Gemund, Zoltar: A Toolset for Automatic Fault Localization, in Proceedings of the 2009 IEEE/ACM International Conference on Automated Software Engineering. 2009, IEEE Computer Society. p. 662-664.

SM 198 Taneja, K. and T. Xie, DiffGen: Automated Regression Unit-Test Generation, in Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering. 2008, IEEE Computer Society. p. 407-410.

SM 199 Von Mayrhauser, A. and Z. Ning (1999). Automated regression testing using DBT and Sleuth.

SM 200 Strasser, A., H. Mayr, and T. Naderhirn, Harmonizing the test support for object-oriented legacy systems using state-of-the-art test tools, in Proceedings of the 1st Workshop on Testing Object-Oriented Systems. 2010, ACM: Maribor, Slovenia. p. 1-7.

SM 201 Stotts, D., M. Lindsey, and A. Antley, An Informal Formal Method for Systematic JUnit Test Case Generation, in Extreme Programming and Agile Methods — XP/Agile Universe 2002, D. Wells and L. Williams, Editors. 2002, Springer Berlin / Heidelberg. p. 365-385.

SM 202 Sprenkle, S., et al., Automated replay and failure detection for web applications, in Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering. 2005, ACM: Long Beach, CA, USA. p. 253-262.

SM 203 Sommerlad, P. and E. Graf, CUTE: C++ unit testing easier, in Companion to the 22nd ACM SIGPLAN conference on Object-oriented programming systems and applications companion. 2007, ACM: Montreal, Quebec, Canada. p. 783-784.

SM 204 Seljimi, B. and I. Parissis, Automatic generation of test data generators for synchronous programs: Lutess V2, in Workshop on Domain specific approaches to software test automation: in conjunction with the 6th ESEC/FSE joint meeting. 2007, ACM: Dubrovnik, Croatia. p. 8-12.

SM 205 Scott, H. and C. Wohlin, Capture-recapture in software unit testing: a case study, in Proceedings of the Second ACM-IEEE international symposium on Empirical software engineering and measurement. 2008, ACM: Kaiserslautern, Germany. p. 32-40.

SM 206 Scollo, G. and S. Zecchini, Architectural Unit Testing. 2005: Elsevier.

SM 207 Børge Haugset and Geir Kjetil Hanssen, Automated Acceptance Testing: a Literature Review and an Industrial Case Study.

SM 208 Schieferdecker, I. and T. Vassiliou-Gioles, Tool supported test frameworks in TTCN-3, in Eighth International Workshop on Formal Methods for Industrial Critical Systems (FMICS'03), June 5, 2003 - June 7, 2003. 2003. Roros, Norway: Elsevier.

SM 209 Sauvé, J.P., et al., EasyAccept: a tool to easily create, run and drive development with automated acceptance tests, in Proceedings of the 2006 international workshop on Automation of software test. 2006, ACM: Shanghai, China. p. 111-117.

SM 210 Sarma, M., et al., Model-based testing in industry: a case study with two MBT tools, in Proceedings of the 5th Workshop on Automation of Software Test. 2010, ACM: Cape Town, South Africa. p. 87-90.

SM 211 Saglietti, F. and F. Pinte, Automated unit and integration testing for component-based software systems, in Proceedings of the International Workshop on Security and Dependability for Resource Constrained Embedded Systems. 2010, ACM: Vienna, Austria. p. 1-6.

SM 212 Ruth, M.E., Concurrency in a decentralized automatic regression test selection framework for web services, in Proceedings of the 15th ACM Mardi Gras conference: From lightweight mash-ups to lambda grids: Understanding the spectrum of distributed computing requirements, applications, tools, infrastructures, interoperability, and the incremental adoption of key capabilities. 2008, ACM: Baton Rouge, Louisiana. p. 1-8.

SM 213 Ruilian, Z., M.R. Lyu, and M. Yinghua, Automatic string test data generation for detecting domain errors. Software Testing, Verification and Reliability, 2010. 20: p. 209-36.

SM 214 Rubinov, K., Generating integration test cases automatically, in Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering. 2010, ACM: Santa Fe, New Mexico, USA. p. 357-360.

SM 215 Jeng, B. and I. Forgacs. An automatic approach of domain test data generation. The Journal of Systems and Software 49: 97-112, 1999.

SM 216 René, et al., Automating software tests with partial oracles in integrated environments, in Proceedings of the 5th Workshop on Automation of Software Test. 2010, ACM: Cape Town, South Africa. p. 91-94.

SM 217 Richard Torkar, “Towards Automated Software Testing: Techniques, Classifications and Frameworks”, Blekinge Institute of Technology, 2006.

SM 218 Andrews, J.H., et al. Tool support for randomized unit testing. in 1st International Workshop on Random Testing, RT'06, July 20, 2006. 2006. Portland, ME, United States: Association for Computing Machinery.

SM 219 Arantes, A.O., et al. Test case generation for critical systems through a collaborative web-based tool. in 2008 International Conference on Computational Intelligence for Modelling Control and Automation, CIMCA 2008, December 10, 2008 - December 12, 2008. 2008. Vienna, Austria: IEEE Computer Society.

SM 220 Arantes, A.O., et al. Automatic test case generation through a collaborative Web application. in IASTED International Conference on Internet and Multimedia Systems and Applications, 17-19 March 2008. 2008. Anaheim, CA, USA: ACTA Press.

SM 221 Rice, Randall, “Surviving the Top 10 Challenges of Software Test Automation”, Chicago Quality Assurance Association, May 18, 2004.

SM 222 Bertolino, A., et al. Automatic test data generation for XML schema-based partition testing. in 29th International Conference on Software Engineering, ICSE'07 - 2nd International Workshop on Automation of Software Test, AST'07, May 20, 2007 - May 26, 2007. 2007. Minneapolis, MN, United States: IEEE Computer Society.

SM 223 Briand, L.C., Y. Labiche, and S. He, Automating regression test selection based on UML designs. Information and Software Technology, 2009. 51: p. 16-30.

SM 224 Colin, S., B. Legeard, and F. Peureux, Preamble computation in automated test case generation using constraint logic programming. Software Testing, Verification and Reliability, 2004. 14: p. 213-35.

SM 225 Feliachi, A. and H. Le Guen. Generating transition probabilities for automatic model-based test generation. In Third IEEE International Conference on Software Testing, Verification and Validation (ICST 2010), 6-10 April 2010. 2010. Los Alamitos, CA, USA: IEEE Computer Society.

SM 226 Hoffman, D., P. Strooper, and L. White. Boundary values and automated component testing. Software Testing, Verification and Reliability 9: 3–26, 1999.

SM 227 Pargas, R. P., M. J. Harrold, and R. R. Peck. Test-data generation using genetic algorithms. Software Testing, Verification and Reliability 9: 263–282, 1999.

APPENDIX C Systematic Literature Review studies

ARTICLE NUMBER ARTICLE NAME

SLR1 M. Fewster and D. Graham, “Software Test Automation: Effective Use of Test Execution Tools”. 1999.

SLR2 Choi, K.C. and G.H. Lee. Automatic test approach of web application for security. in ICCSA 2006: International Conference on Computational Science and Its Applications, Glasgow, United Kingdom: Springer Verlag, 2006.

SLR3 Tom Wissink and Carlos Amaro, "Successful Test Automation for Software Maintenance," ICSM, pp. 265-266, 22nd IEEE International Conference on Software Maintenance, 2006.

SLR4 Burnim, J. and K. Sen, Heuristics for Scalable Dynamic Test Generation, in Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering, IEEE Computer Society. p. 443-446, 2008.

SLR5 K. Karhu, et al., “Empirical Observations on Software Testing Automation”, Lappeenranta University of Technology, International Conference on Software Testing Verification and Validation, 2009.

SLR6 C. Persson and N. Yilmaztürk, "Establishment of Automated Regression Testing at ABB: Industrial Experience Report on ‘Avoiding the Pitfalls’," in 19th International Conference on Automated Software Engineering (ASE’04): IEEE Computer Society, 2004.

SLR7 Lijun, S. and Z. Hong, Generating structurally complex test cases by data mutation: a case study of testing an automated modeling tool. Computer Journal, 2009. 52: p. 571-88.

SLR8 Alshraideh, M. (2008). A complete automation of unit testing for JavaScript programs.

SLR9 Roy Patrick Tan and Stephen Edwards (2008), Evaluating Automated Unit Testing in Sulu.

SLR10 Bashir, M. F. and S. H. K. Banuri (2008), Automated model based software Test Data Generation System.

SLR11 Berner, S., R. Weber, et al. (2005), Observations and lessons learned from automated testing.

SLR12 Coelho, R., E. Cirilo, et al. (2007). JAT: a test automation framework for multi-agent systems.

SLR13 Dallal, J. A. (2009), Automation of object-oriented framework application testing.

SLR14 Dan, H., Z. Lu, et al. (2009). Test-data generation guided by static defect detection.

SLR15 Fecko, M. A. and C. M. Lott (2002), Lessons learned from automating tests for an operations support system.

SLR16 Kansomkeat, S. and W. Rivepiboon (2003), Automated-generating test case using UML state chart diagrams.

SLR17 Leitner, A., H. Ciupa, et al. (2007). Reconciling manual and automated testing: The AutoTest experience.

SLR18 Liu, C. (2000). Platform-independent and tool-neutral test descriptions for automated software testing.

SLR19 Malekzadeh, M. and R. N. Ainon (2010), An automatic test case generator for testing safety-critical software systems.

SLR20 M. Fewster and D. Graham, “Software Test Automation: Effective Use of Test Execution Tools”. 1999.

SLR21 B. Pettichord, “Seven Steps to Test Automation Success”, STAR West, San Jose, CA, USA, November 1999.

SLR22 Bousquet, L. d. and N. Zuanon (1999). An Overview of Lutess: A Specification-Based Tool for Testing Synchronous Software.

SLR23 P. Pocatilu, “Automated Software Testing Process”, Economic Informatics Department, Academy of Economic Studies, Bucharest, 2002.

SLR24 Børge Haugset and Geir Kjetil Hanssen, Automated Acceptance Testing: a Literature Review and an Industrial Case Study. 2008.

SLR25 Saglietti, F. and F. Pinte, Automated unit and integration testing for component-based software systems, in Proceedings of the International Workshop on Security and Dependability for Resource Constrained Embedded Systems. 2010, ACM: Vienna, Austria. p. 1-6.

SLR26 Rice, Randall, “Surviving the Top 10 Challenges of Software Test Automation”, Chicago Quality Assurance Association, May 18, 2004.

APPENDIX D Survey Questionnaire

I. Basic Information

This survey aims to study the benefits and challenges of automated software testing. By completing this survey, you are helping to develop knowledge that can be used by researchers and practitioners in the field of automated software testing. Thus, we also encourage you to invite your colleagues to participate in this survey by sending them this survey link: http://www.surveymonkey.com/s/ZYV66X6. The questionnaire has 24 questions in total and takes about 10 minutes of your time. All data entered is kept confidential. Only the researchers have access to the raw data, which will not be shared or distributed. The results of the survey will be published in summarized form only. If you have any questions regarding this survey, please do not hesitate to contact us via email. Thank you in advance for your participation in the survey.

Kind Regards,
Dudekula Mohammad Rafi and Kiran Moses (Master's students), Kai Petersen (supervisor)
Software Engineering Research Lab, Blekinge Institute of Technology (BTH), Sweden
Email: [email protected], [email protected], [email protected]

II. The following questions capture the context of software development in your organization

1. Name of your Company (optional):

2. Educational level

Educational Level

A - Bachelors of Arts/Sciences
B - Some Graduate Work
C - Masters Degree
D - PhD

3. Role/Occupation (you can choose more than 1 type)

Business Analyst
Programmer
Project Manager
Quality Assurance (Tester)
Researcher
System Analyst
System Designer

4. What type of application domain does your software development team develop? (You can choose more than 1 type)

Defense (military)
Embedded System
ERP
Finance
Games & Entertainment
Healthcare
Mobile
Telecommunication
Web
Other (please specify):

5. Software development methodology used

Software Development Methodology Response count

A – Agile software development methods 71
B – Waterfall Model 29
C – Lean 3
D – Other 12

6. How many years of experience do you have in automated software testing?

III. The following questions are for teams that currently automate testing

7. Which test approach do you use when testing software?

Testing Approach Response count

A – White Box Testing 30
B – Black Box Testing 63
C – OO testing 12
D – Other 10

8. What framework do you use to design your automated tests?

Strategy

A – Linear
B – Data Driven
C – Action Based/Keyword
D – Hybrid
E – No specific Method

9. Testers should have enough technical skills to build successful automation.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 40
AGREE 53
UNCERTAIN 12
DISAGREE 7
COMPLETELY DISAGREE 3

10. Automated testing needs extra effort for designing and maintaining test scripts.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 37
AGREE 64
UNCERTAIN 7
DISAGREE 6
COMPLETELY DISAGREE 1

11. Automated Software Testing provides more confidence in the quality of the product and increases the ability to meet schedules.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 25
AGREE 51
UNCERTAIN 19
DISAGREE 16
COMPLETELY DISAGREE 4

12. Automated testing can improve the product quality by better test coverage

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 28
AGREE 59
UNCERTAIN 20
DISAGREE 7
COMPLETELY DISAGREE 1

13. High reusability of the tests makes automated testing productive

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 46
AGREE 53
UNCERTAIN 13
DISAGREE 3
COMPLETELY DISAGREE 0

14. Compared with manual testing, automated software testing requires a high investment to buy the tools and train the staff to use the tools.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 32
AGREE 56
UNCERTAIN 12
DISAGREE 11
COMPLETELY DISAGREE 6

15. Having complete automation reduces the cost of software testing dramatically and also facilitates continuous testing.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 21
AGREE 50
UNCERTAIN 17
DISAGREE 19
COMPLETELY DISAGREE 4

16. Automated software testing requires less effort on the developer's side, but cannot find complex bugs as manual software testing does.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 19
AGREE 49
UNCERTAIN 19
DISAGREE 24
COMPLETELY DISAGREE 4

17. Automated software testing saves time and cost as it can be re-run again and again, and it is much quicker than manual testing.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 29
AGREE 54
UNCERTAIN 15
DISAGREE 15
COMPLETELY DISAGREE 2

18. Automated software testing facilitates high fault detection.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 9
AGREE 38
UNCERTAIN 35
DISAGREE 29
COMPLETELY DISAGREE 4

19. The investment in application-specific test infrastructure can significantly reduce the extra effort that test automation requires from testers.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 13
AGREE 63
UNCERTAIN 28
DISAGREE 11
COMPLETELY DISAGREE 0

20. Automated software testing enables the repeatability of tests, which gives the possibility to do more tests in less time.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 38
AGREE 59
UNCERTAIN 9
DISAGREE 5
COMPLETELY DISAGREE 2

21. Compared with manual testing, the cost of automated testing is higher, especially at the beginning of the automation process. However, automated software testing can be productive after a period of time.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 42
AGREE 60
UNCERTAIN 8
DISAGREE 4
COMPLETELY DISAGREE 1

22. Most of the automated testing tools available in the market are incompatible and do not provide what you need or fit in your environment.

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 11
AGREE 40
UNCERTAIN 30
DISAGREE 24
COMPLETELY DISAGREE 10

23. Does automated software testing fully replace manual testing?

SCALE FOR SELECTION RESPONSE COUNT

COMPLETELY AGREE 1
AGREE 6
UNCERTAIN 16
DISAGREE 49
COMPLETELY DISAGREE 43

24. What is your level of satisfaction with software test automation?

SCALE FOR SELECTION RESPONSE COUNT

HIGHLY SATISFIED 45
JUST SATISFIED 52
YET TO SEE REAL MAJOR BENEFITS OF TEST AUTOMATION 17
NOT AT ALL SATISFIED 1

APPENDIX E RESEARCH TYPES

YEAR REFERENCES COUNT

1999 SM65,SM77,SM78,SM79,SM80,SM167,SM168,SM169,SM170,SM171,SM215,SM226,SM227 13

2000 SM31,SM70,SM153 03

2001 SM01,SM32,SM63,SM69,SM101,SM110,SM173,SM176,SM181,SM201 10

2002 SM03,SM24,SM82,SM124,SM126,SM139,SM175 07

2003 SM14,SM59,SM72,SM136,SM147,SM150,SM172,SM177,SM208 09

2004 SM13,SM30,SM38,SM53,SM113,SM122,SM152,SM157,SM166,SM221,SM224 11

2005 SM04,SM10,SM28,SM46,SM62,SM68,SM91,SM96,SM97,SM116,SM130,SM161,SM163,SM180,SM190,SM191,SM194,SM202,SM206 19

2006 SM02,SM09,SM15,SM20,SM22,SM36,SM41,SM44,SM54,SM73,SM75,SM86,SM92,SM105,SM108,SM165,SM174,SM183,SM185,SM186,SM188,SM189,SM195,SM196,SM209,SM217,SM218 27

2007 SM06,SM07,SM12,SM17,SM19,SM21,SM37,SM40,SM42,SM45,SM47,SM49,SM58,SM89,SM93,SM94,SM106,SM107,SM112,SM121,SM138,SM144,SM146,SM151,SM155,SM184,SM203,SM204,SM207,SM222 30

2008 SM08,SM18,SM26,SM57,SM64,SM66,SM76,SM83,SM85,SM87,SM88,SM95,SM98,SM99,SM104,SM111,SM118,SM120,SM135,SM137,SM142,SM178,SM179,SM198,SM205,SM212,SM219,SM220 28

2009 SM11,SM16,SM29,SM35,SM39,SM43,SM51,SM52,SM56,SM60,SM61,SM74,SM90,SM102,SM109,SM115,SM117,SM125,SM131,SM132,SM133,SM134,SM140,SM143,SM148,SM158,SM162,SM164,SM182,SM187,SM192,SM197,SM199,SM223 35

2010 SM05,SM25,SM27,SM33,SM34,SM48,SM50,SM55,SM67,SM71,SM84,SM100,SM114,SM119,SM127,SM128,SM129,SM145,SM149,SM154,SM156,SM159,SM160,SM193,SM200,SM210,SM213,SM214,SM216,SM225 31

2011 SM103,SM123,SM141,SM211 04

APPENDIX F TESTING LEVELS

Testing level Article numbers frequency

Unit testing SM2,SM5,SM6,SM7,SM8,SM9,SM11,SM13,SM14,SM16,SM18,SM21,SM23,SM24,SM25,SM27,SM28,SM29,SM30,SM32,SM34,SM40,SM41,SM46,SM47,SM48,SM49,SM51,SM52,SM54,SM56,SM61,SM62,SM64,SM67,SM68,SM69,SM70,SM71,SM73,SM79,SM81,SM83,SM84,SM85,SM86,SM87,SM88,SM89,SM99,SM101,SM105,SM108,SM109,SM112,SM113,SM115,SM117,SM118,SM120,SM122,SM125,SM126,SM128,SM130,SM144,SM133,SM139,SM144,SM148,SM149,SM150,SM161,SM164,SM166,SM172,SM173,SM178,SM179,SM180,SM182,SM183,SM184,SM188,SM189,SM191,SM192,SM193,SM194,SM195,SM197,SM201,SM203,SM205,SM206,SM215,SM218,SM224,SM227 99

System level SM3,SM9,SM10,SM15,SM31,SM35,SM37,SM39,SM60,SM72,SM77,SM78,SM82,SM91,SM92,SM95,SM96,SM97,SM103,SM107,SM114,SM119,SM127,SM131,SM132,SM135,SM136,SM138,SM140,SM141,SM142,SM145,SM152,SM153,SM154,SM155,SM156,SM159,SM162,SM163,SM165,SM170,SM174,SM186,SM187,SM190,SM202,SM204,SM208,SM210,SM219,SM220,SM222,SM226 54

Regression testing SM4,SM17,SM33,SM36,SM38,SM57,SM58,SM69,SM75,SM98,SM106,SM129,SM137,SM146,SM151,SM160,SM172,SM176,SM196,SM198,SM199,SM221,SM225,SM210,SM212,SM223 26

Performance testing SM19,SM20,SM22,SM26,SM42,SM43,SM50,SM55,SM111,SM123,SM134,SM143,SM147,SM213 14

Functional Testing SM1,SM12,SM44,SM45,SM53,SM59,SM63,SM65,SM74,SM76,SM80,SM90,SM93,SM94,SM100,SM102,SM104,SM110,SM116,SM157,SM171 21

Integration SM66,SM109,SM182,SM185,SM211,SM214,SM216 7

Acceptance testing SM207,SM209 2

ALL SM217,SM200,SM181,SM177,SM175,SM168,SM169,SM158,SM124,SM121,SM24 11