DECEMBER 2016 VOL X ISSUE 4






Copyright 2016 LogiGear Corporation. All rights reserved. Reproduction without permission is prohibited.

Submission guidelines are located at:

http://www.logigear.com/magazine/calendar/2016-editorial-calendar-and-submission-guidelines/

Editor-in-Chief: Michael Hackett

Deputy Editor: Christine Paras

Graphic Designer: Tran Dang

Worldwide Offices

United States Headquarters
4100 E 3rd Ave, Ste 150

Foster City, CA 94404

Tel +1 650 572 1400

Fax +1 650 572 2822

Viet Nam Headquarters
1A Phan Xich Long, Ward 2

Phu Nhuan District

Ho Chi Minh City

Tel +84 8 3995 4072

Fax +84 8 3995 4076

Viet Nam, Da Nang
346 Street 2/9

Hai Chau District

Da Nang

Tel +84 511 3655 33

www.LogiGear.com

www.LogiGear.vn

www.LogiGearmagazine.com

Managing Editor: Tiffany McClure


Our plan for the December LogiGear Magazine was to have a forward-looking Trends and Challenges issue. However, whilst assembling our September issue on SMAC, we realized the momentum SMAC was gaining in the industry. We had a large amount of content on our hands from a range of excellent contributors. Thus, we decided to split the SMAC stack into two parts: Part 1, Social and Mobile, in September, and now Part 2, Analytics and Cloud, in December.

We are delighted to announce that we will publish a one-time, special “Year Ahead: Trends and Challenges” issue in January. We are very excited about our first Trends issue. Let’s see what it has in store for us!

The SMAC stack is so important to the future of software development that we chose to close out 2016 covering its four aspects. The expectation that SMAC will have a large business impact is real. Though many predictions say implementation of these systems has already begun and is growing, most say the biggest period of SMAC development lies ahead. Enterprises, products, brands, B2C and B2B companies, and any organization wanting to interact and engage with its users and customers will be using the SMAC paradigm. Every enterprise, from media, games, retail, banking, finance, insurance and so many other industries, has already begun this transformation.

What are the important parts for test teams? Analytics and Cloud are all about the data. In the past, test teams were often divided: some teams focused on functionality, web and app server performance, UI, user experience and test automation, while a different team focused on data, database access, data security and performance. These separate sub-domains in testing are merged in the SMAC world. The analytics are tied to the social and mobile aspects so tightly that they can’t be separated. Frankly, with product development so rapid and teams so lean, there is tremendous career growth prospect for testers who dive into these areas and become more fluent in all four aspects of SMAC.

In analytics, there are now testing needs throughout. This is due to the rise of data science and of data scientists generating massive amounts of data. That data is then spun into metadata to be used by product and business teams.

All this is predicated on cloud solutions for storage and easy access, as well as data infrastructure that also needs testing.

The new field of data science is a lot more complicated than our old, basic testing of CRUD; ETL is now the buzzword. CRUD (create, read, update and delete) is the foundation of data testing. The ability to use a tool, often simply a SQL query analyzer, for CRUD testing was the first entry point into data testing. Now ETL (Extract, Transform and Load) is the entry point for testing your data storage system.
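To make the CRUD entry point concrete, here is a minimal sketch of an automated CRUD round-trip check. The table and values are invented for illustration, and an in-memory SQLite database stands in for whatever data store your product actually uses:

```python
import sqlite3

# Stand-in data store: an in-memory SQLite table. In a real project this
# would be a connection to your product's actual database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Create: insert a row.
conn.execute("INSERT INTO users (id, email) VALUES (1, 'a@example.com')")

# Read: the row we just wrote should come back unchanged.
row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
assert row == ("a@example.com",)

# Update: change the row, then re-read to confirm the change persisted.
conn.execute("UPDATE users SET email = 'b@example.com' WHERE id = 1")
row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
assert row == ("b@example.com",)

# Delete: remove the row and confirm it is gone.
conn.execute("DELETE FROM users WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM users").fetchone() == (0,)
print("CRUD round-trip passed")
```

An ETL test extends the same idea one level up: load a known input, run the transform, and assert on the loaded output rather than on a single row.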

In this issue, we will kick off with Analytics and the Cloud of SMAC. Software testing specialist Deepanshu Agarwal will be sharing his insight about Mobile app testing in the cloud and with analytics. We are pleased to announce the Blogger of the Month, Evgeni Kostadinov, who will guide us in testing data warehouses.

Our 2017 editorial calendar has been released! Please see it here, along with submission guidelines below. We welcome submissions on these topics.

Keep an eye out for our special one-time annual Testing Trends issue in January.

Happy Holidays and all of us at LogiGear Magazine wish you a New Year with growth, opportunity, learning, health and happiness!

LETTER FROM THE EDITOR


IN THE NEWS

CANADIAN IMMIGRATION WEBSITE CRASHES FOLLOWING TRUMP ELECTION

LOGIGEAR ANNOUNCES GAME TESTING SOLUTION

At LogiGear, we embrace test automation wherever possible to help our customers release higher quality products faster. Major game companies have used LogiGear’s test automation tool, TestArchitect, to automate their games. TestArchitect possesses advanced image recognition technologies to help you automate your games from the user’s perspective, making it great for games testing. To learn more about the solution visit logigear.com.

Read more here

ONLINE SALES SOAR FOR BLACK FRIDAY

Friday 25th November once again saw a wave of eager shoppers rushing to grab the discounted goods. But the real action was happening in the digital world as online shoppers had their fingertips ready.

The annual retail shopping bonanza called Black Friday went much smoother than last year, both in stores and online. Online shoppers in the US spent $5.27 billion by the end of Black Friday, while foot traffic in stores fell. Although stores were still packed with shoppers, many confessed to having already purchased their products online and were only leisurely perusing. Read more here

In the hours leading up to a new president being announced for America, Canada’s immigration website began to crash repeatedly. Users were walled by the frustrating "500 - Internal server error" web message because the site could not sustain the large amount of traffic. It went down at 10:30 p.m. Tuesday and became increasingly difficult to access thereafter. In total, 200,000 users were on the website when it crashed, compared to 17,000 users at the same time the previous week. Read more here

LOGIGEAR MAGAZINE 2017 EDITORIAL CALENDAR

LogiGear Magazine has just released its editorial calendar for 2017. The magazine, published on a quarterly basis, dedicates each edition to a particular theme, one of relevance to the dynamic field of software QA. Our plan for 2017: March, Back to Basics; June, Testing in Continuous Delivery; September, Test Automation; December, Outsourcing.

We welcome content from seasoned as well as new authors, QA experts, test engineers and anyone who would like to share their knowledge and insights with the wider software test community. Submitted articles may be original works or ones previously posted on blogs, websites or newsletters, as long as you, the author, hold the rights to have such content published by us. Read more here.


COVER STORY

THE A&C OF SMAC: ANALYTICS AND CLOUD

In the last issue on testing the SMAC stack, we talked about the social and mobile aspects of testing; we will be referring to them in this article. In this issue, Part 2, we focus on the Analytics and Cloud aspects. The goal of this article is to understand a simple landscape of analytics and cloud.

Understanding the Flow

Let’s look at a diagram to walk through a very basic analytics and cloud workflow. We are looking for testing points in the analytics and cloud parts of SMAC.

BY MICHAEL HACKETT

You have an app on your mobile phone. You execute some workflows or use cases. In addition to the functionality of the app, all of these activities send data back to the cloud. From the app, some examples could be:

What social interactions do you perform, and how many? How many Tweets, Instagram posts or Yelp posts?

What payment methods do you use for purchases?

In what location are you using the app?

What is your connection? Wi-Fi, 3G or 4G?

All this data is captured and sent to the data store in the cloud.

Any associated IoT (Internet of Things) devices ranging from garden humidity sensors to heart monitors are gathering and sometimes streaming large amounts of data.

The data that is captured and stored will be defined by the business side of product development. The business will decide which parts of the user’s behavior are meaningful for analysis.

In the SMAC stack, all this data is stored in the cloud. In reality it does not have to be stored in the cloud or take advantage of cloud services, but for our SMAC discussion, it is.

How the data is stored is a topic big enough for a separate examination: weighing the pluses, minuses and costs of structured data and data warehouses vs. unstructured data and Hadoop is a business decision. The methods and tools to test these vary greatly and deserve their own article.

The data is analyzed and spun around by various algorithms, written by data scientists to capture the particular aspects of users and usage that the business requires.

Let’s look at an example to make this workflow more concrete.

Let’s say my mobile phone has an app that controls a watering device for the garden at my house. An IoT device in my garden measures the soil moisture content. The data from this device, depending on the functionality in the app, might be correlated to streamed data from the national weather service on air temperature and relative humidity. This streaming data might be a giant data set stored and manipulated in the cloud. The resulting analytics are then sent back to the business and the Dev team, who pull the data they need to look at user patterns, optimize workflows, and add or remove functionality.

At what points in the diagram on page 6 do we need to insert some testing? Many!

Data Testing

Data testing is a well understood and traditional area in software testing. Typically, data integrity (accuracy and consistency), access and availability, as well as all the defined functional and error handling tests, will be run and automated.

Also, it is important to verify and test that the analytics algorithms are working as expected. It is important to note that testing the correct function of the algorithm is one set of tests, and validating the data science behind the algorithm is very different. The business and data scientists design what the algorithm is collecting, sorting and calculating. The business and data-science correctness of that is not what we are testing. It is very important, but it is not the software testing we do. That doesn’t mean you would never test it: if you are the subject matter expert and the most knowledgeable in that domain, perhaps you would test the data science, but more commonly not.

Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), available in English, Chinese and Japanese, and Global Software Test Automation (HappyAbout Publishing, 2006).

He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

Analytics

The analytics gathered are very useful information for test teams. A very common set of data is click analysis. Click analysis is real usage, not a happy path or modeled workflows; it’s what your users are actually doing. These are real-world scenarios and paths that must be tested, and they are the best to automate since you know they are actual uses. Check these workflows against your test cases for gaps. Depending on the data collected, check that you are testing with real user data, at real peak and low use times, on the correct devices, with the right connectivity. The analytics should validate your test coverage or give you data to fix it, and hopefully reveal gaps, error handling (to be tested) and boundary cases.
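As a sketch of that gap check (the path strings and usage counts here are made up for illustration), you can diff the click paths your analytics report against the paths your automated suite actually exercises:

```python
# Click paths reported by analytics vs. paths covered by automated tests.
# All paths and counts below are hypothetical, for illustration only.
observed_paths = {
    "home > search > product > buy": 5200,
    "home > product > buy": 1900,
    "home > search > product > back > search": 850,
}
automated_paths = {
    "home > search > product > buy",
    "home > product > buy",
}

# Coverage gaps: real user paths with no automated test, busiest first.
gaps = sorted(
    (p for p in observed_paths if p not in automated_paths),
    key=lambda p: observed_paths[p],
    reverse=True,
)
for path in gaps:
    print(f"untested path ({observed_paths[path]} uses): {path}")
```

Sorting by observed usage means the busiest untested workflow is the first one you automate.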

Cloud / Data Warehouse / Hadoop

As I mentioned above, how the data is stored, whether in more traditional relational databases in data warehouses or in more modern Hadoop clusters, is a business decision, but it will have major implications on how you test.

If you are a consumer of cloud services, in addition to the variety of different tools and methods for testing the competing infrastructures, you will be testing normal data storage servers for security, performance, load, concurrency and race conditions.

The providers of cloud services will do most, if not all, of the infrastructure testing here, and have SLAs (service level agreements) promising certain performance and load benchmarks and security attributes they meet. Cloud testing, in many cases, is similar to traditional testing.

Summary

It is important to fully understand the Social, Mobile, Analytics and Cloud stack to test it effectively. The analytics and cloud aspects mainly consist of data, data manipulation and how the data is stored in the cloud. The testing focuses on analytics algorithm functionality testing and normal data integrity tests, as well as common server tests, most commonly performance and security tests. The infrastructure chosen for the data in the cloud will dictate some significant changes to that testing.


By now, most enterprises have used or at least have heard about cloud computing. However, with the advent of Mobile technology and the rapid increase in the number of Mobile users, the need for Mobile Cloud computing is increasing fast. Before adopting this new technology for your own business needs, it is important to understand the benefits of Mobile Cloud computing.

Accessibility | Collaboration | Continuous Testing

Distributed teams are more and more common nowadays. Cloud-based test management platforms make it possible for teams spread across different locations to easily collaborate with each other. You can log in from anywhere in the world, at any time, on any device, and get access to the project. Testers can easily test from different locations and access test reports from anywhere in the world. A central repository for all of your testing efforts that’s updated in real-time makes it infinitely easier to share important data, communicate with each other, and track all of your testing efforts.

You can test 24 hours a day. A central server connects to a series of machines located anywhere you want. Each of these host machines can host up to ‘n’ mobile devices, and every device is made available to anyone who connects to the server. A tester in any company office can connect to the cloud and select the device to test the application on. Say the day starts with the European testers, moves on to the North American team, and ends with the India QA team: this establishes a round-the-clock mobile testing process that won’t stop until your app is on the market.

ARTICLE

The mobile application ecosystem is very dynamic. OEMs are launching new devices and new customizations, and new OS versions are delivered every now and then. This is the constant challenge that most enterprises face.

As new versions of devices and operating systems create capabilities to expand your application, it’s imperative to test your App quickly over an ever expanding variety of devices so your newer versions are as spotless as ever. To achieve this, innovative testing techniques must be implemented to ensure optimal performance and user experience regardless of the user’s handset type, operating system, geographical location and network service provider.

A cloud-based mobile App testing approach can offer enterprises a feasible and viable solution. Cloud-based testing offers Web-based access to a large pool of real handsets and devices connected to live networks spread globally, providing enterprises with end-to-end control for manual and automated testing practices.

BENEFITS OF MOBILE APP TESTING IN THE CLOUD

BY DEEPANSHU AGARWAL


This gives numerous companies, especially startups, a competitive edge. For instance, if they have a globally dispersed team located at the opposite ends of the world, they can still collaborate on the most complex projects using cloud-based tools to test their applications. All in all, this speeds up decision-making, and hence helps in speedy delivery of the project.

Benefits of Virtualization

Cloud-based tools bring in the benefits of virtualization. They enable companies to make optimal use of their resources, with the result that testing is more flexible and efficient. As applications become increasingly complex, virtualization brings in the benefit of resource sharing with reduced capital costs.

Competitive price range

When you compare cloud-based tools to regular test automation tools, you will find that the cloud-based ones are available at a competitive price. This follows from the fact that you need not spend a considerable amount of money to upgrade the hardware of your device(s). Moreover, the option of ‘pay as you use’ lets you use the tools only when necessary, and therefore saves on costs when you are not using them. This works for most companies, especially the ones looking to cut down on their expenses.

Ease of access | User-friendly Interface

Cloud-based test automation tools are ready for use the very moment you buy them. Easy access through the Internet allows team members to work from anywhere, anytime. No more installation woes, setup requirements, hunting for servers, or prepping of hardware to start using them. This reduces a lot of the effort required from IT management teams and puts the focus back on the core functionalities of an enterprise.

More often than not, cloud-based automation tools have an incredibly user-friendly interface. This makes them quite easy to use, even for novice developers, as there is hardly any special training required for the software.

Favors Continuous Integration

Continuous integration means that every time you add a piece of code, you test it and then redeploy it. Cloud testing is ideal for continuous integration. The continuous integration platform orders the tests on various devices within the mobile lab. If they all pass, the mobile app can immediately move to production and release. Cloud testing ensures that you can test larger sets of scenarios right away. New builds can become new versions faster than ever before, benefiting not only the testing team but the entire development team as well.
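The pass/fail gate described above can be sketched in a few lines; the device names and the test runner here are hypothetical stand-ins for whatever your mobile lab and CI platform actually provide:

```python
# Hypothetical per-device results reported back from the mobile lab.
# In a real pipeline these would come from your CI platform's API.
def run_suite_on(device: str) -> bool:
    """Stand-in for dispatching the test suite to one cloud device."""
    known_failures = set()  # imagine this populated by real test runs
    return device not in known_failures

devices = ["Pixel-XL-7.1", "iPhone-7-iOS10", "Galaxy-S7-6.0"]
results = {d: run_suite_on(d) for d in devices}

# The gate: promote the build only if every device in the lab passed.
promote = all(results.values())
print("promote to release" if promote else "block release")
```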

Increase the Test Coverage | Ensure Quality

Nonstop and parallel cloud testing gives you the luxury of expanding the number of scenarios you can cover in the same time period. Cloud testing environments offer highly synchronized and pre-configured architectures. These pre-built environments radically reduce the defects associated with unstable test configurations.

With cloud-based solutions, you can test your App across a huge matrix of devices. This improves the quality of tests to a great extent. Being able to maximize test coverage in the minimum time becomes critical to the success of the mobile application.

It’s Cost Effective

In most cases, cloud applications have minimal or no upfront cost. It is very much a pay-for-use service, which has helped grow adoption of the model, especially for SMBs. Without hefty fees for licensing and upgrades, the cost of adoption is less of a barrier when cash flow is an issue.

There is no need to buy duplicate devices even if you have more than one testing team located in different offices. Cloud-based automation tools do not involve expensive per-seat licensing costs and typically have fewer hardware requirements. This implies minimal capital expenditure and depreciation costs. No capital expenditure and much faster deployment times mean minimal project start-up and infrastructure cost. Wear and tear on any device will reduce its useful life and increase a company’s depreciation and amortization expenses; as long as all of the work is being done remotely, the devices sit in a mobile lab untouched.

It’s Time Efficient

Automation tools, in general, offer the advantages of high productivity and shorter test cycles. Cloud-based automation tools bring the additional advantages of quick setup and tool deployment. With cloud-based testing, there is no additional need for advanced testing tools, server configurations, licensing, and testing resources. All of these features allow you to complete the testing process within the stipulated time frame, or possibly even before it. Unlike traditional tools, they do not involve a lengthy setup and installation process; testing can begin almost immediately from anywhere in the world. Faster testing reduces time to market, which gives companies a big competitive advantage.

Local Networks | Worldwide reach

Using cloud technologies can enable you to evaluate the application’s global readiness and conduct tests across the globe by replicating virtual users in a variety of different locations, to ensure the app can handle users far and wide. You can test your App over different local networks. Using the cloud lets you connect any device to a host machine in different parts of the world. With all of these devices connected to the server, and available to any tester, your team has access to all of these local network carriers to determine whether or not your application works well anywhere.

Cloud testing is perfect for the Mobile economy. It puts an application through a rigorous process, making it ready for the unexpected surprises that will come in traffic and usage.

Parallel execution

Coupled with the right test automation tool, parallel execution enables you to run the same tests on multiple mobile devices all at the same time. Instead of being limited to the number of USB ports on your computer, you can run a test on 20, 30, or 50 different devices, each with its own combination of sizes, versions, and operating systems, even under different simulated network conditions.
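A minimal sketch of that fan-out, using Python's standard thread pool; the device list and the per-device test function are hypothetical placeholders for your real device cloud and test suite:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical device matrix; in practice this would come from the
# cloud provider's device-listing API.
devices = [
    "Nexus-5X-Android-7.0",
    "iPhone-6S-iOS-10.1",
    "Galaxy-Note-5-Android-6.0",
    "iPad-Air-2-iOS-9.3",
]

def run_login_test(device):
    """Stand-in for running one test case on one remote device.
    A real implementation would drive the app over the provider's API."""
    return device, "PASS"

# Fan the same test out to all devices at once instead of one by one.
with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    results = dict(pool.map(run_login_test, devices))

for device, outcome in sorted(results.items()):
    print(f"{device}: {outcome}")
```

The same pattern scales from four devices to fifty: only the device list grows, while the test function and the fan-out stay unchanged.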

Performance Testing

Using a cloud-based solution, scalable simulation of virtual users is possible at significantly lower cost. With a cloud-based approach to performance testing, the hardware is all in the cloud, using existing infrastructure. Servers can be spun up to simulate thousands of virtual users in minutes, with charges based on a pay-for-what-you-use model. Businesses can simply schedule time for a test and resources are automatically provisioned. Licenses are now much more flexible for performance testing tools, and the rise of open source tools allows even greater savings for comparable quality when combined with a cloud-based load provider.

In addition, cloud-based services can provide a diagnosis of any performance-related issues when they arise, giving teams the detailed diagnostics they need to pinpoint the nature and location of the problem in order to remediate quickly.
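The virtual-user idea can be sketched as follows. The request function here is a local stand-in that only produces a simulated latency, not a call to any real service, and the user count and latency range are invented for illustration:

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(7)  # fixed seed so the illustration is repeatable

def virtual_user(user_id):
    """Stand-in for one virtual user's request; returns latency in ms.
    A real load test would issue an HTTP request here and time it."""
    return random.uniform(50.0, 450.0)

# Spin up 200 concurrent virtual users, as a cloud provider would.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(virtual_user, range(200)))

# Report the usual load-test summary statistics.
p95 = latencies[int(0.95 * len(latencies)) - 1]
print(f"requests: {len(latencies)}")
print(f"median latency: {latencies[len(latencies) // 2]:.0f} ms")
print(f"95th percentile: {p95:.0f} ms")
```

In a real cloud load test, the pool of local threads is replaced by provisioned servers, but the shape of the result, latency percentiles over many concurrent users, is the same.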

Prioritize your Device usage

Cloud testing increases efficiency by prioritizing devices. A centralized database gives the device manager the power to assign and reassign devices to testers. At any moment one project can become top priority: it can be a popular application where a bug was just found, or a minor application that a competitor released and now you have to have your version deployed to the market within hours. Without any fighting, arguing, or exchanging of physical devices, the manager can press a few buttons and projects with greater urgency are immediately provided with what they need.

Real-time Results

Real-time visibility of testing is possible for project teams. A cloud-based testing environment provides real-time testing results, which means defects can be analyzed while the tests are running. This allows all members of the project team, often including software suppliers, to collaborate in real time on a test, so that problems can be identified and rapidly resolved.

Reliable | Reduced IT management effort

There’s a dedicated team working on the cloud-based test management platform, and they’re contractually obliged to keep it up and running. You can expect 24-hour support, and you should seek a contract where you’re compensated for any downtime. Reliability should be much higher than with a locally maintained solution that’s serviced by a stretched internal IT department with a lot of other things to attend to.

Cloud-based tools cut down a lot of the IT management tasks inherent to traditional tools, such as installation, licensing, adding or replacing users, and simultaneous implementation of upgrades in systems across geographies. With less IT management to do, employees can focus on core activities that can make a difference to a company’s business.

Scalable – Up & Down

It’s a simple fact that projects demand different levels of testing at different points in their life cycle. Whether you automate or bring in more testers to shoulder the increasing burden, you want a management tool that’s easy to scale up and down. Cloud-based versions of the tools can be used for functional testing, performance testing, and many other testing types; in short, they can be used as a complete test management tool. With a cloud-based service you can forget about installation, configuration, and maintenance woes. You have a simple browser log-in that’s accessible on any device, and you can add or remove licenses whenever you need to.

Security is not an Issue

Instead of accessing a third-party mobile lab and downloading pre-released application code outside of your business, the right cloud testing tool works within your business’ virtual private network (VPN), so all the information about how your mobile app is made never leaves the privacy of your company. Be assured that security concerns are being addressed by the higher-ups in the technology industry, which makes these tools secure for usage.

Support Agile development

Agile development is the concept that cross-functional teams are involved throughout the development process, rather than a step-by-step approach. Cloud testing empowers every member with all of the tools at their fingertips, regardless of where they are or what they are working on at the moment. Cloud-based Mobile App testing reduces time to market and significantly augments testing competence. Cloud-based test management tools make you more agile and flexible, enabling you to adapt and change direction quickly, which is a competitive business advantage.

Perfect for the Mobile Economy

Implementing a cloud-based testing approach with agile testing methodologies for mobile applications is the need of the hour, to quickly and cost-effectively respond to rapidly changing markets with a satisfactory level of service quality and compatibility maintained across the wide variety of available mobile devices.

An increasing number of app developers are migrating from in-house to cloud-based development environments in order to build their apps more cost-efficiently, with the allure of lower maintenance and operational costs. While these are potential benefits for a variety of app developers, not all companies can rely on cloud-based environments, due to regulatory, security and privacy risks. In fact, all app developers and testers must carefully evaluate their needs before committing to either approach, to avoid compliance issues and unforeseen expenses.

Deepanshu Agarwal is a Software Testing professional, Team Lead and Consultant with a proven track record of 6.5+ years of strategic QA vision.

An accomplished and results-driven professional with a proven ability to direct and improve quality programs, thereby ensuring effective and efficient project delivery. He started blogging at 'Testing Mobile Apps' but later merged it into his other blog series at Software Testing Studio.

Deepanshu Agarwal


STRATEGIZE YOUR ‘MOBILE APP TESTING’ WITH ‘M-ANALYTICS’

ARTICLE

BY DEEPANSHU AGARWAL

Convergence of Social Media, Mobile, Analytics, & Cloud [SMAC] is one of the hottest trends these days. It is a major business agenda item, forcing organizations to rethink their strategies and increase technology investments in this direction.

What’s ‘Mobile Analytics’?

Mobile analytics involves measuring and analyzing

data generated by mobile platforms and

properties, such as mobile sites and mobile

applications. It lets you track, measure and

understand how your mobile users are interacting

with your mobile sites and mobile apps. You can think of it as a superset of the ‘cookies’ used to track Web traffic. With mobile analytics data, you can improve your cross-channel marketing initiatives, optimize the mobile experience for your customers, and grow mobile user engagement and retention.

Typical mobile analytics data and capabilities include:

● User statistics – number of users, their characteristics, and where they come from

● User behaviors – what actions your users are taking

● Usage – active users, sessions, session lengths, frequency, retention, etc.

● Technical – devices, carriers, firmware versions, errors

● Comprehensive data for your mobile apps on all major operating systems – installs, launches, average session length, and crashes

● Visualization of user navigation paths

● Measurement of in-app payments and revenue

● Geo-location analytics & targeting

● Mobile engagement analysis

● Mobile campaign analysis

● Customized reports specific to your business

● App store statistics
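These data categories are collected by instrumenting the app with an analytics SDK. A minimal sketch of how an event payload might be assembled, assuming a generic, hypothetical event schema rather than any particular vendor's API:

```python
import time
import uuid

def build_event(user_id, event_name, properties=None):
    """Assemble a generic analytics event payload (hypothetical schema)."""
    return {
        "event_id": str(uuid.uuid4()),   # unique per event, for de-duplication
        "user_id": user_id,              # stable per install, for retention analysis
        "event": event_name,             # e.g. "launch", "purchase", "crash"
        "timestamp": int(time.time()),   # server-side funnels need ordering
        "properties": properties or {},  # device, carrier, firmware version, etc.
    }

event = build_event("user-42", "launch", {"os": "Android 7.0", "carrier": "ACME"})
```

An SDK would batch such payloads and ship them to a collector; the reports discussed above are aggregations over these raw events.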


Real-world Usability Tests

With ‘Mobile Analytics’, your app users effectively

become testers. Individual profiles mean that you

can inspect both one-night-stand installs and loyal

users in order to determine what is successful and

what is not. Get a high-level summary of how users

interact with app content. Visualize the path users

traveled from one Screen or Event to the next. This

report can help you discover what app content

keeps users engaged with your app. See user

movement between Screens, Events, or a blended

view of both Screens and Events. With this added

information, Test teams can prioritize tests to focus

on more relevant real-world usability tests.

A/B Testing Results

For most apps, the largest single one-day decrease

in retention takes place on Day One; understanding

the points at which users churn out and addressing

them with A/B tests therefore has the most

potential in absolute terms for retaining users in the

crucial first moments after an app is launched.

The ability to analyze every aspect of your app

during an A/B test is the most dynamic and

informative way to ensure that you encourage

optimal user engagement. What if you already knew which ‘grey’ areas had experienced a statistically significant change based on the test? This insight provides you with strong, math-driven A/B results quickly and easily. This understanding of

behavioral analytics will also help you to identify

where bottlenecks in the app might exist, which will

have a profound effect on the kind of A/B testing

you will do to address them.
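One way to get those math-driven results is a two-proportion z-test on, say, day-one retention between variants. A minimal sketch (the counts are illustrative):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion (retention) rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B retains 460 of 2000 day-one users vs. 400 of 2000 for variant A.
z = two_proportion_z(400, 2000, 460, 2000)
significant = abs(z) > 1.96   # roughly the 95% two-sided threshold
```

If `significant` is true, the change between variants is unlikely to be noise, which is exactly the ‘grey area’ signal described above.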

Performance Tests

Mobile users have zero patience for poorly performing mobile apps. Fix it or they’ll delete you.

Test teams can leverage Mobile Analytics to

analyze mobile app performance, stability, resource

utilization, network latency and other factors to

ensure an acceptable user experience.

Use App Speed reports to see how long different

requests take to load in your app. Or Crashes &

Exceptions report to identify the name and a brief

description of the top Exceptions, or technical

errors. You can define additional exception types

(like Network failures and empty search results) in

your app tracking code for more detail on other

exceptions. Evaluate the performance of each

screen in your app. Analytics can even help you track scammers who simulate installs from data centers, using proxies and VPNs to tunnel into a target country.
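Defining additional exception types in your tracking code, as mentioned above, can be sketched as a thin wrapper that classifies failures before reporting them. The `report` callback below stands in for whatever analytics SDK call your team actually uses; it is an assumption, not a particular product's API:

```python
class NetworkFailure(Exception): ...
class EmptySearchResults(Exception): ...

def track_exceptions(report):
    """Decorator that reports classified exceptions to analytics, then re-raises."""
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                # The exception class name becomes the "exception type" in reports.
                report({"exception": type(exc).__name__, "fatal": False})
                raise
        return inner
    return wrap

events = []   # stand-in for the analytics backend

@track_exceptions(events.append)
def search(query):
    if not query:
        raise EmptySearchResults("no results for empty query")
    return ["result"]
```

Every non-crash failure then shows up in the Crashes & Exceptions style of report with a meaningful name instead of a generic error.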

Not just Testing | It’s Quality Assurance

In the technical landscape, ‘Quality Assurance’ has replaced ‘Testing’. Nowadays, QA teams are not just responsible for reporting defects; they are expected to provide deep app analysis and actionable insights, with strategic recommendations to optimize the app’s user experience – which features to spend resources on, and which flaws to fix or prioritize. User actions speak louder than words.

Discover what works, and what doesn’t. Mobile

Analytics allows data to be transformed first into

insight and then into product improvements. For example, in a location-based application, testing two

different ways to interact with a map will likely

increase use over time, thus impacting the entire

life-cycle of the application.

For a mobile QA team, the recommended

improvements to product features can be far

more substantial when they are abstracted into

best practices, made available to the entire

organization, and implemented proactively in

future development. Some improvements are so

basic and broad – such as the results of

fundamental A/B tests on pricing, tutorial flow, UI

placement, etc. — that they can be assumed to be


Mobile Analytics & Testing

Testing services too are witnessing growth in S-M-A-C, with each evolving independently. In my

earlier post “Benefits of Mobile App Testing in the

Cloud” we looked at Mobile & Cloud integration for

a better mobile app test strategy. The objective of this post is to highlight the use of ‘Analytics’ in ‘Mobile App Testing’!

The focal point of any test strategy revolves around the customer (the use case). Wouldn’t it be better if, as a testing team, you knew your customer segment? App-usage workflows? The priority app features that customers use? What specific action caused the app to crash? Which features users ignore, and why? Sounds good? Yeah!

‘Mobile Analytics’ can provide the answers to all

these questions and more. Analytics can give you

actionable insights into every aspect of your app

resulting in an ‘Effective’ Mobile App Test Strategy.

You can only Test so much

Traditionally, Mobile App testing takes place

before IT deploys an app, and in a controlled

environment with a subset of users. But in the

mobile era, it would be nearly impossible to test an

app on every possible device and operating system

combination a user could have. Consequently, the

data IT gathers during mobile app testing doesn’t

always reflect what will happen when an app goes

into production.

There is no such thing as version one-and-done of a

Mobile app. You’re always going to be iterating.

That’s where Mobile Analytics comes in. Real-time

data from Users who are using the app to perform

real business tasks offers a more complete picture

— and more actionable intelligence — than

traditional App testing alone.

Structured Testing

A single bad experience can result in end users

dumping the application. Using ‘Mobile Analytics’

in Mobile app testing, QA teams can adopt a

structured testing approach focused on end-users.

Structured tests help teams proactively discover and remediate a broad range of potential application adoption issues – exploring the app with a user mindset, seamlessly reporting back on defects, and providing feedback on the design, all while slashing test cycle time. Leverage mobile analytics with

actionable insights to improve your test

effectiveness. Optimize your test cases and device

testing matrix by continually measuring the user

experience.

Customize Tests based on User

Segments

Mobile Analytics helps you to see how different

user segments interact with your App.

Demographics, geographical location, platform,

device used and time zone are all common

examples of User segments that have different app

usage patterns. Armed with this information, you

can start tailoring your App Test Strategy for the

appropriate segments to ensure a positive

experience for all users.
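Armed with segment data, prioritizing a device-testing matrix becomes a simple computation: keep the smallest set of devices covering, say, 90% of sessions. A sketch (the usage shares and threshold are illustrative):

```python
def device_matrix(usage_share, coverage=0.90):
    """Pick the fewest devices whose cumulative session share reaches `coverage`."""
    chosen, total = [], 0.0
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        chosen.append(device)
        total += share
        if total >= coverage:
            break
    return chosen

# Hypothetical session shares pulled from the analytics platform.
usage = {"Galaxy S7": 0.38, "iPhone 6s": 0.31, "Pixel": 0.14,
         "Moto G4": 0.09, "Xperia Z5": 0.05, "Others": 0.03}
matrix = device_matrix(usage)   # devices to prioritize in the test lab
```

The same cut can be made per segment (geography, OS version, carrier) so that each segment's top devices are exercised first.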


universally true. Establishing awareness of them

throughout the organization can save untold time

and effort in the future, allowing products to get to

market faster and with higher quality.

Some Real Examples

1. Some XYZ App Users weren’t completing a

workflow, and they were all abandoning the app at

the exact same point — Recordings showed that

users pinched and zoomed on their touchscreens at

that point to more clearly see numbers they

needed to enter. But there was a button they

needed to press to confirm their entries on that

screen, and it disappeared when they zoomed in,

making it look like the app had frozen.

2. For dating apps — push notifications telling the user that there’s something new (someone they can date, or someone who wants to meet them) motivated users to open the app again.

3. Analytics may show that an app’s response time for a certain action is a fraction of a second longer than it should be. Meanwhile, they may also show that it takes twice as long as it should for a user to process an order in a customer relationship management app.

4. Basic mobile app analytics reported that a certain app crashed 10% of the time, typically about 15 seconds after it was opened. In-app analytics showed that the crashes occurred when users pressed the Add New Customer button on one of the app’s sub-menus, and the developer knew exactly which part of the code to fix.

5. Analytics can suggest why a user might have abandoned an app after only a few seconds. Perhaps the menu system is too confusing? The splash screen not impressive enough? The loading time too long?

Mobile Analytics give QA teams the information

they need to determine whether products and

campaigns are successful — and how to fix them if

they’re not. They identify inefficiencies in the mobile app, along with potential solutions. With in-app analytics, QA can learn what causes apps to crash and what features users like. With this insight, the QA team can streamline the app test strategy, suggest

improvements to back-end performance and

identify bugs and poor design components.

Recognize that the data is important, but

understand what it means. While raw data is

important, it’s the organization and utilization of

the data – the analytics – that really drives the push

for mobile app companies to continuously strive for

application optimization. A lean, comprehensive overview of every aspect of your app, delivered quickly and efficiently, helps you strategize your testing solution.

Deepanshu Agarwal is a Software Testing professional, Team Lead and Consultant with a proven track record of 6.5+ years of strategic QA vision.

An accomplished, results-driven professional with a proven ability to direct and improve quality programs, thereby ensuring effective and efficient project delivery. He started blogging at 'Testing Mobile Apps' but later merged it into his other blog series at Software Testing Studio.

Deepanshu Agarwal


Thus, these types of testing are divided into the following sub-phases:

● Understanding of Requirements

● Development of Test Plan and Test Design

● Preparation of Test Cases

● Execution of Test Cases

● Reporting of the Test results

As you can see, the first major phase is Requirements Definition. Data used in a Data Warehouse originates from multiple source systems, so defining good requirements can be extremely challenging. Successful requirements are structured closely around business rules while at the same time addressing functionality and performance. These business rules and requirements provide a solid foundation for the data architects; requirements are one of the keys to success.

The whole team should share the same tools from the project toolbox. It's important that everyone works from the same set of documents and requirements (i.e., the same versions of files). We have to create a rich set of test data while avoiding a combinatorial explosion of sets. The automation approach should be a combination of task automation and test automation. An alternative strategy is to use the source systems to create artificial test data; that way, realistic but artificial source data is created. The testers should be skilled as well.

To start with, we need a Test schedule. It is created in the process of developing the Test plan. In this schedule, we have to estimate the time required for testing the entire Data Warehouse system. There are different methodologies available for creating a Test schedule. None of them is perfect, because the data warehouse ecosystem is very complex and large, and constantly evolving in nature.

The most important takeaway from this article is that DW testing is data centric, while software testing is code centric. The connections between the DW components are groups of transformations that take place over data. These transformation processes should be tested as well, to ensure data quality preservation. The DW testing and validation techniques I introduce here are broken into four well-defined processes, namely:

1. Integration testing

2. System testing

3. Data validation

4. Acceptance testing

HOW TO TEST YOUR DATA WAREHOUSE

BLOGGER OF THE MONTH

BY EVGENI KOSTADINOV


Working with the concept of the Prototype model, the testing activities can be summarized in the form of the following checklist:

1. Multidimensional Schema – Workload Test, Hierarchy Test, Conformity Test, Nomenclature check, Performance Test, Early loading Test, Security Test, Maintainability Test

2. ETL Procedures – Code Test, Integrity Test, Integration Test, Administrability Test, Performance/Stress Test, Recovery Test, Security Test, Maintainability Test

3. Physical Schema – Performance/Stress Test, Recovery Test, Security Test

4. Front-end – Balancing Test, Usability Test, Performance/Stress Test, Security Test

In order to achieve our testing goals, we could

focus on the following two approaches:

1. Manual Sampling – The testing activities

covered under this approach are:

End-to-End Testing – It checks that data is

properly loaded into the systems from which the

data warehouse will extract data to generate

Reports.

Row count Testing – To avoid any loss of data, all rows

of data are counted after the ETL process to ensure

that all the data is properly loaded.

Field size Testing – It checks that the data warehouse field is bigger than the data field for the data being loaded. If this is not checked, it will lead to data truncation.

Sampling – The sample used for testing must be a good representation of the whole data set.

2. Reporting – The testing activities covered under

this approach are:

Report Testing – The reports are checked to see that

the data displayed in the reports are correct and can be


DW verification overall testing phases include:

Data completeness. Ensure that all expected data is loaded.

Data transformation. Ensure that all data is transformed correctly according to business rules and/or design specifications.

Data quality. Ensure that the ETL application correctly rejects, substitutes default values, corrects or ignores and reports invalid data.

Performance and scalability. Ensure that data loads and queries perform within expected time frames and that the technical architecture is scalable.

Integration testing. Ensure that the ETL process functions well with other upstream and downstream processes.

User-acceptance testing. Ensure the solution meets users’/clients’ current expectations and anticipates their future expectations.

Regression testing. Ensure existing functionality remains intact each time a new release is completed.

TERMINOLOGY: TESTING GOALS AND

VERIFICATION METHODS


used for decision-making purposes.
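The row-count and field-size checks described under Manual Sampling are straightforward to automate. A sketch, using in-memory SQLite databases standing in for the real source system and warehouse (the table and column names are invented for illustration):

```python
import sqlite3

def row_count(conn, table):
    """Total rows in a table, for source-vs-target comparison after ETL."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

# Source and target stand-ins for a real source system and DW.
src = sqlite3.connect(":memory:")
dw = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
dw.execute("CREATE TABLE fact_orders (id INTEGER, customer TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "alpha"), (2, "beta"), (3, "gamma")])
dw.executemany("INSERT INTO fact_orders VALUES (?, ?)",
               [(1, "alpha"), (2, "beta"), (3, "gamma")])

# Row-count test: no rows lost in the ETL step.
assert row_count(src, "orders") == row_count(dw, "fact_orders")

# Field-size test: no loaded value exceeds the DW column's declared width.
max_len = dw.execute("SELECT MAX(LENGTH(customer)) FROM fact_orders").fetchone()[0]
assert max_len <= 50   # declared width of the (hypothetical) DW field
```

Against a real warehouse the same two queries run over the ODBC/JDBC connections the team already has, one pair per mapped table.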

The next major part of our testing is choosing a Test fixture strategy. I prefer to use a Fresh fixture – ensuring that tests do not depend on anything they did not set up themselves – combined with Lazy (Initialization) setup and a Shared fixture (partitioning the fixture required by tests into two logical parts). Assuming that we are happy with the idea of creating the test fixture the first time any test needs it, we can use Lazy Setup in the setUp() method of the corresponding Testcase Class to create it as part of running the first test. Subsequent tests will then see that the fixture already exists and reuse it.
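The Lazy Setup just described can be sketched with Python's unittest (the fixture contents are invented for illustration): the first test to run builds the shared fixture in setUp(), and subsequent tests find it already present and reuse it.

```python
import io
import unittest

class OrdersReportTest(unittest.TestCase):
    _fixture = None  # shared across all tests in this Testcase Class

    def setUp(self):
        # Lazy Setup: only the first test to run actually builds the fixture.
        if OrdersReportTest._fixture is None:
            OrdersReportTest._fixture = {"orders": [(1, "alpha"), (2, "beta")]}
        self.fixture = OrdersReportTest._fixture

    def test_count(self):
        self.assertEqual(len(self.fixture["orders"]), 2)

    def test_reuse(self):
        # Identity check: the fixture was reused, not rebuilt, by later tests.
        self.assertIs(self.fixture, OrdersReportTest._fixture)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrdersReportTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

In a DW context the dictionary would be replaced by the expensive part of the fixture, e.g. loading reference data into a scratch schema.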

The first part is the stuff every test needs to have present but never modifies – that is, the Immutable Shared Fixture. The second part is the objects that any test needs to modify or delete; these objects should be built by each test as Fresh Fixtures. Most commonly, the Immutable Shared Fixture consists of reference data that is needed by the actual per-test fixtures. The per-test fixtures can then be built as Fresh Fixtures on top of the Immutable Shared Fixture. We also have the option called Minimal Fixture (i.e., using the smallest and simplest fixture possible for each test). For completeness, I should mention one last option I would use, although it is considered an anti-pattern: Chained Tests (i.e., letting the other tests in a test suite set up the test fixture).

It is important to consider the Pesticide Paradox: test automation often suffers from static test data. Without variation in the data or execution path, bugs are located with decreasing frequency. Automation should be designed so that a test-case writer can provide a common equivalence class or data type as a parameter, and the automation framework will draw data randomly from within that class and apply it on each test run. The automation framework should record the seed values, or the actual data that was used, to allow reruns and retesting with the same data for debugging purposes.
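The seed-recording idea can be sketched as follows; this is a minimal illustration, not a full framework, and the equivalence class and test function are placeholders:

```python
import random
import time

def run_with_random_data(test_fn, equivalence_class, seed=None):
    """Run `test_fn` on data drawn from `equivalence_class`; return the seed used."""
    seed = seed if seed is not None else int(time.time())
    rng = random.Random(seed)          # log this seed to rerun the exact case
    value = rng.choice(equivalence_class)
    test_fn(value)
    return seed

# Each run exercises a different value from the class; the returned seed
# lets a failing run be reproduced exactly for debugging.
positive_ints = list(range(1, 1000))
seed = run_with_random_data(lambda n: None if n > 0 else 1 / 0, positive_ints)
```

Rerunning with the recorded seed draws the same data, which is what makes the random variation debuggable rather than flaky.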

The framework can utilize a TestFixtureRegistry via a dedicated table; it will be able to expose various parts of a fixture needed by suites, either through discrete fixture-holding class variables or through Finder Methods. Finder Methods help us avoid hard-coded values in DB lookups when accessing the fixture objects. These methods are very similar to Creation Methods, but they return references to existing fixture objects rather than building brand-new ones. We should make those objects immutable. NOTE: usage of Intent-Revealing Names for the fixture objects should be enforced, to support the framework's lookup functionality and improve readability. To keep the current design, the following implementation can be used: check if such a registry already exists and remove it;

NOTE: take into consideration the Data Sensitivity and Context Sensitivity of the tests that’ll rely on this module.
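A TestFixtureRegistry exposing fixture objects through Finder Methods might look roughly like this; the Intent-Revealing names and reference data are invented for illustration, and rows are stored as immutable tuples:

```python
class TestFixtureRegistry:
    """Exposes existing fixture objects through Finder Methods (no creation)."""

    def __init__(self, rows):
        # Reference data held as immutable tuples: (id, country, currency).
        self._rows = tuple(rows)

    def find_customer_in_country(self, country):
        # Finder Method: returns a reference to an existing fixture object,
        # never builds a new one (contrast with a Creation Method).
        return next(r for r in self._rows if r[1] == country)

registry = TestFixtureRegistry([
    (1, "US", "USD"),
    (2, "DE", "EUR"),
])
us_customer = registry.find_customer_in_country("US")
```

Tests then ask the registry by intent ("a customer in the US") instead of hard-coding row IDs into DB lookups.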

Test Approach Phases

The suggested test phases are based on the development schedule per project, along with the need to comply with data requirements that must be in place when the new DWH goes live.

Phase 1: Business processes

Data Validation

Performance Test

Functional Test

Data Warehouse (internal testing within ETL

validating data stage jobs)


Data validation should start early in the test process and be completed before Phase 2 testing begins. Some data validation testing should occur in the remaining test phases, but to a much lesser extent. Important business processes where performance matters should be identified and tested (when available) in Phase 1. Performance testing should continue in the later test phases, as the application will be continuously enhanced throughout the project.

In addition to Phase 1 testing, there will also be unit and functional testing. As unit testing is completed for a program, the tester will perform functional tests on that program. While functional testing takes place on one program, the developer continues with developing and unit testing the next program. Towards the end of Phase 1, the data warehouse team will be testing the data stage jobs. Unit testing should be completed first, with functional testing finishing a couple of weeks afterwards. A final formal test will cap the end of Phase 1 testing.

Phase 2: Performance tests

Cross-functional process

Security Test

Data Warehouse (Repository testing and validation)


Evgeni Kostadinov

Evgeni Kostadinov prefers the challenges associated with testing of various technologies. He has extensive experience with UI, API, DW, Performance and Mobile. He has worked mainly in projects with Java, .Net and NodeJS environments for Telecom, Financial, Marketing and Banking Institutions. He actively participates in the development of the company's QA and CI processes and infrastructure. Evgeni currently works as a QA Manager, technical trainer, as well as a QA Challenge Accepted lecturer. For more blogs from Evgeni, click here.

In addition to the previous tests, Phase 2 should also cover remaining test items that may not have been tested in Phase 1. Phase 2 testing will be important because it is the final opportunity the functional area testers will have to make sure the DW load works as expected before moving to regression testing in Phase 3. Some performance tests and data validation should be included in this phase.

Phase 3: Regression tests

Phase 3 testing comprises regression test periods to test updates that are required as part of the Company gaming platform. The functional area testers should have sufficient time to test in each regression test period.

Phase 4: Acceptance tests

Phase 4 testing is limited. In addition to the functional area testers, end users will probably be involved in this final test before the new system goes live. No new tests should be introduced at this stage; the customer acceptance tests should already have been exercised in prior test phases.


THE CHANGING DYNAMIC OF MEASUREMENT

AND ITS COMPETING FORCES

LEADER’S PULSE

BY MICHAEL HACKETT

I have been leading and managing

software projects for a long time.

Measuring the product and process has

always caused great consternation. At its worst

we measured everything that we could possibly

measure. We measured so many things that, in the

end, I always doubted whether it led to higher quality software or process improvement, when in actuality it was there to make managers feel “safer”. Of course, someone always had to keep track of whether bugs got closed and of the priority of deferred bugs, but in most cases the tracking and reporting companies did on those bugs cost far more in time than the benefit we got from them. I remember when Friday afternoons were spent compiling all kinds of numbers, measurements, and metrics: all the test case and bug measures, test cases executed, tests yet to execute, automation runs, passes and fails, new-bug open rates, bug close rates, find/fix rates. Metrics reports turned into dashboards distributed to the team, ignored by most. Few metrics on the

dashboard were even discussed. Did it really lead to

higher quality?

No other team on the product development team got measured or generated metrics on anything at all, but testers got scrutinized.

What did the business want to know? How did they

decide when to release? Why wasn’t that the only

measure? How did you measure product readiness?

First, let’s distinguish measure from metric. There

are many definitions for these. I want to use a

description to make our discussion easier:

measurements are data. Metrics are derived from

data.

In quality and product development we have been

aware of many mottos about measurement:

If you can’t measure it, don’t do it!

What You Measure Improves

Management guru Peter Drucker is often credited with saying: "you can't manage what you can't measure."


product would be fixed or changed dynamically, on the spot, or become acceptance criteria on the user story. After that user story was done and released into production, a bug or issue became a support ticket or a new user story. So the idea that people would actually be capturing and measuring bugs and producing reports on them went out the window, which is the case in many organizations. But the most important thing to remember, here and now for this conversation, is that Scrum, by the book, has only one measurement. Dashboards that capture all kinds of data are typically for large organizations, for better or for worse.

The very foundation of development has changed.

The most important set of principles in Agile and product development today is the Lean Practices.

For example: cut waste and empower the team. Cut

everything that is not absolutely necessary and let

the team decide what is and is not necessary. If the

team decides all those dashboards are a waste of

time and hold back productivity— cut them.

Discussion over.

Most teams and corporations have moved away from dashboards. Still, the bigger the company, the bigger, less Lean, and more frequently useless the dashboards tend to be.

Developing software has changed again – Along comes DevOps

While many teams today are still wrestling with

Agile across the organization, DevOps is an

extension of Agile that does change how we get

software product out the door.

DevOps demands immediate feedback to Dev

teams. More than pure Agile/Lean, less than old

style dashboards, some type of feedback to the

team has to be generated. Whether that is simply a conclusion of “the build passed” or “the build failed”, some measurements need to be taken for

Why do we measure at all? Just for more busy work?

To assess product readiness or very differently team

efficiency? For more predictable projects?

The reasons for all these measures had better not

be to measure for trust of the team. If it is, there is

no measurement that will cure mistrust. We all

have to stop with useless, unused, not actionable

fluff.

On the other hand, I recently had a client who paid

for an automation project that was successful and

run often. But the manager wanted to stop the

project because he had zero idea what had been

done. The test team and outside consultants had

done a big and successful test automation program

but the managers had no idea and no visibility. This

was a problem.

I want to re-examine measuring in terms of the

complete change in how we develop product and

who is responsible for quality.

Developing software has changed – Agile

Then came Agile to show how tired development

teams are of being measured. One of the Principles

of Agile is: “Working software is the primary

measure of progress.” So… stop with all the

dashboards.

By the book, Scrum has only one measurement:

burndown. Then you flip it for the velocity metric.

By the book, Scrum has no bug tracking system and

the issues found during the development of a


these assessments. There needs to be easy, quick

but meaningful measures of coverage. A part of

immediate feedback is no dashboards. Dashboards

are not immediate. But something has to be

measured and communicated to the team to show that testing was completed and the build can progress to the next environment in the pipeline.

The need for immediate feedback is paramount:

immediate feedback on builds and on the health of

the system. A few important things to remember

here are that immediate feedback needs to be built

around actionable data and the development or

business teams need to understand the readiness

for production, status and health of the system. It is

not information that the test team feels like giving,

such as numbers of automated tests executed or

percentage of test cases automated, bug find/fix

rates, or numbers of manual tests hours. Those

kinds of measurements are very old school and I

have not met a development team in a decade that

cares about that information. So what kind of

immediate feedback does the team need to get?

The answer is the numbers of blocking issues and

whether the automated suites passed or failed –

based on predetermined pass/fail criteria.
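That feedback can be reduced to a single actionable verdict in the pipeline. A sketch (the function name, counts, and threshold are illustrative): the build progresses only when there are no blocking issues and the automated suite met its predetermined pass criteria.

```python
def build_can_progress(blocking_issues, passed, executed, pass_threshold=1.0):
    """Immediate feedback for the pipeline: one actionable go/no-go verdict."""
    if blocking_issues > 0:
        return False                       # any blocker stops promotion
    if executed == 0:
        return False                       # nothing ran, so no evidence of health
    return passed / executed >= pass_threshold

# 0 blockers, 412 of 412 automated checks passed: promote to next environment.
ok = build_can_progress(blocking_issues=0, passed=412, executed=412)
```

The point is that the team consumes one verdict, not a dashboard; the raw counts stay in the tooling for whoever needs to dig in.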

A new aspect of measurements

In the old days, test teams developed numbers to

communicate to the team. Often project managers

would ask the team for certain measures. In Agile,

that stopped. The scrum master calculated

burndown then velocity. Now in DevOps, there

may be some measures Dev wants to progress in

the pipeline; but more importantly, DevOps is a business-driven practice. The business side will ask

for any measurements that will prove or be

actionable for Continuous Delivery. The taking of

the measurements and metrics and the reporting

process cannot be a time drain or they need to be

rethought. Productivity is most important. Keep

it Lean.

It’s also important to look at when information is captured and reported: pre-production information, which is what we have been focusing on, versus post-production information. In the DevOps world, capturing post-production information is called continuous monitoring.

Post production: Continuous

monitoring

It is very common in DevOps to do a small amount

of testing in production. Those tests are designed to run solely to assess the operation of the system, and sometimes to find bugs in the functionality of the live system. This is always a small set of tests.

The test team may run an automated suite to

monitor certain functionality or workflows on the


Michael Hackett

Michael is a co-founder of LogiGear Corporation, and has over two decades of experience in software engineering in banking, securities, healthcare and consumer electronics. Michael is a Certified Scrum Master and has co-authored two books on software testing. Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), available in English, Chinese and Japanese, and Global Software Test Automation (HappyAbout Publishing, 2006).

He is a founding member of the Board of Advisors at the University of California Berkeley Extension and has taught for the Certificate in Software Quality Engineering and Management at the University of California Santa Cruz Extension. As a member of IEEE, his training courses have brought Silicon Valley testing expertise to over 16 countries. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University.

production system. This ensures that when a fail

happens on the production system the

development team knows about it before without

waiting for lost transactions or for a user to call

support.
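A production check of this kind can be sketched briefly. The URL, the HTTP 200 success criterion, and the alert callback below are hypothetical placeholders; a real monitor would run on a schedule and alert through whatever channel the team already uses:

```python
# Illustrative sketch of a small production monitor: check a live
# workflow endpoint and alert the development team on failure.
# The URL, timeout, and alert mechanism are hypothetical placeholders.
import urllib.request

def check_workflow(url, timeout=5, fetch=None):
    """Return True if the workflow endpoint responds with HTTP 200."""
    fetch = fetch or (lambda u: urllib.request.urlopen(u, timeout=timeout).status)
    try:
        return fetch(url) == 200
    except Exception:
        # Any network error or non-response counts as a failed check.
        return False

def monitor(urls, alert, fetch=None):
    """Check each workflow and alert on failures, so the team hears
    about a production problem before a user calls support."""
    for url in urls:
        if not check_workflow(url, fetch=fetch):
            alert(f"Production check failed: {url}")

# In production this would run on a schedule (a cron job, a CI job,
# or a monitoring service) rather than once.
```

The specific checks and alert channel matter less than the principle: a failure in production surfaces to the development team immediately, not after lost transactions.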

Most continuous monitoring is done using separate, non-intrusive tools that capture system-level and business measurements; it would typically not be the responsibility of the test team.

Thinking about Automation has changed

Teams are finally understanding test automation better today. Re-running a suite of automated tests says less about the current quality of the system than it does about its consistency. If your automation misses a bug the first time, it will miss that bug consistently, every time you run it. Automated test suites, when they pass, tell teams that the tests that ran in the past give the same result now. If there are new bugs, the automated tests will not find them. If there are issues, interactions, or integrations not covered in the test suite, the automated suite will not find them either. What automated test suites hope to demonstrate is that the system runs consistently and predictably compared with the last run.

Summary

There are competing pressures for measuring testing today. The biggest pressure is Lean: keep measures to a minimum, and don’t let the measuring or reporting impact productivity. Be Lean and effective.

Changes in software development practices have had a big impact on measuring. Measuring should be mainly about product readiness to progress further in the pipeline. The measures a team uses for this are often driven by what information the business wants, not what test teams feel like giving them.

LEADER’S PULSE


GLOSSARY

Analytics: Analytics is the discovery, interpretation, and communication of meaningful patterns in data. Especially valuable in areas rich with recorded information, analytics relies on the simultaneous application of statistics, computer programming and operations research to quantify performance.

Source: https://en.wikipedia.org/wiki/Analytics

A/B Testing: A/B testing, sometimes called split testing, is an assessment tool for identifying which version of something helps an individual or organization meet a business goal more effectively. A/B testing is commonly used in web development to ensure that changes to a webpage or page component are driven by data and not personal opinion.

Source: http://searchbusinessanalytics.techtarget.com/definition/A-B-testing

Data warehouse: In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, and is considered a core component of the business intelligence environment. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data and are used for creating analytical reports for knowledge workers throughout the enterprise.

Source: https://en.wikipedia.org/wiki/Data_warehouse

Business intelligence: Business intelligence, or BI, is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data. BI as a discipline is made up of several related activities, including data mining, online analytical processing, querying and reporting.

Source: http://www.cio.com/article/2439504/business-intelligence/business-intelligence-definition-and-solutions.html

Big data: Big data is an evolving term that describes any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information.

Source: http://searchcloudcomputing.techtarget.com/definition/big-data-Big-Data

Infrastructure as code (environment as a service): Environment-as-a-Service provides a comprehensive approach to delivering turnkey application environments, giving organizations the flexibility to quickly deploy, maintain, and decommission environments, while lowering the total cost of software delivery and reducing the time required for delivery of new application releases.

Source: https://blogs.oracle.com/exalogic/entry/new_environment_as_a_service

Virtualization: Virtualization is the simulation of the software and/or hardware upon which other software runs. This simulated environment is called a virtual machine. There are many forms of virtualization, distinguished primarily by computing architecture layer. Virtualized components may include hardware platforms, operating systems (OS), storage devices, network devices or other resources.

Source: http://slideplayer.com/slide/8643948/

Service virtualization: In software engineering, service virtualization is a method to emulate the behavior of specific components in heterogeneous component-based applications such as API-driven applications, cloud-based applications and service-oriented architectures. It is used to provide software development and QA/testing teams access to dependent system components that are needed to exercise an application under test (AUT), but are unavailable or difficult-to-access for development and testing purposes.

Source: https://en.wikipedia.org/wiki/Service_virtualization

Cloud: The use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user’s data, software and computation.

Source: http://ibssbd.com/index.php/home/cloud_computing

Hadoop: Hadoop is an open source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment.

Source: http://searchcloudcomputing.techtarget.com/definition/Hadoop


EVENTS

CALENDAR OF EVENTS

Webinar: Not your Mother’s Game Testing

Michael Hackett and Stephen Copp

Anki’s Stephen Copp and LogiGear’s Michael Hackett discuss the new reality of game testing, the skills involved, and where game testing is going, in an online webinar that took place earlier this month.

The Software Testing Conference

March 14th - 17th, 2017

Phoenix, AZ

The Software Testing Conference is set to take place at the Renaissance Phoenix Downtown. The Spring Software Testing Conference program will contain multiple workshops and sessions addressing the latest trends and challenges facing Software Test and Quality Engineering professionals.

Agile Dev West Conference

June 4th - 9th, 2017

Las Vegas, NV

A conference designed to help you discover the latest in agile methods, technologies, tools, and leadership principles. The conference will be held in conjunction with Better Software and DevOps West, which will present a host of in-depth workshops.

Testing in Continuous Delivery

March 6th - 10th, 2017

London, UK

A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams. Going into its 11th year, the conference will take place on the 6th-8th, and workshops are set to follow on the 9th-10th of March.



LOGIGEAR MAGAZINE

DECEMBER 2016 | VOL X | ISSUE 4

Vietnam, Ho Chi Minh City

1A Phan Xich Long, Ward 2

Phu Nhuan District

Tel +84 8 3995 4072

Fax +84 8 3995 4076

Vietnam, Da Nang City

VNPT Tower, Fl 7 & 8

346 Street 2/9

Hai Chau District

Tel: +84 511 3655 333

United States

4100 E 3rd Ave., Suite 150

Foster City, CA 94404

Tel +1 650 572 1400

Fax +1 650 572 2822