Using Jenkins & XL TestView to Ensure Quality in Continuous Delivery
Andrew Phillips, July 29, 2015
Copyright 2014. Confidential – Distribution prohibited without permission
Agenda
▪ The Two Faces of CD
▪ Testing Challenges
▪ A Central Hub for Application Quality For Your Pipeline
▪ Demo
▪ Beyond Test Automation
About Me
▪ VP Products for XebiaLabs
▪ Lots of enterprise software development on high-performance systems
▪ Been on both sides of the “Dev…Ops” fence
▪ Active open source contributor and committer: Apache jclouds and others
▪ Microservices, reactive & Scala fan
▪ Regular meetup, conference etc. presenter
About XebiaLabs
We build tools to solve problems around DevOps and Continuous Delivery at scale
The Two Faces of CD
▪ A lot of focus right now is on pipeline execution
▪ …but there’s no point delivering at light speed if everything starts breaking
▪ Testing (= quality/risk) needs to be a first-class citizen of your CD initiative!
The Two Faces of CD
▪ CD = Execution + Analysis
▪ = Speed + Quality
▪ = Pipeline orchestration + ..?
Testing Challenges
▪ Thousands of tests make test sets hard to manage:
  − “Where is my subset?”
  − “Which tests add the most value, and which are superfluous?”
  − “When should we run which tests?”
▪ Running all tests all the time takes too long, so feedback arrives too late
▪ Quality control of the tests themselves and maintenance of testware
▪ Tooling overstretch
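One way to tame a suite of thousands of tests is to label every test and select subsets by tag. A minimal Python sketch of the idea — the test names and tags are invented for illustration, not an XL TestView API:

```python
# Hypothetical tag-based subset selection; names and tags are invented.
def select_tests(tests, required_tags):
    """Return the names of tests carrying every tag in required_tags."""
    return [name for name, tags in tests.items() if required_tags <= tags]

suite = {
    "test_login":    {"smoke", "auth"},
    "test_checkout": {"smoke", "payment"},
    "test_report":   {"nightly", "reporting"},
}

smoke_subset = select_tests(suite, {"smoke"})  # the fast pre-merge subset
```

With consistent tagging, “where is my subset?” becomes a one-line query instead of tribal knowledge.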
Testing Challenges
▪ Many test tools exist for each test level, but there is no single place to answer “Good enough to go live?”
▪ Requirements coverage is not available
  − “Did we test enough?”
▪ Minimize the mean time to repair
  − Support for failure analysis
JUnit, FitNesse, JMeter, YSlow, Vanity Check, WireShark, SOAP-UI, Jasmine, Karma, Speedtrace, Selenium, WebScarab, TTA, DynaTrace, HP Diagnostics, ALM stack, AppDynamics, Code Tester for Oracle, Arachnid, Fortify, Sonar, …
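Since most of the tools above can emit JUnit-style XML, one pragmatic path to a single view is to normalize everything into one result model. A Python sketch under that assumption; the sample report is invented:

```python
import xml.etree.ElementTree as ET

# Invented sample report in the common JUnit XML shape.
JUNIT_XML = """<testsuite name="acceptance" tests="2" failures="1">
  <testcase classname="checkout" name="pays_with_card"/>
  <testcase classname="checkout" name="applies_discount">
    <failure message="expected 90, got 100"/>
  </testcase>
</testsuite>"""

def parse_junit(xml_text):
    """Flatten a JUnit XML report into (class, test, status) tuples."""
    results = []
    for case in ET.fromstring(xml_text).iter("testcase"):
        status = "fail" if case.find("failure") is not None else "pass"
        results.append((case.get("classname"), case.get("name"), status))
    return results

results = parse_junit(JUNIT_XML)
```

Once every tool's output lands in one model, cross-tool questions like “good enough to go live?” can at least be asked in one place.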
Testing Challenges
▪ Real go/no-go decisions are non-trivial:
  − No failing tests
  − No more than 5% of tests failing
  − No regressions (tests that currently fail but passed previously)
  − A list of tests that should never fail
▪ Need historical context
▪ One integrated view
▪ Data to guide improvement
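A “no regression” gate like the one above fits in a few lines once historical context is available. A Python sketch with invented test data — an illustration of the rule, not XL TestView's actual decision engine:

```python
# "No regression" gate: go only if every test that passed in the
# previous run still passes now. Data is invented for illustration.
def no_regression(previous, current):
    """previous/current map test name -> True (pass) / False (fail)."""
    return all(current.get(name, False)
               for name, passed in previous.items() if passed)

prev  = {"t1": True, "t2": False, "t3": True}
go    = no_regression(prev, {"t1": True, "t2": False, "t3": True})   # known failure persists
no_go = no_regression(prev, {"t1": True, "t2": True,  "t3": False})  # t3 regressed
```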
Testing Challenges
[Pipeline diagram: Build → Unit → Acc. Tests → Perf. Tests → Deploy → ?]
Testing Challenges
Executing tests from Jenkins is great, but…
▪ Different testing jobs use different plugins or scripts, each with different visualization styles
▪ No consolidated historic view available across jobs
▪ Pass/Unstable/Fail is too coarse
  − How do you express “Passed, but with known failures”?
▪ Ultimate analysis question (“are we good to go live?”) is difficult to answer
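A finer-grained verdict than Pass/Unstable/Fail can be computed against a maintained known-failure list. A Python sketch; the list and status names are hypothetical, not a Jenkins or XL TestView API:

```python
# Hypothetical known-failure list; a real one would live in the
# test results hub, not hard-coded in the pipeline.
KNOWN_FAILURES = {"test_legacy_export"}

def verdict(failed_tests):
    """Collapse a list of failed test names into a release verdict."""
    unexpected = set(failed_tests) - KNOWN_FAILURES
    if unexpected:
        return "FAIL"
    if failed_tests:
        return "PASS_WITH_KNOWN_FAILURES"
    return "PASS"
```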
A Central Hub for Application Quality
What is needed:
1. A single, integrated overview of all the test (= quality, risk) information related to your current release
2. …irrespective of where or by whom the information was produced
3. The ability to analyze and “slice and dice” the test results for different audiences and use cases
4. The ability to access historical context and other test attributes to make real-world “go/no-go” decisions
How We Try To Tackle This
▪ XL TestView: test results management and analysis tool
▪ Collect results from – and control – all major test tools
▪ Visualize and analyze test results, trends, and correlation in real-time
▪ Optimize testing for value and speed
▪ Automate go/no-go decisions
XL TestView in the Overall Process
Integrating XL TestView with Jenkins
Analyzing Test Results
Historical View of an Individual Test’s Results
Identify Flaky Tests
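One simple flakiness heuristic is counting pass/fail flips across a test's recent history. A Python sketch with an arbitrary threshold — an illustration of the idea, not XL TestView's actual algorithm:

```python
# Heuristic: a test is flaky if its result flips between pass and
# fail often. History values are True (pass) / False (fail),
# oldest first; the threshold is arbitrary.
def is_flaky(history, min_flips=2):
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    return flips >= min_flips

flaky  = is_flaky([True, False, True, True, False])    # 3 flips
stable = is_flaky([True, True, False, False, False])   # 1 clean break
```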
Tagging Tests
Evaluating Go/No-go Criteria
Beyond Test Automation
Can we go further? Can we use the aggregated test results, historical context, and other attributes to invoke tests more intelligently?
It’s a bit of an open question:
▪ Google: running all the tests all the time is too expensive and time-consuming, so automatically select a subset of tests to run
▪ Dave Farley: if you can’t run all the tests all the time, you need to optimize your tests or you have the wrong tests in the first place
Beyond Test Automation
Middle ground:
▪ Label your tests along all relevant dimensions to ensure that you can easily select a relevant subset of your tests if needed
▪ Consider automatically annotating tests with the features they relate to (e.g. tests added or modified in the same commit), or introduce that as a team practice
▪ Use data from your test aggregation tool to ignore flaky or “known failure” tests (and then fix those flaky tests, of course ;-))
Next steps – start now!
▪ Try it: Download XL TestView now: https://xebialabs.com/download/xl-testview/
▪ Visit the XL TestView product page for more info: https://xebialabs.com/products/xl-testview/
Thank you!