SDL Proprietary and Confidential
How to improve your unit tests?
Peter Modos
12/12/2014
Agenda
○ Intro
○ Why do we need unit tests
○ Good unit tests
○ Common mistakes
○ How to measure unit test quality
○ Ideas to improve tests
Why talk about unit tests?
○ because there are plenty of bad unit tests out there
  • and some were written by us
Why do we need unit tests?
We do NOT write unit tests to
○ meet management's expectation of X% test coverage
○ slow down product development
○ break the build often
We do write unit tests to
○ verify the proper behavior of our software
○ be confident when we change something
○ have live documentation of the expected functionality
○ guide us while writing new code (TDD)
Good unit tests
○ properly test the whole functionality of the target
○ focus only on the target class
○ use only the public interface of the target class
  • do not depend on implementation details of the target
○ SUT: System Under Test
  • public interface
  • implementation
○ public collaborators
  • constructor or other method parameters
○ private collaborators
  • implementation detail
  • hidden dependency (e.g. a static method call)
Never depend on the grey parts: the SUT's implementation and its private collaborators!
Good unit tests - maintainable
○ written for the future
○ easy to read
  • a good summary of the expected functionality (documentation of the target)
  • good abstraction (separating what is tested from how it is tested)
○ easy to modify
Good unit tests - test failure
○ failures are always reproducible
○ easy to find the reason for a failure
  • informative error messages
○ fail when we change the behavior of the code
○ fail locally: in the test class where the functionality is broken
Good unit tests - test execution
○ 100% deterministic
○ fast test run
  • quick feedback after a code change
  • thousands of tests in a few seconds
○ arbitrary order of execution
○ no side effects and no dependencies on other tests
○ environment independent
○ do not use external dependencies
  • files, DB, network, other services
Common mistakes
○ no tests
○ slow tests, which end up not being executed
○ non-deterministic tests, which end up not being executed
  • and a red build is tolerated
Mistakes - part of the behavior is not tested
○ only the happy execution path is tested
  • missing error cases
  • exceptions from dependencies are not tested
○ edge cases are not tested
  • empty collection, negative number, etc.
○ the returned value of the target method is not verified
○ interactions with collaborators are not properly tested
  • e.g. returned values, correctness of parameters
○ high coverage, but no real verification
Mistakes - dependency on internals of the target
○ accessing internal state
  • increasing the visibility of private fields
  • creating extra getters
  • using reflection
○ increasing the visibility of private methods
  • because the test is too complex
Mistakes - hard to maintain
○ difficult to understand the tests
  • what exactly is tested
  • how the test is configured and executed
○ a lot of duplication in test methods
  • too many details of irrelevant parts in the main test method
○ difficult to follow changes of the target
  • interface changes
  • implementation changes
Mistakes - lack of isolation
○ between test cases
  • dependencies between tests (shared state, static dependencies)
  • one test failure makes the others fail
○ hard to handle the dependencies of the target class
  • a sign of design issues in the target
  • implicit dependencies
  • dependency chains
  • need to configure static dependencies
How to measure unit test quality?
Measuring the quality of unit tests
○ code coverage measurement tools
○ mutation testing of unit tests
  • Pitest (https://github.com/hcoles/pitest); a configuration sketch follows below
○ code review
○ the next developer working on them later
  • or you
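A minimal sketch of running Pitest from Maven. The plugin version is illustrative, and by default Pitest guesses the classes to mutate from the project's group id:

    <!-- pom.xml: register the Pitest plugin (version is illustrative) -->
    <plugin>
        <groupId>org.pitest</groupId>
        <artifactId>pitest-maven</artifactId>
        <version>1.1.4</version>
    </plugin>

Then run mvn org.pitest:pitest-maven:mutationCoverage and read the HTML report under target/pit-reports: surviving mutants point at behavior your tests do not really verify.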
Ideas to improve tests
Listen to the tests!
○ why is it difficult to test?
  • change the tested code accordingly
○ usually a sign of code smells
  • tight coupling
  • too much responsibility
  • something is too big or too complex (a class, a method)
  • too much state change is possible (instead of immutability)
  • etc.
Use test doubles
○ to avoid depending on collaborators' real implementations
  • there are different opinions about how much to use them
  • mockist vs classical testing
  • a good summary: http://martinfowler.com/articles/mocksArentStubs.html
○ using a mocking library
  • speeds up test writing
  • enables behavior-driven testing (vs state-based testing)
○ but you can write your own stubs as well (a minimal sketch below)
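A hand-written stub can be this small; the UserStore interface and its canned answer are hypothetical:

    // Hypothetical collaborator interface of the target class
    interface UserStore {
        String findNameById(int id);
    }

    // Hand-written stub: returns a canned answer, no mocking library needed
    class StubUserStore implements UserStore {
        private final String cannedName;

        StubUserStore(String cannedName) {
            this.cannedName = cannedName;
        }

        @Override
        public String findNameById(int id) {
            return cannedName;
        }
    }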
Mockist mindset
○ using mocks for all collaborators (dependencies)
○ clearly reduces the scope of the test
  • a change in a collaborator's implementation will not influence the test
○ easier setup, full control over dependencies
  • dependencies are passed in with DI (preferably as constructor parameters)
○ drawback
  • the integration of real objects is not tested at the unit level
  • a change in implementation might require a change in the tests
    – but only when collaborators change
○ recommended library
  • Mockito: https://github.com/mockito/mockito (a minimal sketch follows below)
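A minimal Mockito sketch of the mockist style, reusing the hypothetical UserStore from above; the Greeter class is also hypothetical:

    import static org.hamcrest.MatcherAssert.assertThat;
    import static org.hamcrest.Matchers.equalTo;
    import static org.mockito.Mockito.*;

    import org.junit.Test;

    public class GreeterTest {
        @Test
        public void greetsUserByName_when_userIsFoundInTheStore() {
            // the collaborator is a mock, so the test does not depend on its real implementation
            UserStore storeMock = mock(UserStore.class);
            when(storeMock.findNameById(42)).thenReturn("Alice");   // set up the test condition

            Greeter greeter = new Greeter(storeMock);               // DI via constructor parameter

            assertThat(greeter.greet(42), equalTo("Hello Alice"));  // verify the returned value
            verify(storeMock).findNameById(42);                     // verify the interaction
        }
    }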
Write tests first - TDD
○ a lot of benefits
  • you think about the functionality of the target first
  • it drives the design
    – what dependencies are needed
    – the API of those dependencies
○ it is very difficult to write tests afterwards
  • strong influence of the existing code
  • you will try to cover the existing execution branches
  • quite often the design is not (easily) testable
Organizing tests
○ usually one test class per target class
○ one test method tests exactly one interaction/scenario towards the target
○ test methods might be grouped by functionality or tested method
  • by using inner classes
  • by a shared method-name prefix
Naming test methods
○ let the name tell what is tested
  • a clear summary of what is tested in that scenario
○ do not worry if the name is long
  • better than an explanation in comments
○ using underscores might increase readability
○ the concrete format is less important; just be consistent
○ no 'test' prefix is needed
  • an outdated JUnit convention
Naming test methods
○ <what happens>_when_<conditions of test scenario>
○ <what happens>_for_<method name>_when_<conditions>
○ <method name>_with_<conditions>_does_<what happens>
Naming test methods - example
@RunWith(Enclosed.class)
public class InMemoryAppTestRunRepositoryTest {

    public static class GetAppTestRunMethod {
        @Test
        public void returnsAnEmptyOptional_when_testRunWasNotStoredWithTheRequestedId() {}

        @Test
        public void returnsProperTestRun_when_testRunWasStoredWithTheRequestedId() {}

        @Test
        public void returnsLastStoredTestRun_when_testRunWasStoredMultipleTimesWithTheRequestedId() {}
    }

    public static class GetAppTestRunsMethod {
        @Test
        public void returnsNoTestRuns_when_noTestRunsWereStored() {}

        @Test
        public void returnsLatestVersionOfAllStoredTestRuns_in_undefinedOrder_when_multipleTestRunsWereStored() {}
    }
}
Structure of a test method
@Test
public void name() {
    <setup test conditions>
    <call tested method(s) with meaningful parameters>
    <verify returned value (if any)>
    <verify expected interactions with collaborators (if needed)>
}
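A concrete instance of this structure, using a hypothetical ShoppingCart class:

    @Test
    public void returnsItemsInInsertionOrder_when_multipleItemsAreAdded() {
        // setup test conditions
        ShoppingCart cart = new ShoppingCart();

        // call the tested method(s) with meaningful parameters
        cart.add("apple");
        cart.add("pear");

        // verify the returned value
        assertThat(cart.items(), contains("apple", "pear"));
    }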
Content of test methods
○ keep test methods short and abstract
  • it is important to see what is tested
  • how it is tested can be hidden in helper methods
○ extract all unimportant details to helper methods
  • prefer more specific helper methods with talkative names and fewer parameters
  • these can call more generic methods to avoid duplication
○ but all important activities and parameters shall be explicit in the test method
  • the call to the tested method(s)
  • the scenario-specific part of the test setup
  • the verification of the expected behavior
Verification in test methods
○ types of verification
  • verify the returned value of the tested method
  • verify the expected state change of the target object
  • verify an interaction with a collaborator
○ you don't have to repeat all verifications in all tests
  • different tests can concentrate on different parts
  • although if a verification is compact, it does not hurt to repeat it
Interactions with collaborators
○ it is important to distinguish between
  • setting up test conditions
    – we want a collaborator to return a given value / throw an exception
  • verifying expected interactions
    – verify calls only when there are no returned values
    – the order of interactions is usually important
○ parameters of calls
  • always expect concrete parameter values when they are relevant (most of the time)
    – expect the same object if you have the reference in the test
    – use matchers if you don't have the reference but know important properties of the parameter (see the sketch below)
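A sketch of the three cases with Mockito; the mailerMock collaborator, its send method, and the Message bean are hypothetical:

    // Exact value: the reference is available in the test
    verify(mailerMock).send(expectedMessage);

    // Known properties only: a Hamcrest matcher via Mockito's argThat
    verify(mailerMock).send(argThat(hasProperty("recipient", equalTo("user@example.com"))));

    // Irrelevant parameter: allow any value
    verify(mailerMock).send(any(Message.class));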
Defining collaborators
○ we need an explicit reference to all collaborators
  • to be able to set up test conditions
  • to be able to verify interactions
  • if collaborators are created by the target object, we have no control over them
○ use dependency injection to pass them to the target object
  • as a constructor parameter (preferred)
  • as a setter parameter
    – dangerous, because collaborators might change or not be initialized
○ if collaborators must be created by the target
  • pass in a factory object instead of instantiating the collaborator directly
  • we can mock the factory and gain control over the returned collaborator (see the sketch below)
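A sketch of the factory approach; the Connector, Session, and SessionFactory types are hypothetical:

    // The target must create a Session per call, so it receives a factory via DI
    interface Session { void open(); }

    interface SessionFactory {
        Session create(String host);
    }

    class Connector {
        private final SessionFactory sessionFactory;

        Connector(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        void connect(String host) {
            // instead of "new Session(host)", which the test could not control
            Session session = sessionFactory.create(host);
            session.open();
        }
    }

    // In the test, mocking the factory gives full control over the created collaborator:
    SessionFactory factoryMock = mock(SessionFactory.class);
    Session sessionMock = mock(Session.class);
    when(factoryMock.create("host-a")).thenReturn(sessionMock);

    new Connector(factoryMock).connect("host-a");
    verify(sessionMock).open();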
Value objects in tests
○ inputs to the given test
  • constructor parameters of the tested object
  • parameters of the tested method
○ play an important role in the expected behavior
  • they influence the returned value
  • they influence the interactions with collaborators
○ often worth creating new classes for the entities
  • instead of having Strings for many parameters
  • enjoy the benefits of compile-time type safety
○ if you have too many parameters
  • group the related entities together in a new class
Value objects in tests
○ creation of value objects (a sketch of the options below)
  • use well-named constants for very simple types/values
    – simple objects, Strings, dummy numbers
  • extract the creation of instances into methods when their state is important
    – when their state is read
    – easier to change when the constructor changes
    – easier to change when you want to switch between mocked/real instances
  • use mock fields when their state is not important
    – when only their reference is used during interactions
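A sketch of the three options; the User class and its fields are hypothetical:

    // Well-named constant for a very simple value
    private static final String EXISTING_USER_NAME = "existing-user";

    // Creation method: used when the object's state is read by the tested code
    private User createActiveUserNamed(String name) {
        return new User(name, User.Status.ACTIVE);
    }

    // Mock field: used when only the reference matters during interactions
    @Mock
    private User userMock;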
Matchers
○ can be applied for
  • verification of returned values
  • verification of an interaction: expected parameters
  • setting up test conditions: for which parameters you want a given answer
○ useful when
  • you don't have a reference to an expected object
  • you want to verify only some aspects of the object
  • you want to allow any value for an irrelevant parameter
○ recommended library
  • Hamcrest (https://github.com/hamcrest)
  • you can compose matchers and build your own (see the sketch below)
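Composing Hamcrest matchers, and building a custom one with FeatureMatcher; the TestRun class is hypothetical:

    // Composing built-in matchers
    assertThat(result, allOf(notNullValue(), equalTo(expected)));

    // A custom matcher for one aspect of an object, built on FeatureMatcher
    private static Matcher<TestRun> hasRunId(String expectedId) {
        return new FeatureMatcher<TestRun, String>(equalTo(expectedId), "a test run with id", "run id") {
            @Override
            protected String featureValueOf(TestRun actual) {
                return actual.getId();
            }
        };
    }

    // usage:
    assertThat(storedRun, hasRunId("run-42"));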
Avoid duplication in test classes
○ minimize the places we have to change after
  • a change in constructor/method parameters
  • a method renaming
  • splitting a class
○ extract common logic to
  • test helper classes
  • base test classes
  • builder / utility classes in production code (a builder sketch follows below)
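A test data builder sketch; the TestRun value class and its fields are hypothetical. Each test sets only the fields it cares about and stays insulated from constructor changes:

    class TestRunBuilder {
        // sensible defaults, so tests only override what matters to them
        private String id = "default-id";
        private boolean successful = true;

        TestRunBuilder withId(String id) {
            this.id = id;
            return this;
        }

        TestRunBuilder failed() {
            this.successful = false;
            return this;
        }

        TestRun build() {
            return new TestRun(id, successful);
        }
    }

    // usage in a test:
    TestRun failedRun = new TestRunBuilder().failed().build();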
Example test case: test condition setup

@RunWith(MockitoJUnitRunner.class)
public static class RunToolMethod extends FvtRunnableTest {
    @Test
    public void startsFvtProcess_with_properParamsTowardsTheStagingServer_on_targetFasQaTools_then_returnsSuccessResultAfterProcessFinishes_when_noLiveServersInDeployment() {
        // setup of test conditions via talkative helper methods
        FasTestDeployment deployment = createTestDeploymentWithServers(stagingServerMock, testerServerMock);
        FvtRunnable runnable = createFvtRunnable(configMock, deployment);
        fasQaToolClientFactoryReturnsFvtClientFor(testerServerMock, fasQaToolClientMock);
        paramsConverterReturnsParamsForConfigAndServer(configMock, stagingServerMock, processParamsMock);

        // call to the tested method
        TestResult result = runnable.runTool(progressMock);

        // verification of the returned value and of the ordered interactions
        assertThat(result.isSuccessful(), equalTo(true));
        InOrder inOrder = inOrder(fasQaToolClientMock);
        inOrder.verify(fasQaToolClientMock).startTool(processParamsMock);
        inOrder.verify(fasQaToolClientMock).waitUntilToolTerminates();
    }
}

// helper from the FvtRunnableTest base class
protected FvtRunnable createFvtRunnable(FvtConfig config, FasTestDeployment testDeployment) {
    return new FvtRunnable(fasQaToolClientFactoryMock, paramsConverterMock, config, testDeployment);
}
Example test case: tested method

The same test as above; the call to the tested method is explicit in the test method, with meaningful parameters:

    TestResult result = runnable.runTool(progressMock);
Example test case: verification

The same test as above; the verification covers the returned value and the ordered interactions with the collaborator:

    assertThat(result.isSuccessful(), equalTo(true));
    InOrder inOrder = inOrder(fasQaToolClientMock);
    inOrder.verify(fasQaToolClientMock).startTool(processParamsMock);
    inOrder.verify(fasQaToolClientMock).waitUntilToolTerminates();
Example test case: public collaborators

The same test as above; the public collaborators are mocks, passed to the target as constructor parameters:

    protected FvtRunnable createFvtRunnable(FvtConfig config, FasTestDeployment testDeployment) {
        return new FvtRunnable(fasQaToolClientFactoryMock, paramsConverterMock, config, testDeployment);
    }
Example test case: created collaborators

The same test as above; the tool client is created by the target, so the test controls it by stubbing the factory:

    private void fasQaToolClientFactoryReturnsFvtClientFor(TesterServer testerServer, FasQaToolClient returnedClient) {
        fasQaToolClientFactoryReturnsClientFor(testerServer, PROCESS_FVT, returnedClient);
    }
Example test case: value objects

The same test as above; the value objects (configMock, processParamsMock, progressMock, and the server mocks) are mock fields, because only their references are used during the interactions.
Example test case: created value objects

The same test as above; the process parameters are produced by a collaborator, so the test stubs that conversion:

    private void paramsConverterReturnsParamsForConfigAndServer(FvtConfig config, Server targetServer,
            Properties returnedProcessParams) {
        when(paramsConverterMock.createProcessParams(config, targetServer)).thenReturn(returnedProcessParams);
    }
Example for helper methods

public static class StartToolMethod extends FasQaToolClientTest {
    @Test
    public void startsProperProcess_on_daClient_with_passedParams() throws TestRunException {
        …
        client.startTool(toolParamsMock);
        verifyProcessWasStartedOnDaClient(INSTANCE_NAME, TOOL_NAME, toolParamsMock);
    }

    private void verifyProcessWasStartedOnDaClient(String expectedInstanceName, String expectedProcess, Properties expectedParams) {
        verifyProcessCommandWasExecutedOnDaClient(expectedInstanceName, expectedProcess, VERB_START, expectedParams);
    }
}

public static class StopToolMethod extends FasQaToolClientTest {
    @Test
    public void stopsProperProcess_on_daClient() throws TestRunException {
        …
        client.stopTool();
        verifyProcessWasStoppedOnDaClient(INSTANCE_NAME, TOOL_NAME);
    }

    private void verifyProcessWasStoppedOnDaClient(String expectedInstanceName, String expectedProcess) {
        verifyProcessCommandWasExecutedOnDaClient(expectedInstanceName, expectedProcess, VERB_STOP, NO_PARAMS);
    }
}

// generic helper from the FasQaToolClientTest base class, called by the specific helpers above
protected void verifyProcessCommandWasExecutedOnDaClient(String expectedInstanceName, String expectedProcess, String expectedVerb,
        Properties expectedParams) {
    verify(daClientMock).invokeProcessCommand(expectedInstanceName, expectedProcess, expectedVerb, expectedParams);
}
Control the uncontrollable
○ you won't have full control if your tested class
  • uses Random
  • uses time-related functionality (e.g. sleep(), now())
  • is used by multiple threads
○ solution: do not use them directly (a time sketch follows below)
  • Random: pass it as a parameter, maybe hidden behind an interface
  • time: move all time-related functionality behind an interface
    – production code: system time, real sleep
    – test code: own time, non-blocking sleep
  • multithreading
    – use Runnable, Callable and the Executor framework
    – test the main activity separately, in a single thread
    – custom test implementations of collaborators to control the concurrent environment
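A sketch of the time interface; the TimeSource name and its methods are illustrative:

    // Production code depends on this interface instead of System/Thread directly
    interface TimeSource {
        long currentTimeMillis();
        void sleep(long millis) throws InterruptedException;
    }

    // Production implementation: real system time and a real, blocking sleep
    class SystemTimeSource implements TimeSource {
        public long currentTimeMillis() { return System.currentTimeMillis(); }
        public void sleep(long millis) throws InterruptedException { Thread.sleep(millis); }
    }

    // Test implementation: the test owns the clock, and sleep just advances it without blocking
    class FakeTimeSource implements TimeSource {
        private long now = 0;
        public long currentTimeMillis() { return now; }
        public void sleep(long millis) { now += millis; }
    }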
Recommended reading
books
Growing Object-Oriented Software Guided by Tests
(Steve Freeman, Nat Pryce)
The Art of Unit Testing: with Examples in .NET
(Roy Osherove)
xUnit Test Patterns: Refactoring Test Code
(Gerard Meszaros)
Test Driven Development: By Example
(Kent Beck)
Recommended reading
blogs
Steve Freeman's blog
http://www.higherorderlogic.com
Kevin Rutherford's blog
http://silkandspinach.net
Google Testing on the Toilet
http://googletesting.blogspot.nl/search/label/TotT
Guide: Writing Testable Code (from Google)
http://misko.hevery.com/attachments/Guide-Writing%20Testable%20Code.pdf
… and plenty of others