State Verification

The book has now been published and the content of this chapter has likely changed substantially.
Please see page 462 of xUnit Test Patterns for the latest information.
Also known as: State-based Testing

How do we make tests self-checking when there is state to be verified?

We inspect the state of the system under test (SUT) after it has been exercised and compare it to the expected state.

Sketch State Verification embedded from State Verification.gif

A Self-Checking Test (see Goals of Test Automation on page X) must verify that the expected outcome has occurred without manual intervention by whoever is running the test. But what do we mean by "expected outcome"? The SUT may or may not be "stateful", and if it is, it may or may not end up in a different state after it has been exercised. As test automators, it is our job to determine whether the expected outcome is a change in the SUT's final state or whether we need to be more specific about what occurs while the SUT is being exercised.

State Verification involves inspecting the state of the SUT after it has been exercised.

How It Works

We exercise the SUT by invoking the methods of interest. Then, as a separate step, we interact with the SUT to retrieve its post-exercise state and compare this with the expected end state by calling Assertion Methods (page X).
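
As a minimal sketch, assuming a hypothetical Counter class with an increment method and a getCount accessor, the exercise and the verification are two distinct steps:

   public void testIncrement_fromZero() {
      Counter counter = new Counter();
      // Exercise
      counter.increment();
      // Verify the post-exercise state via the SUT's own accessor
      assertEquals("count", 1, counter.getCount());
   }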

Normally, we can access the state of the SUT simply by calling methods or functions that return its state. This is especially true when doing test-driven development because the tests will have ensured that the state is easily accessible. When retrofitting tests, however, we may find it hard to access the relevant state information. In these cases, we may need to use a Test-Specific Subclass (page X) or some other technique to expose the state without introducing Test Logic in Production (page X).
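
For example, the following sketch uses a Test-Specific Subclass to expose state that the production class keeps in a protected field; the RateLimiter class, its requestCount field, and its recordRequest method are all hypothetical:

   // Hypothetical production class that keeps its state in a protected
   // field with no public accessor.
   public class RateLimiter {
      protected int requestCount = 0;
      public void recordRequest() { requestCount++; }
   }

   // Test-Specific Subclass that exposes the state for verification only,
   // keeping Test Logic out of the production class.
   public class RateLimiterTestable extends RateLimiter {
      public int getRequestCount() { return requestCount; }
   }

   public void testRecordRequest_incrementsCount() {
      RateLimiterTestable limiter = new RateLimiterTestable();
      // Exercise
      limiter.recordRequest();
      // Verify the state exposed by the Test-Specific Subclass
      assertEquals("request count", 1, limiter.getRequestCount());
   }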

A related question is "Where is the state of the SUT stored?" Sometimes, the state is stored within the actual SUT while in other cases the state may be stored in some other component such as a database. In the latter case, State Verification may involve accessing the state within the other component (essentially a layer-crossing test) while Behavior Verification (page X) would involve verifying the interactions between the SUT and the other component.
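
The following sketch shows the layer-crossing flavor of State Verification: the test exercises a hypothetical invoiceService that persists invoices and then inspects the database directly using plain JDBC (java.sql). The service, the getInvoiceNumber accessor, the TEST_DB_URL constant, and the table and column names are all assumptions:

   public void testPostInvoice_insertsInvoiceRow() throws Exception {
      // Exercise
      invoiceService.post(inv);
      // Verify the state stored in the database (a layer-crossing test)
      Connection conn = DriverManager.getConnection(TEST_DB_URL);
      try {
         PreparedStatement stmt = conn.prepareStatement(
            "SELECT COUNT(*) FROM invoice WHERE invoice_number = ?");
         stmt.setString(1, inv.getInvoiceNumber());
         ResultSet results = stmt.executeQuery();
         results.next();
         assertEquals("number of invoice rows", 1, results.getInt(1));
      } finally {
         conn.close();
      }
   }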

When To Use It

We should use State Verification when we only care about the end state of the SUT and not how it got there. This is very often the case, because verifying only the end state helps maintain the encapsulation of the SUT's implementation.

State Verification comes pretty naturally when we are building the software inside out. That is, we build the innermost objects first and then build the next layer of objects on top of them. Of course, we may need to use Test Stubs (page X) to control the indirect input of the SUT to avoid Production Bugs (page X) caused by untested code paths. Even then, we are choosing not to verify the indirect outputs of the SUT.
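
As a sketch, the test below installs a hard-coded Test Stub for a hypothetical TaxCalculator collaborator to control the SUT's indirect input and then verifies only the invoice's resulting state; the TaxCalculator interface and the setTaxCalculator, recalculate, and getTaxRate methods are assumptions:

   // Test Stub that supplies a hard-coded indirect input to the SUT.
   public class TaxCalculatorStub implements TaxCalculator {
      public double taxRateFor(Product product) {
         return 0.10;   // canned tax rate
      }
   }

   public void testRecalculate_usesProvidedTaxRate() {
      inv.setTaxCalculator(new TaxCalculatorStub());
      inv.addItemQuantity(product, QUANTITY);
      // Exercise
      inv.recalculate();
      // Verify only the resulting state; the calls made to the
      // TaxCalculator (indirect outputs) are not verified.
      assertEquals("tax rate", 0.10, inv.getTaxRate(), 0.001);
   }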

When we do care about the side effects of exercising the SUT that are not visible in its end state (its indirect outputs), we can use Behavior Verification to observe the behavior directly. We must be careful, however, not to create Fragile Tests (page X) by overspecifying the software.

Implementation Notes

There are two basic styles of implementing State Verification:

Variation: Procedural State Verification

When doing Procedural State Verification, we simply write a series of calls to Assertion Methods that pick apart the state information into pieces and compare them to individual expected values. This is the "path of least resistance" taken by most people new to automating tests. The main disadvantage of this approach is that it can result in Obscure Tests (page X) due to the number of assertions it may take to specify the expected outcome. When the same sequence of assertions needs to be done in many tests or many times within a single Test Method (page X), we also have Test Code Duplication (page X).

Variation: Expected State Specification


Also known as: Expected Object

When doing Expected State Specification, we construct a specification for the post-exercise state of the SUT in the form of one or more objects populated with the expected attributes. We then compare the actual state directly with these objects using a single call to an Equality Assertion (see Assertion Method). This tends to result in more concise and readable tests. We can use an Expected State Specification whenever we have several attributes to verify and it is possible to construct an object that looks like the object we expect the SUT to return. The more attributes we have that need to be compared and the more tests that need to compare them, the more compelling the argument for using an Expected State Specification. In the most extreme cases, when we have a lot of data to verify, we can construct an "expected table" and verify that the SUT contains it. Fit "row fixtures" are a good way to do this in customer tests; tools like DbUnit are a good way to do this using Back Door Manipulation (page X).

When constructing the Expected State Specification, we may find it useful to use a Parameterized Creation Method (see Creation Method on page X) so that the reader is not distracted by the necessary but unimportant attributes of the Expected State Specification.
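
For example, a Parameterized Creation Method for the invoice example used later in this chapter might accept only the attributes the test cares about and default the rest; the setDiscount call and the NO_DISCOUNT constant are assumptions about the LineItem class:

   // Parameterized Creation Method: the caller supplies only the attributes
   // that matter to the test; everything else gets an obvious default value.
   private LineItem createLineItem(Invoice invoice, Product product, int quantity) {
      LineItem expectedItem = new LineItem(invoice, product, quantity);
      expectedItem.setDiscount(NO_DISCOUNT);   // necessary but unimportant
      return expectedItem;
   }

A test can then build its Expected Object with a single call such as createLineItem(inv, product, QUANTITY).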

The Expected State Specification is most often an instance of the same class as the object we expect to get back from the SUT. We may have difficulty using it if that class does not implement equality by comparing attribute values (for example, if it compares object references instead) or if our test-specific definition of equality differs from what the equals method implements. In these cases, we can still use an Expected State Specification if we implement a Custom Assertion (page X) that applies test-specific equality, or we can build the Expected State Specification from a class that implements our test-specific equality. This class can be either a Test-Specific Subclass that overrides the equals method or a simple Data Transfer Object [CJ2EEP] that implements equals(TheRealObjectClass other). Both of these are preferable to modifying (or introducing) the equals method on the production class, a form of Equality Pollution (see Test Logic in Production). When the class is hard to instantiate, we can define a Fake Object (page X) that has the necessary attributes plus an equals method that implements test-specific equality. These last few "tricks" are made possible by the fact that Equality Assertions usually ask the Expected State Specification to compare itself to the actual result rather than vice versa.
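
For example, a Custom Assertion for the LineItem class used in the examples below could implement test-specific equality by comparing only the attributes we care about, regardless of how (or whether) LineItem implements equals:

   // Custom Assertion that implements test-specific equality for LineItem
   // by comparing individual attributes instead of calling LineItem.equals().
   private void assertLineItemsEqual(String message, LineItem expected, LineItem actual) {
      assertEquals(message + " invoice",  expected.getInv(),      actual.getInv());
      assertEquals(message + " product",  expected.getProd(),     actual.getProd());
      assertEquals(message + " quantity", expected.getQuantity(), actual.getQuantity());
   }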

We can build the Expected State Specification during the result verification phase of the test, immediately before it is used in the Equality Assertion, or we can build it during the fixture setup phase of the test. The latter allows attributes of the Expected State Specification to be used as parameters passed to the SUT or as the basis for Derived Values (page X) when building other objects in the test fixture. This makes it easier to see the cause-and-effect relationship between the fixture and the Expected State Specification, which helps us achieve Tests as Documentation (see Goals of Test Automation). This is particularly useful when the Expected State Specification is created out of sight of the test reader.

Motivating Example

The natural example for this pattern is not very good at illustrating the difference between State Verification and Behavior Verification, so I have taken the unusual approach of providing two complete sets of examples. I have chosen to do this because I wanted to provide a set of examples that illustrates the difference between State Verification and Behavior Verification, but those examples might confuse readers trying to understand the basic variations of State Verification.

In this simple example we have a test that exercises the code that adds a line item to an invoice. Because it contains no assertions, it is not a Self-Checking Test.

   public void testInvoice_addOneLineItem_quantity1() {
      // Exercise
      inv.addItemQuantity(product, QUANTITY);
   }
Example NoAssertions embedded from java/com/clrstream/camug/example/test/InvoiceTest.java

To keep the test examples simple, we've chosen to create the invoice and product in the setUp method, an approach I call Implicit Setup (page X).

   public void setUp() {
      product = createAnonProduct();
      anotherProduct = createAnonProduct();
      inv = createAnonInvoice();
   }
Example SimpleSetup embedded from java/com/clrstream/camug/example/test/InvoiceTest.java

Refactoring Notes

The first refactoring we can do is not really a refactoring at all because we are changing the behavior of the tests (for the better): we introduce some assertions that specify the expected outcome. This results in an example of Procedural State Verification because the verification is done inline within the Test Method as a series of calls to built-in Assertion Methods.

We can further simplify the Test Method by refactoring it to use an Expected Object. First, we build an Expected Object by constructing an object of the expected class, or a suitable Test Double (page X), and initializing it with the values that were previously specified in the assertions. Then we replace the series of assertions with a single Equality Assertion that compares the actual result with the Expected Object. We may have to use a Custom Assertion if we need test-specific equality.

Example: Procedural State Verification

Here we have added the assertions to the Test Method to turn it into a Self-Checking Test. Because there are several steps involved in verifying the expected outcome, this test suffers from a mild case of Obscure Test.

   public void testInvoice_addOneLineItem_quantity1() {
      // Exercise
      inv.addItemQuantity(product, QUANTITY);
      // Verify
      List lineItems = inv.getLineItems();
      assertEquals("number of items", lineItems.size(), 1);
      // Verify only item
      LineItem actual = (LineItem) lineItems.get(0);
      assertEquals(inv, actual.getInv());
      assertEquals(product, actual.getProd());
      assertEquals(QUANTITY, actual.getQuantity());
   }
Example InlineEqualityAssertions embedded from java/com/clrstream/camug/example/test/InvoiceTest.java

Example: Expected Object

In this simplified version of the test we are using the Expected Object with a single Equality Assertion instead of a series of assertions on individual attributes.

   public void testInvoice_addLineItem1() {
      LineItem expItem = new LineItem(inv, product, QUANTITY);
      // Exercise
      inv.addItemQuantity(expItem.getProd(), expItem.getQuantity());
      // Verify
      List lineItems = inv.getLineItems();
      assertEquals("number of items", lineItems.size(), 1);
      LineItem actual = (LineItem) lineItems.get(0);
      assertEquals("Item", expItem, actual);
   }
Example ExpectedObjectUsage embedded from java/com/clrstream/camug/example/test/InvoiceTest.java

Because we are also using some of the attributes as arguments of the SUT, we have chosen to build the Expected Object during the fixture setup phase of the test and use the attributes of the Expected Object as the SUT arguments.


