At Black Pepper we use test-driven development (TDD) as a matter of course. We enjoy it and see the benefits all the time. For several years now we've taken this a step further and practised acceptance test-driven development (ATDD), and I thought I'd share the ideas.
The concepts of ATDD are very similar to those of TDD: define a test or set of tests that verify the required piece of functionality, and when those tests pass, you're done. The difference is that in ATDD the tests relate to the functionality defined in an agile story, not to a unit of code.
These acceptance tests then become regression tests that can be run automatically in the same way as your unit tests. At Black Pepper we prefer to extend the concept of “refactor on green” such that we can't add new functionality until all the existing acceptance tests are passing.
When we pick up a story for development, there is a conversation between the developers, QA, BA and the business to determine the precise requirements for the story and drive out the acceptance criteria.
For example, let's take a simple story from a fictitious e-commerce application: “As a customer I want to see a list of my outstanding orders so that I can track their progress”. We can derive the following acceptance criteria:
- If the customer has no orders, the list of outstanding orders is empty
- If all the customer's orders are dispatched, the list of outstanding orders is empty
- If the customer has one or more orders that are not dispatched, they will be displayed in a list showing the order number, date ordered and status
- Outstanding orders will be displayed in descending order of date ordered
We can use these acceptance criteria to write some acceptance tests. Of course, at this point they probably won't run. If this is a story to add new functionality, we will need to implement the functionality first, and may also need to add to the acceptance test framework in order to drive the application under test. If it's a change in existing functionality, the framework code may already exist.
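To make this concrete, here is a minimal sketch of what acceptance tests for the outstanding-orders story might look like in Python. The `Order` model and `outstanding_orders` function are hypothetical stand-ins for the application under test, not our real framework; there is one test per acceptance criterion.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical domain model for the fictitious e-commerce application.
@dataclass
class Order:
    number: str
    ordered_on: date
    status: str  # e.g. "PENDING", "DISPATCHED"

def outstanding_orders(orders):
    """Return orders not yet dispatched, newest first."""
    pending = [o for o in orders if o.status != "DISPATCHED"]
    return sorted(pending, key=lambda o: o.ordered_on, reverse=True)

# One acceptance test per criterion from the story.
def test_no_orders_gives_empty_list():
    assert outstanding_orders([]) == []

def test_all_dispatched_gives_empty_list():
    orders = [Order("1001", date(2024, 1, 5), "DISPATCHED")]
    assert outstanding_orders(orders) == []

def test_undispatched_orders_are_listed():
    pending = Order("1002", date(2024, 1, 6), "PENDING")
    assert outstanding_orders([pending]) == [pending]

def test_orders_sorted_by_date_descending():
    older = Order("1003", date(2024, 1, 1), "PENDING")
    newer = Order("1004", date(2024, 2, 1), "PENDING")
    assert outstanding_orders([older, newer]) == [newer, older]
```

In practice these tests would drive the application through its real interface rather than calling a function directly, but the shape is the same: each acceptance criterion becomes an executable check.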
Once we're satisfied that the functionality is complete, the acceptance tests will pass and the story is ready for showcase.
If at some point a defect is discovered, our first step is to develop an acceptance test that recreates it. We then fix the bug, and when the test passes, we're done. The test becomes just another part of the acceptance test suite, providing regression coverage from then on.
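As an illustration, suppose a defect is reported against the outstanding-orders story: cancelled orders wrongly appear as outstanding. A sketch of the defect-first cycle, with a hypothetical function and statuses:

```python
# Hypothetical defect: cancelled orders wrongly appear as outstanding.
# The test below is written first to recreate the defect; the fix
# (excluding all terminal statuses, not just "DISPATCHED") makes it
# pass, and the test then stays in the suite as a regression test.

TERMINAL_STATUSES = {"DISPATCHED", "CANCELLED"}

def outstanding_orders(orders):
    return [o for o in orders if o["status"] not in TERMINAL_STATUSES]

def test_cancelled_orders_are_not_outstanding():
    orders = [{"number": "1005", "status": "CANCELLED"}]
    assert outstanding_orders(orders) == []
```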
Acceptance tests are executed automatically as part of our continuous integration pipeline. Once a build has passed its unit tests, it is promoted to the next stage as a candidate for acceptance testing. When the acceptance test builder becomes available, it picks up the latest successful unit-test build and runs the acceptance tests; when those pass, the build is considered a release candidate.
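The promotion gates can be sketched roughly as follows; the stage names and the build representation are illustrative, not a real CI configuration:

```python
def promote(build):
    """Decide a build's pipeline stage from its test results.

    `build` is a hypothetical dict; acceptance results may be absent
    when the acceptance test builder hasn't picked the build up yet.
    """
    if not build["unit_tests_passed"]:
        return "rejected"
    if build.get("acceptance_tests_passed") is None:
        return "awaiting-acceptance-tests"
    if build["acceptance_tests_passed"]:
        return "release-candidate"
    return "rejected"
```

The key property is that a build can only become a release candidate by passing both gates in order, which is exactly the "no new functionality until all acceptance tests pass" rule expressed as pipeline logic.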
We now have a set of readily repeatable functional tests that verify the behaviour of our application. This frees the QA function to do more exploratory and qualitative testing, and to think about what should be tested, rather than spending time performing repetitive manual tests.
As you can see, there's no rocket science here. The ideas are exactly the same as TDD for unit tests:
- write the test first
- write code to pass the test
- test automatically and often
Simple concepts, big benefits.
In my next blog I'll look in more detail at how we write the acceptance tests, using a domain-specific language (DSL) to make them accessible to the business in familiar terms.