Original post - June 9th, 2010
The Cruise team has been using Twist for user acceptance tests for some time now. Like many teams we have had our fair share of hurdles with acceptance and regression testing. Over the years we have settled on a process that works for us. Every team is different of course, so don't take this as a canonical process. We hope it will at least give you some ideas that you can try on your project.
Acceptance Test First?
Some teams have had good success with doing 'Acceptance test first'. When a developer pair picks up a Story, they sit with a Tester and the BA to develop an acceptance test that exercises the functionality of the Story. Ideally the acceptance tests are written in the language of the user and exercise the real functionality of the system. Of course they fail at first, because the story is not yet implemented.
Then the developers go away and start working on the implementation. They write unit tests (test first of course) as they implement the story, and continue until the story is complete. Then they run the acceptance tests, and if they pass, the story is marked as done and passed to the testers or users for sign off. In this model, the acceptance tests are owned primarily by the BA and tester, and the developers should not change the acceptance tests without consulting them.
We tried this approach and found that it did not work for us. Since we are developing a product, many of the features we build are completely novel. Changes to the UI, and even to the functionality itself, are often necessary while a story is being developed. We did not want to sacrifice the close working relationship between our UI designers, BAs and developers for a more formal 'throw it over the wall' approach, especially since we were all sitting right next to each other.
So we tried a different approach.
Acceptance Test After?
We like test-first. It is part of our DNA. But in some cases it may not be the right solution. We felt that we were spending too much time re-working the tests, and wasting the work that we had done in writing the acceptance tests first. We don't like waste. Our tester's time is very valuable (we only have one) and if we are just going to throw away most of that work then something is wrong.
Instead of writing the tests first, we just had a discussion to kick off the story. This discussion takes no longer than ten minutes and involves the whole team. In it we talk about the story and how it should work, and then the developer pair goes off and starts coding.
When they think they are done, they call for a 'mini-showcase' and the whole team stops work and gathers around their pairing station. They demonstrate the story, and then the rest of the team throws scenarios and corner cases at them. Most of the time they can demonstrate those corner cases working. If not, they go back to fix them and call for another mini-showcase when they are ready. We uncover a lot of corner cases and bugs this way, and those bugs are fixed then and there rather than stagnating in the backlog. Bugs that sit around even for a short time become much harder to fix, so this is a good thing.
But we're not done yet! After the team accepts that the story is done, the developers sit down with the tester and discuss the pieces of the story that need to be added to the acceptance test suite. They take into account which parts of the functionality are already well covered by unit tests, and which parts of the UI need special care because those interactions are fragile. Typically this means we end up building UI tests that validate the key interactions rather than comprehensively exercising the entire UI.
This was a much better approach, but after a while Vipul, our tester, got frustrated. He pointed out that in practice he would not have a conversation with the developers until after the mini-showcase. Since he could never know when this would be, and there were so many stories in play, he only had a short time to review a story before being asked for his opinion on what needed to be tested. It wasn't working.
Acceptance Test Before and After
It seemed that we had half the answer. The developers were happy, and the BA and UX felt that there was a good conversation going on during the story. But how could we address the frustration that Vipul, our tester, was feeling? We decided to move to a hybrid approach.
When starting a story, the developers call out 'Story Kickoff!'. The whole team gathers around and has a conversation about the story. Then, before starting the actual development, the tester and the developers work together to design a sketch of the acceptance tests. These are not executable tests. We mark them as Manual tests in Twist, and add the Story number as a tag on the new tests and on any other tests that are affected by the new functionality. This means that they don't get run as part of the automated test suite. The developers go off and start working on the story. Our tester might then add some more edge cases to the manual tests, since he knows that the story is in progress.
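To give a feel for what this looks like, here is a rough sketch of the kind of scenario the tester might capture at this point. The feature, story number and step wording are invented for illustration, and the exact scenario format depends on your version of Twist, but the important part is that the scenario carries the Manual marker and the Story tag so it stays out of the automated runs and is easy to find later:

Scenario: Pipeline can be paused from the dashboard
Tags: story-1234, Manual

* Navigate to the pipelines dashboard
* Pause the pipeline "acceptance" with reason "deploying a config change"
* Verify that the pipeline "acceptance" is shown as paused, with the reason displayed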
The developers have a better sense for when the story is done, since they have the manual tests to look at to identify the kinds of cases they should be considering. When the mini-showcase happens the team looks at the manual tests that have been written, and has a more concrete discussion about which ones to automate, and which ones to remove or keep as manual tests.
Our final process
Our final development process works like this:
Developers and/or Testers write manual Twist tests – after the story kickoff the developers and the Tester talk about the kinds of acceptance tests that should be automated. The Tester may write these using Twist. He tags them with the Story number and marks them as Manual tests.
Developers implement the Story – of course they write unit tests etc. as they implement the Story. During this process there may be changes made based on technical discoveries or time constraints.
Mini showcase with the whole team – when the developers think they are done they prepare for a demonstration of the feature and call out “Mini-showcase!”. Again the whole team gathers around their computer for the demonstration. They pull up the Story card in Mingle and demonstrate the features and acceptance criteria as written in the Story. Other team members can suggest scenarios and the developers will demonstrate them. It is quite common for bugs to be found during this process. If so, the developers continue implementing the story to address these issues.
Developers and Testers automate Twist tests – when the mini-showcase has been successful, the developers and the Testers discuss the appropriate level of automation for the Story. Often this means adjusting existing scenarios to use the new functionality rather than writing specifically targeted acceptance tests. The manual tests that were written for this Story are then marked as automated, and we implement the underlying steps in Twist (a sketch of what such a step implementation might look like follows this list).
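To make that last step concrete, here is a minimal sketch of the kind of Java class that might back the example scenario sketched earlier. Twist derives the workflow class and method stubs from the scenario text, so the class and method names below are only illustrative, and the element ids, URL and the use of Selenium WebDriver are assumptions made for this example; a team might just as easily drive the UI with Sahi or another driver.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Illustrative step-implementation class for the scenario sketched above.
// The class and method names are invented for this example; quoted values
// in a step arrive as method parameters.
public class PipelineCanBePausedFromTheDashboard {

    private final WebDriver driver;

    public PipelineCanBePausedFromTheDashboard(WebDriver driver) {
        this.driver = driver;
    }

    // Backs: Navigate to the pipelines dashboard
    public void navigateToThePipelinesDashboard() {
        driver.get("http://localhost:8153/pipelines"); // hypothetical URL of a local Cruise server
    }

    // Backs: Pause the pipeline "acceptance" with reason "deploying a config change"
    public void pauseThePipelineWithReason(String pipeline, String reason) {
        driver.findElement(By.id("pause-" + pipeline)).click();      // hypothetical element ids
        driver.findElement(By.id("pause-reason")).sendKeys(reason);
        driver.findElement(By.id("confirm-pause")).click();
    }

    // Backs: Verify that the pipeline "acceptance" is shown as paused, with the reason displayed
    public void verifyThatThePipelineIsShownAsPausedWithTheReasonDisplayed(String pipeline) {
        String status = driver.findElement(By.id("status-" + pipeline)).getText();
        if (!status.toLowerCase().contains("paused")) {
            throw new AssertionError("Expected pipeline '" + pipeline + "' to be paused, but saw: " + status);
        }
    }
}

One nice side effect of automating at this point is that the step wording chosen with the tester at kickoff is preserved, so the automated scenario still reads in the language of the user.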
Note that although this process sounds quite involved, we do not model all these detailed steps in Mingle, our tracking tool. The process we describe here relies on the integrity and self-motivation of the development team. Modeling all these steps in Mingle would lead to a very complex cardwall, and we would lose the flexibility and ability to continually evolve. In fact, by the time you read this it will probably have evolved again.