What About Acceptance Testing

So XP has always talked about the “two” kinds of testing: Unit and Acceptance. I am starting to question the validity of having two nice and neat forms of testing, both of which are fully automatable. I am spending this week with the likes of Brian Marick, James Bach, Cem Kaner, Lisa Crispin, and Bret Pettichord, and my eyes are being opened :) Apparently, while I haven’t been paying attention, some very smart people have been working on the testing side of software just as much as we’ve been working on the development side, as I learned in “Testing 101.”

Testing 101

In a very late-night Testing 101 discussion with Cem and James, we learned about several “missions” of testing:

  • valuable bug finding
  • change detection
  • ship decision support
  • contract fulfillment verification

and according to Cem, there are 11 different “styles” of testing:

  • claims-based testing
  • regression
  • exploratory/heuristic
  • scenario
  • user
  • state-model based
  • high volume
  • risk-focused
  • domain
  • stress
  • function

A couple of the more interesting styles were exploratory and scenario testing.

Exploratory testing is a systematic, directed way of approaching a program to find VALUABLE bugs. It’s not easy, and it finds bugs that are not found by automated tests. This isn’t new, and someone (tester, developer, or analyst) always does occasional exploratory tests on projects I’ve been on. Why? Because it finds bugs. What’s interesting is that we try to fool ourselves into thinking these tests are not necessary just because they are not automated. Some tests are actually better as occasional exploratory tests, like bringing a user in to see what they do. Like thinking about the riskiest bugs, then seeing if the existing tests would catch them. Like directed testing at new areas of the app where automated tests probably have holes. Like scenario testing (below).

Scenario testing questions the spec that you are writing against. It’s one more way of helping a customer accurately represent all the stakeholders of a project. This is really cool, and it’s a check that would probably have gone a long way toward making several projects I’ve been on more successful.

A very interesting thing I heard from these testers was that they thought many function and domain tests belonged in developer unit tests, as a more appropriate place for them. This makes sense to me: if a tester wants a domain test for X, why not pair with a developer on it?
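As a sketch of what that pairing might produce, here is a domain (boundary-value) test written as an ordinary developer unit test. The discount rule is entirely hypothetical, invented for illustration: 10% off order totals between 100 and 1000 inclusive.

```python
# Hypothetical domain rule (not from the post): 10% discount on
# order totals in the range 100..1000 inclusive, nothing otherwise.
def discount(total):
    return round(total * 0.1, 2) if 100 <= total <= 1000 else 0.0

def test_discount_boundaries():
    # Domain testing concentrates on the edges of the valid range,
    # where off-by-one mistakes in the comparisons like to hide.
    assert discount(99.99) == 0.0      # just below the range
    assert discount(100) == 10.0       # lower edge
    assert discount(1000) == 100.0     # upper edge
    assert discount(1000.01) == 0.0    # just above the range

test_discount_boundaries()
```

A tester brings the boundary analysis; the developer makes it a cheap, permanent unit test.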

How do acceptance tests fit into all of this?

Acceptance testing’s primary missions are communication and bug prevention. When acceptance tests are automated, their secondary mission is change detection. The primary style of acceptance testing is claims-based testing, because in XP they don’t just test against the spec, they ARE the spec. They may also be function tests.
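One minimal way to picture “the tests ARE the spec” is a table of customer claims that is both readable documentation and an executable check. Everything here is an assumed example: the `shipping_cost` rule and its numbers are invented, not from the post.

```python
# Hypothetical system under test: base cost of 5.00 plus 2.00/kg,
# doubled for express shipping. Invented purely for illustration.
def shipping_cost(weight_kg, express):
    base = 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

# The spec, as a table the customer can read and edit directly.
# Each row is one claim; running the claims IS running the spec.
CLAIMS = [
    # weight_kg, express, expected_cost
    (1.0,  False,  7.0),
    (1.0,  True,  14.0),
    (10.0, False, 25.0),
]

def failed_claims():
    """Return every claim the system currently violates."""
    return [(w, e, want, shipping_cost(w, e))
            for w, e, want in CLAIMS
            if shipping_cost(w, e) != want]

assert failed_claims() == []
```

The point is the shape, not the arithmetic: the customer owns the table, the developers own the code that satisfies it.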

This leaves a LOT of ground uncovered. For instance, bug finding. Acceptance tests will (hopefully) prevent entire categories of bugs, most notably those that involve developers not understanding customer requirements. But beyond that, what about other bugs? I’ve never been on a project where we trusted our acceptance tests completely, and I would think it foolish to do so.

I have always used all my old acceptance tests as my regression test suite, but the problem with this is that it doesn’t scale. No matter how well you factor your tests, once you pass a certain point, test times start growing uncontrollably, and certain categories of changes start to break hundreds of tests. What if your regression tests didn’t HAVE to be your acceptance tests? What if they were a more carefully selected smaller subset?
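One way such a subset could be selected is by tagging tests with risk areas and running only the tags you care about for a given change. This is a sketch under assumptions, not a real framework; the registry, decorator, and tags are all made up.

```python
# A tiny illustrative test registry: each test is tagged with the
# risk areas it covers, so a regression run can pick a subset
# instead of replaying the entire acceptance suite.
REGISTRY = []

def regression(*tags):
    def wrap(fn):
        REGISTRY.append((set(tags), fn))
        return fn
    return wrap

@regression("billing", "high-risk")
def test_invoice_line_total():
    assert 5 * 100 == 500  # stand-in for a real billing check

@regression("reporting")
def test_report_totals():
    assert sum([1, 2, 3]) == 6  # stand-in for a real reporting check

def run_subset(wanted_tags):
    """Run only the tests whose tags overlap the wanted set;
    return the names of the tests that ran."""
    ran = []
    for tags, fn in REGISTRY:
        if tags & wanted_tags:
            fn()
            ran.append(fn.__name__)
    return ran

# A risky change to billing reruns only the billing tests.
assert run_subset({"high-risk"}) == ["test_invoice_line_total"]
```

Selection by risk keeps the regression run fast while the acceptance suite keeps growing with the spec.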


Acceptance tests are good. By straddling the fence between customer and developer, they bring the two worlds together. They provide an executable spec.

More later…

