Posts Tagged ‘testing’

TDD Is Only One Piece of the Puzzle

Friday, November 7th, 2003

I posted this to TheServerSide in a thread about a TDD article by Dan North (http://www.sys-con.com/story/?storyid=37795&DE=1).

As Dan says, TDD is a tactical thing. When you’re in the middle of coding, it helps keep you from adding unnecessary complexity, and it lets you see how your code is going to be used even before it is.

However, that’s only part of the story. The other pieces are enough design (up front or otherwise) and merciless refactoring later.

If you don’t continuously think about where you’re going and how things fit together, and if you don’t keep your code clean by refactoring constantly, all the testing in the world is not going to give you a good architecture. To come up with a good overall system architecture, especially on large projects, you do have to think at least a little about what direction you’re going in ahead of time. The beauty of good unit testing is that it’s not a big deal when you’re wrong (and you will be: even if you’re right today, you’ll be wrong when those new requirements come in).

TDD helps you design the detailed stuff right the first time, and enables you to fix the bigger stuff later.

In a system with very low duplication and high automated test coverage, you don’t have to get it right the first time, because changing your mind later is about as expensive as changing it earlier. If you haven’t worked on such a system, you should; it’s a completely different experience. No matter how dramatic a change you make, within a few seconds you know whether you broke anything.

Only unit testing can give you a system like this, because any other kind of testing introduces duplication. Say you have 500 automated end-to-end tests with 95% test coverage. Let’s set aside the fact that it takes an hour to run them all. Suddenly there are changes to the code that you can no longer make, because they would break so many of the tests that the team couldn’t afford the time to fix them all. So either crud builds up in the code, or the tests get thrown away. Either way, BAD.

Unit tests, on the other hand, can and should be as fine-grained as possible, so that most changes, no matter how radical, don’t affect many tests. You can use techniques like mocking to help with this. Ideally, a test should test exactly one method/class/small chunk of functionality, and it shouldn’t break if anything else changes.
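
For illustration, here is a minimal sketch of that idea in NUnit, with a hand-rolled mock standing in for a collaborator. The PriceCalculator example, the ITaxRates interface, and all the names are hypothetical, not from any particular project; the point is that the test pins down exactly one class and won’t break when anything else changes.

    using NUnit.Framework;

    // The collaborator hides behind an interface so the test can
    // substitute a mock for it.
    public interface ITaxRates
    {
        decimal RateFor(string region);
    }

    // Hand-rolled mock: returns a canned rate and records what it was asked.
    public class MockTaxRates : ITaxRates
    {
        public string RegionAsked;

        public decimal RateFor(string region)
        {
            RegionAsked = region;
            return 0.10m;
        }
    }

    // The one small chunk of functionality under test.
    public class PriceCalculator
    {
        private ITaxRates rates;

        public PriceCalculator(ITaxRates rates) { this.rates = rates; }

        public decimal Total(decimal net, string region)
        {
            return net * (1m + rates.RateFor(region));
        }
    }

    [TestFixture]
    public class PriceCalculatorTest
    {
        [Test]
        public void AddsTaxFromRateSource()
        {
            MockTaxRates mock = new MockTaxRates();
            PriceCalculator calc = new PriceCalculator(mock);

            Assert.AreEqual(110m, calc.Total(100m, "OR"));
            Assert.AreEqual("OR", mock.RegionAsked);
        }
    }

A change to the real tax-rate lookup (a new database, a new web service) can’t break this test; only a change to PriceCalculator itself can.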

And TDD does scale. I’ve seen teams of 30 developers with suites of thousands of unit tests that run in less than a minute. And their code stays malleable: when people think of a way to make it better, they can. In fact, I wouldn’t work on a team that size if people weren’t writing tests, because the code would deteriorate into spaghetti.

For the record, I’m also a huge advocate of having testers and other types of testing: acceptance, integration, end-to-end, functional, exploratory, etc. These all come in very handy and find holes in your product, your unit tests, and your requirements.

Using NFit

Monday, August 25th, 2003

So I’ve started committing some of the tweaks I’ve made to the .NET version of FIT to the SourceForge project Steve Freeman and I started at http://sourceforge.net/projects/nfit/. I’d like to talk about how my team structures its code around FIT.

For context, this is a C# project, and we’re using VS.NET, NAnt, and SLiNgshot to build, along w/ the usual suspects: NUnit, CruiseControl.Net, etc.

Directory Structure

We have a directory structure that looks like:

  • doc/
      • Iteration1/
          • .html – all tests associated w/ a story
          • .html
      • Iteration2/
          • .html
      • Iteration3/
      • stories.html – list of all stories in an AllFiles-like fixture
  • lib/
  • src/
      • project 1/
      • project 2/
      • fit/
          • run.aspx – copied from NFit
          • custom fixture 1.cs – fixture used in a FIT test
          • custom fixture 2.cs
      • project 3/
      • foo.sln – solution file

We’ve argued a lot over what the structure of the source should look like on a .NET project, and our current solution is a src dir, inside of which is one dir for each project plus a solution file. It’s flat, and it’s easy to see where everything is.

All the FIT tests go into the doc dir. fit is a VS web project that references all the other projects, along w/ the FIT DLLs. It includes all the fixtures our tests will need and the run.aspx file from NFit.
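
For illustration, a custom fixture is usually just a small class in the fit project. Here’s a minimal sketch, assuming the .NET port’s fit.ColumnFixture base class; the DiscountFixture name and its rule are hypothetical, not from our actual codebase:

    using fit;

    // Hypothetical column fixture: each public field is an input column and
    // each public method a calculated column in the test page's HTML table.
    public class DiscountFixture : ColumnFixture
    {
        public decimal amount;      // bound to the "amount" column

        public decimal discount()   // bound to the "discount()" column
        {
            return amount >= 1000m ? amount * 0.05m : 0m;
        }
    }

A test page in doc/ would then hold a table whose first row names DiscountFixture, with a header row of amount and discount() and one row per case.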

Running FIT Dynamically

We’ve found it totally necessary to be able to run FIT tests dynamically in addition to running them from the command line. Ward has his run.cgi on http://fit.c2.com/ to let you click ‘run’ on a FIT test. I’ve just checked Run.aspx in to NFit, which will let you do the same thing.

This is how we use it:

We have two web shares. /doc points to our doc directory and holds all the tests. /fit points to our src/fit directory and holds our copy of Run.aspx along w/ all the code our tests run against. We put a run link at the top of each test that points to /fit/run.aspx, and when we browse our tests we do it through our local IIS, using something like http://localhost/doc/stories.html.

Because fit is just another project along w/ all our other projects in VS, we get linking, debugging, and building for free, both inside and outside Visual Studio.

Structuring FIT Tests

When you’re writing so-called ‘acceptance tests’, it’s important to remember that there are several different missions of testing, and several types of tests to achieve them. The way you structure your tests should reflect this; see ‘What About Acceptance Testing’ below.

In the doc directory, we group our tests into files by the story they test against, and the stories by the iteration they are completed (or scheduled to be completed) in. This of course assumes that every test “belongs” to one and only one story. Up until now that’s been okay; we’ll probably add another place to put tests when and if that becomes a problem. It has been good for teaching our team to think about testing functionality in terms the customer can see.

Now we could just run all our tests automatically, but we wanted one page that everyone could go to to see our team’s status. So we take the spreadsheet of stories that our client keeps, complete with their points, their iteration, and their status (done or not), and we point an ASPX page at it to provide an HTML view that FIT can run against. So when you go to http://localhost/doc/, you see a table for every iteration, each of which contains all the tests in that iteration. Only the stories that have been marked done in the spreadsheet get run.
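
A minimal sketch of what such a status page’s code-behind might look like; the Story type, the StorySpreadsheet reader, and the markup are all assumptions standing in for our actual code, just the shape of the idea:

    using System;
    using System.Web.UI;

    // Hypothetical story record; in reality these come from the client's
    // spreadsheet.
    public class Story
    {
        public string Name;
        public int Iteration;
        public int Points;
        public bool Done;
    }

    // Hypothetical spreadsheet reader, stubbed out for the sketch.
    public class StorySpreadsheet
    {
        public static Story[] Load()
        {
            return new Story[0];
        }
    }

    // Renders the stories as an HTML table that FIT can run against.
    public class StoriesPage : Page
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            Response.Write("<table>");
            foreach (Story story in StorySpreadsheet.Load())
            {
                // Only stories the client has marked done get run.
                if (!story.Done) continue;
                Response.Write(String.Format(
                    "<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>",
                    story.Iteration, story.Name, story.Points));
            }
            Response.Write("</table>");
        }
    }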

So we have one page that tells you the status of every story we’ve ever done, and how we’re doing on the current iteration as well, along w/ our velocity of course :)

How about if my return value is an XML document? How does FIT deal with that? How do I pass two parameters to a method?

TIA

– Yazid Arezki

What About Acceptance Testing

Monday, June 16th, 2003

So XP has always talked about the “two” kinds of testing: Unit and Acceptance. I am starting to question the validity of having two nice and neat forms of testing, both of which are fully automatable. I am spending this week with the likes of Brian Marick, James Bach, Cem Kaner, Lisa Crispin, and Bret Pettichord, and my eyes are being opened :) Apparently, while I haven’t been paying attention, some very smart people have been working on the testing side of software just as much as we’ve been working on the development side, as I learned in “Testing 101”.

Testing 101

In a very late-night Testing 101 discussion with Cem and James, we learned about several “missions” of testing:
Missions

  • valuable bug finding
  • change detection
  • ship decision support
  • contract fulfillment verification

and, according to Cem, there are 11 different “styles” of testing:
Styles

  • claims based testing
  • regression
  • exploratory/heuristic
  • scenario
  • user
  • state model based
  • high volume
  • risk-focused
  • domain
  • stress
  • function

A couple of the more interesting styles were exploratory and scenario testing.

Exploratory testing is a systematic, directed way of approaching a program to find VALUABLE bugs. It’s not easy, and it finds bugs that are not found by automated tests. This isn’t new; someone (a tester, developer, or analyst) always ends up doing occasional exploratory tests on projects I’ve been on. Why? Because it finds bugs. What’s interesting is that we try to fool ourselves into thinking these tests are not necessary just because they are not automated. Some tests are actually better as occasional exploratory tests: bringing a user in to see what they do; thinking about the riskiest bugs, then seeing if the existing tests would catch them; directing testing at new areas of the app where the automated tests probably have holes; scenario testing (below).

Scenario testing questions the spec that you are writing against. It’s one more way of helping a customer accurately represent all the stakeholders of a project. This is really cool, and it’s a check that would probably have gone a long way toward making several projects I’ve been on more successful.

A very interesting thing I heard from these testers was that they thought many function and domain tests belonged in the developers’ unit tests, as a more appropriate place for them. This makes sense to me: if a tester wants a domain test for X, why not pair with a developer on it?

How do acceptance tests fit into all of this?

Acceptance testing’s primary missions are communication & bug prevention. When acceptance tests are automated, their secondary mission is change detection. The primary style of acceptance testing is claims-based testing, because in XP the tests don’t just test against the spec, they ARE the spec. They may also be function tests.

This leaves a LOT of ground uncovered. For instance, bug finding. Acceptance tests will (hopefully) prevent entire categories of bugs, most notably those that involve developers not understanding customer requirements. But beyond those, what about the other bugs? I’ve never yet been on a project where we trusted our acceptance tests completely, and I would think it foolish to do so.

I have always used all my old acceptance tests as my regression test suite, but the problem with this is that it doesn’t scale. No matter how well you factor your tests, once you pass a certain point, test times start growing uncontrollably, and certain categories of changes start to break hundreds of tests. What if your regression tests didn’t HAVE to be your acceptance tests? What if they were a more carefully selected, smaller subset?

BUT

Acceptance tests are good. By straddling the fence between customer and developer, they bring the two worlds together. They provide an executable spec.

More later…