Tuesday, March 20, 2012

Testing, one, two, three…

So these past few weeks have been more about the management side of my job than the testing side; I was starving for some great testing topic to blog about.  Well, who would have thought that a great testing debate would literally come to my office door?

A colleague of mine, a developer on another team whom I’ve worked with before, came by my office and asked for five minutes of my time to “Settle a testing debate between himself and another developer on his team.”

A break from the monotony of management?  A chance to engage in a discussion where the outcome was bringing two developers into the warm, bright light of testing?

Try and stop me. 

So the debate was essentially this:

The first developer thought that the test suite they had should be re-organized so that the tests run in a specific order, establishing dependencies on the outcome of each test and only running the next test if the previous one passed.

The second developer thought that the work involved in doing so was way too much and that the first developer was “crazy.”

His words, not mine.

So they asked me to weigh in, as an impartial (read: ‘not on their team’) observer and ‘testing expert.’
Again, their words, not mine. 

But I totally ate up the compliment and offered my voice to their debate.  I told them:

“You’re both right! And wrong!”

At this point, they were re-thinking bringing me in on this, I’m sure.  So I explained what I meant.

Organizing your tests is a really good idea.  It helps you keep track of your coverage and, when tests fail (or don’t meet the expected results), zero in on problem areas.  However, I told them, you’re crazy to make every test dependent on the ones before it.

Developer A, who had proposed organizing the tests, offered an example of what he was trying to do.

“Say you have a function that squares a number and a test associated with it.  You’re not going to run that test if the test for the multiplier function, which the square function is using, fails.  So the tests should be ordered and linked somehow so we don’t waste time fixing a square function when the real issue is that the multiplier function is broken!”

I was beginning to see that maybe we were looking at this the wrong way.  We should, I said, look at this not from the tests’ point of view, but from the functions’.  What you want to do is not organize the tests, per se, but rather organize what they’re testing; group them by the functions being tested.  Since these tests were essentially core unit tests, this was probably the best way of doing it.
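To make his example concrete, here’s a minimal sketch of what that grouping might look like.  This is my own hypothetical Python, not their actual code; the function and class names are invented for illustration:

    import unittest

    # Hypothetical versions of the two functions from his example:
    # square() is built on top of multiply(), so a broken multiply()
    # drags square()'s tests down with it.
    def multiply(a, b):
        return a * b

    def square(x):
        return multiply(x, x)

    # One test class per function under test.  When TestMultiply fails,
    # you know the core multiplier is the problem, instead of wasting
    # time "fixing" a square function that was never broken.
    class TestMultiply(unittest.TestCase):
        def test_positives(self):
            self.assertEqual(multiply(3, 4), 12)

        def test_negatives(self):
            self.assertEqual(multiply(-3, 4), -12)

    class TestSquare(unittest.TestCase):
        def test_square(self):
            self.assertEqual(square(5), 25)

    if __name__ == "__main__":
        unittest.main()

Grouped this way, a failing TestMultiply tells you to stop looking at square() entirely, which is exactly the time-saving Developer A was after.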

I then told them my planned idea for my own team and the couple of hundred test cases that I was looking at.  I also had to pull in my Sci-Fi geek cred at this point.  Ok, we’re a tester and two developers debating test methodology on a Friday afternoon; there were bound to be some geek gauntlets thrown.

My idea, which I thought would work for them, was to identify the core functions of their system and the tests associated with them.  Call these your “Level One” tests, which should all pass prior to running your Level Two tests and so on. 

So when you test, it’s like you’re running a Level One Diagnostic.

“Nice.  Star Trek.”

Like I said, there was bound to be a Geek Out at any time here.

So test organization is really not about the tests, but about what you’re testing.  If you order your tests to verify the core, crucial pieces first, then move on to the secondary, tertiary, and so on, you can ensure that the basics are solid before testing the complex.  This will also help the developers identify what was changed/broken/altered in the last build.
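If you wanted to wire up that tiering without hand-building dependencies between individual tests, something like pytest’s markers could do it.  A hedged sketch, assuming pytest and marker names I made up (level1, level2), not whatever their suite actually used:

    import pytest

    def multiply(a, b):
        return a * b

    def square(x):
        return multiply(x, x)

    @pytest.mark.level1   # core "Level One Diagnostic": run this tier first
    def test_multiply():
        assert multiply(3, 4) == 12

    @pytest.mark.level2   # built on the core: only worth running once Level One passes
    def test_square():
        assert square(5) == 25

Register the markers in pytest.ini (a “markers =” section) so pytest doesn’t warn about unknown marks, then run the levels as separate passes:

    pytest -m level1 && pytest -m level2

The shell’s && gives you the “only run Level Two if Level One passed” rule for free, at the level of whole tiers rather than chaining individual tests together, which avoids most of the crazy amount of work Developer B was worried about.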

Developer B, by the way, still thought it was a crazy amount of work to do.
