Monday, April 23, 2007

Classifying tests

When defining a testing strategy, it's important to take a step back and look at what kinds of tests we would like to develop. Each type of test has a different scope and purpose, and can be developed by different roles in an organization. Most kinds of testing are concerned with verification and validation, also known as "V&V". Verification deals with lower level, "am I building the software right" kinds of tests. This usually entails some form of unit testing. Validation answers the question "am I building the right software", which focuses on higher level or business related tests. I've usually seen tests broken down into the following categories:

  • Unit tests
  • Integration tests
  • Functional tests

I've ignored regression and performance tests, because each of those can take the form of any of the types above. These test definitions come more from the agile school of testing.

Unit tests

So you just wrote a piece of code to implement some feature. Does it work? Can you prove it? If I come back tomorrow, can you prove it again? What about next week, or next month, or next year? If another developer modifies your code, can they prove it still works, and didn't break your functionality?

Unit testing answers the first two questions. Namely, does my code do what I think it's supposed to do? Continuous Integration will answer the rest.

Scope

The scope of a unit test is a single unit, usually a single public member. Unit tests should test only that unit, with no interaction with external dependencies (see the example after the list below). Unit tests can be defined as:

  • Isolated from external dependencies
  • Repeatable
  • Testing only one thing
  • Easily readable
  • Order independent
  • Side effect free
  • Having only one reason to fail
  • Written by the developer
  • Fast to run
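
As a minimal sketch of what these properties look like in practice, here's an NUnit test of a hypothetical SlugGenerator class (both names are invented for illustration):

    using NUnit.Framework;

    // Hypothetical unit under test, included so the example is self-contained.
    public class SlugGenerator
    {
        public string Generate(string title)
        {
            return title.Trim().ToLower().Replace(' ', '-');
        }
    }

    [TestFixture]
    public class SlugGeneratorTests
    {
        [Test]
        public void Generate_ReplacesSpacesWithDashes()
        {
            // No database, no network, no shared state:
            // isolated, repeatable, order independent and fast.
            SlugGenerator generator = new SlugGenerator();

            string slug = generator.Generate("Classifying Tests");

            // One logical assertion, so there is only one reason to fail.
            Assert.AreEqual("classifying-tests", slug);
        }
    }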

If a unit test hits the database, then it's no longer a unit test because it depends on an external module. If the database goes down, I'm not testing the unit anymore, I'm testing to see if the database is up. Other external dependencies include:

  • Databases
  • HttpContext items (Querystring, session, etc.)
  • Web services
  • File I/O
  • Anything else that accesses the network or the disk

If I can remove external dependencies of a class during unit testing, I can greatly reduce the number of reasons for a unit test to fail. Ideally, my tests are also automated so they can be easily run. On my last project, we started writing unit tests on all new development, and regression tests on existing behavior. We wound up with over 1200 unit tests in about six months for about 30K lines of code, covering nearly 100% of new code (~98%) and about 50-75% of the existing codebase. It's also typical to see a 1:1 or 1.5:1 ratio of unit test code to production code in a team practicing TDD.
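
For illustration, one common way to remove a database dependency is to hide it behind an interface and hand the class a fake during testing. All the names below (ICustomerRepository, CustomerService) are hypothetical, a sketch of the pattern rather than code from that project:

    using NUnit.Framework;

    // The external dependency is hidden behind an interface...
    public interface ICustomerRepository
    {
        string GetName(int customerId);
    }

    // ...so the class under test never talks to the database directly.
    public class CustomerService
    {
        private readonly ICustomerRepository _repository;

        public CustomerService(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public string Greet(int customerId)
        {
            return "Hello, " + _repository.GetName(customerId);
        }
    }

    // A hand-rolled fake keeps the test isolated, repeatable and fast.
    public class FakeCustomerRepository : ICustomerRepository
    {
        public string GetName(int customerId) { return "Alice"; }
    }

    [TestFixture]
    public class CustomerServiceTests
    {
        [Test]
        public void Greet_UsesTheCustomersName()
        {
            CustomerService service = new CustomerService(new FakeCustomerRepository());
            Assert.AreEqual("Hello, Alice", service.Greet(42));
        }
    }

A mocking framework (Rhino Mocks, for example) could generate the fake, but a hand-rolled one keeps the sketch self-contained.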

Purpose

Unit tests should test only one thing and have only one reason to fail. Since programmers are human and make mistakes on a daily basis, unit tests are a developer's backup, verifying that small, incremental, individual pieces of functionality are implemented the way the developer intended. Unit tests strive to answer the question "does my code do what I think it's supposed to do?" In a future post, I'll dive more into unit testing with examples.
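
In the meantime, here's a quick taste of the "one reason to fail" guideline, using a hypothetical Invoice class: rather than one test asserting both the subtotal and the tax, each behavior gets its own test, so a failure points straight at the broken behavior.

    using NUnit.Framework;

    // Hypothetical class under test, included so the sketch compiles.
    public class Invoice
    {
        private readonly decimal[] _lines;

        public Invoice(decimal[] lines)
        {
            _lines = lines;
        }

        public decimal Subtotal
        {
            get
            {
                decimal total = 0m;
                foreach (decimal line in _lines)
                    total += line;
                return total;
            }
        }

        public decimal Tax
        {
            get { return Subtotal * 0.10m; }
        }
    }

    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void Subtotal_SumsLineItems()
        {
            Invoice invoice = new Invoice(new decimal[] { 10m, 20m });
            Assert.AreEqual(30m, invoice.Subtotal);
        }

        [Test]
        public void Tax_IsTenPercentOfSubtotal()
        {
            Invoice invoice = new Invoice(new decimal[] { 10m, 20m });
            Assert.AreEqual(3m, invoice.Tax);
        }
    }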

Integration tests

I've written a suite of unit tests that verify the functionality I was trying to create. I've abstracted external dependencies away so I can make sure that my tests have only one reason to fail. But in production, the web service does exist. The database is there. Emails are sent out, queue messages sent, files created and read. Integration tests put the different pieces together to test on a macro scale. Integration tests can be defined as:

  • Tests for dependencies between objects and external resources
  • Repeatable
  • Order independent
  • Written by an independent source (QA)

In other words, integration tests are more end-to-end, determining whether the pieces of the entire system function correctly together.

Scope

As you can see, integration tests sit at the "macro" end of the testing scale. These tests can be written in a unit testing framework, but they're also written in other frameworks such as FIT or WATIR. These are the tests that make sure the database is written to correctly, that an email was sent, and that an MSMQ message was written correctly.
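
As a sketch of the database case (the connection string and Customers table are invented for illustration; a real suite would typically exercise the application's data access layer), an integration test talks to the real database rather than a fake:

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerDatabaseIntegrationTests
    {
        // Hypothetical connection string pointing at a dedicated test database.
        private const string ConnectionString =
            "Server=localhost;Database=AppTest;Integrated Security=true";

        [Test]
        public void SavedCustomerCanBeReadBack()
        {
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                connection.Open();

                // Start from a known state so the test stays repeatable,
                // even if a previous run failed partway through.
                new SqlCommand("DELETE FROM Customers WHERE Id = 42", connection)
                    .ExecuteNonQuery();

                // Write through the real database...
                new SqlCommand(
                    "INSERT INTO Customers (Id, Name) VALUES (42, 'Alice')",
                    connection).ExecuteNonQuery();

                // ...and read it back to prove the round trip works.
                string name = (string)new SqlCommand(
                    "SELECT Name FROM Customers WHERE Id = 42",
                    connection).ExecuteScalar();

                Assert.AreEqual("Alice", name);
            }
        }
    }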

Purpose

In the real world, we have external dependencies and we should test them. Integration tests verify that the code functions correctly in the presence of these external dependencies. We can test if a single module works with the external dependencies present, or if a group of modules works together correctly. Either way, integration tests are about putting the pieces together and determining if they fit.

Functional tests

We've developed some software and have tests that both isolate our external dependencies (unit tests) and embrace them (integration tests). But we still have no guarantee that we've written the software the business demanded. Having unit and integration tests is fine and dandy, but if our software doesn't meet business requirements, it's all been a huge waste of time and money. Functional tests (a sketch follows the list) are:

  • Developed by, or with the direct assistance of, the business owners
  • Focused on verifying business requirements
  • Written in a very high-level language such as a DSL
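
To give a flavor of that last point, here's a hypothetical sketch of a tiny fluent DSL in C#. Every name below is invented; the stubbed scenario class stands in for code that would drive the real application, but the goal is a test a business owner can read almost like the requirement itself:

    using NUnit.Framework;

    // A tiny, invented fluent DSL over the application's checkout flow.
    public class CheckoutScenario
    {
        private bool _termsShown;

        public CheckoutScenario GivenACartContaining(string product)
        {
            // In a real suite, this would drive the UI or service layer.
            return this;
        }

        public CheckoutScenario WhenTheCustomerChecksOut()
        {
            _termsShown = true; // Stubbed; the real test would observe the app.
            return this;
        }

        public void ThenTheTermsAndConditionsAreDisplayed()
        {
            Assert.IsTrue(_termsShown, "Terms and conditions were not displayed.");
        }
    }

    [TestFixture]
    public class CheckoutFunctionalTests
    {
        [Test]
        public void TermsAndConditionsAreShownAtCheckout()
        {
            new CheckoutScenario()
                .GivenACartContaining("The Pragmatic Programmer")
                .WhenTheCustomerChecksOut()
                .ThenTheTermsAndConditionsAreDisplayed();
        }
    }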

Scope

Functional tests are as end-to-end as tests come. These tests are performed on the topmost layers of an n-tier architecture, as these are the layers seen by the business or the customers. These tests are often the slowest, and can be a mix of automated and manual testing. These tests can also be used to sign off on project completion.

Purpose

Business has a set of requirements the software must meet, and functional tests are how we verify our software meets those requirements. Since requirements come in many forms, functional tests can be any kind of test that verifies those requirements, from "does the website look good" to "when I check out, are the terms and conditions displayed".

Looking back, looking forward

I'm sure there are more types of testing we could look at, including user acceptance or user interface testing. These testing types go outside the scope of verification and validation, so I left them out of this post.

Automating your tests will give you insight into code coverage and other code metrics, especially if the tests are part of a nightly or continuous integration build. As the scope of tests grows from unit to functional, the number of bugs discovered should decrease as well. It's much easier to fix a bug caught by a unit test than one caught by a functional test because of the scope of each type: if I have fewer places to look, I've cut the time to find and fix the bug dramatically. It isn't enough just to deliver functionality to the business; we also need to create a clean and maintainable codebase, so a solid suite of tests is absolutely critical for enabling changes for future business requirements. And if they were automated, that would be really cool too :).
