Friday, August 31, 2007

Legacy code testing techniques: subclass and override non-virtual members

One of the core techniques in Michael Feathers' Working Effectively With Legacy Code is the "Subclass and Override Method" technique.  Basically, in the context of a test, we can subclass a dependency and override behavior on a method or property to nullify or alter its behavior as needed.  This is especially useful if we can't extract a dependency outside of a class under test.  Mocking frameworks such as Rhino.Mocks allow me to verify interactions, or simply put in some canned responses for dependencies.

Since we're dealing with legacy code, paying down the entire technical debt at once isn't an option.  We have to make micro-payments by introducing seams into our codebase.  It might make the code slightly uglier at first, but as Feathers notes in his book, sometimes surgery leaves scars.

Some problem code

I need to override a method to provide an alternate, canned value.  However, in C#, members are not virtual by default.  Members have to be explicitly marked "virtual"; otherwise, subclassed members can only shadow base members.  For instance, suppose we have classes Foo and Bar:

public class Foo : Bar
{
    public decimal CalculateDiscount()
    {
        return CalculateTotal() * 0.1M;
    }
}

public class Bar
{
    private int itemId;

    public decimal CalculateTotal()
    {
        // Hit database, webservices, etc.
        return PricingDatabase.GetPrice(itemId);
    }
}

I'm trying to test a class that uses the Foo.CalculateDiscount method to figure out which shipping methods should be available.  It uses the Foo discount to make that decision.  However, CalculateDiscount calls a base class method (CalculateTotal), which then goes and hits the database!  It's not much of a unit test when it hits the database (or a web service, or HttpContext, etc.).  Worse, I don't have access to the Bar class; it's in another library that we don't own.  Here's the test I'm trying to write:

[Test]
public void Should_add_shipping_option_when_discount_is_lower_than_50()
{
    MockRepository mocks = new MockRepository();

    Foo foo = mocks.CreateMock<Foo>();

    using (mocks.Record())
    {
        Expect.Call(foo.CalculateTotal()).Return(100.0M);
    }

    using (mocks.Playback())
    {
        string[] shippingMethods = ShippingService.FindShippingMethods(foo);

        Assert.AreEqual("OneDay", shippingMethods[0]);
    }
}

I could try to remove the dependency on the Foo class somehow.  But there's a lot of information that my shipping method needs, so I can't just pass in individual parameters for all of the Foo data.  The Foo class is actually quite large, so trying to extract an interface wouldn't really help either.  In any case, I just want to override the behavior of the "CalculateTotal" call, and since it's not marked "virtual", I can't override the behavior directly.  For example, this won't work in my test:

public class FooStub : Foo
{
    public new decimal CalculateTotal()
    {
        return 100.0m;
    }
}

The shipping service knows about "Foo", not "FooStub".  Since FooStub.CalculateTotal shadows the Bar method, its CalculateTotal only gets called when I have a variable of type FooStub.  Because the shipping service works with Foo, it will use the Bar CalculateTotal method, even if I pass in an instance of FooStub.  Shadowing can cause weird behavior like this, so I try to avoid it as much as possible.
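To see why shadowing behaves this way, here's a minimal, self-contained illustration (the Base/Derived names are hypothetical, not part of the Foo/Bar example): the "new" modifier binds the call at compile time based on the variable's static type, so the shadow is bypassed whenever the object is referenced through the base type.

```csharp
using System;

public class Base
{
    public string Describe() { return "Base"; }
}

public class Derived : Base
{
    // "new" shadows rather than overrides: the call target is chosen
    // at compile time from the variable's static type.
    public new string Describe() { return "Derived"; }
}

public class ShadowingDemo
{
    public static void Main()
    {
        Derived d = new Derived();
        Base b = d;                      // same object, base-typed variable

        Console.WriteLine(d.Describe()); // prints "Derived"
        Console.WriteLine(b.Describe()); // prints "Base" -- the shadow is ignored
    }
}
```

This is exactly the FooStub situation: the shipping service holds a Foo-typed reference, so the shadowed method never runs.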

Subclass and override

So subclassing and overriding, one of the main functions of mocking frameworks, won't work at the test level.  I need to introduce the override between the Foo and Bar classes.  Why not create a new BarSeam class between Foo and Bar?

public class Foo : BarSeam
{
    public decimal CalculateDiscount()
    {
        return CalculateTotal() * 0.1M;
    }
}

public class BarSeam : Bar
{
    public virtual new decimal CalculateTotal()
    {
        return base.CalculateTotal();
    }
}

public class Bar
{
    private int itemId;

    public decimal CalculateTotal()
    {
        // Hit database, webservices, etc.
        return PricingDatabase.GetPrice(itemId);
    }
}

The BarSeam class now sits between Foo and Bar in the inheritance hierarchy.  BarSeam shadows the Bar.CalculateTotal, with one key addition, the "virtual" keyword.  By making BarSeam.CalculateTotal virtual, I can now subclass Foo and override CalculateTotal, put in a canned response, and not worry about hitting the database.

Yes, BarSeam is ugly, but I want to bring ugly to the front of the class to make sure it doesn't get swept under the rug.  Since all client code either references Foo or Bar, no client code will be affected as BarSeam's default implementation merely calls the base method.  Think of it as Adapter, but instead of incompatible interfaces, I'm dealing with a non-extensible interface.
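With the seam in place, a test can finally subclass and override.  Here's a sketch of what the test-side subclass might look like; the FooTestDouble and SeamDemo names are hypothetical, the article's classes are restated so the snippet compiles on its own, and Bar.CalculateTotal throws to stand in for the unwanted database hit:

```csharp
using System;

// Minimal restatement of the article's hierarchy.
public class Bar
{
    public decimal CalculateTotal()
    {
        throw new InvalidOperationException("Would hit the database!");
    }
}

public class BarSeam : Bar
{
    // The seam: shadows Bar.CalculateTotal, but adds "virtual".
    public new virtual decimal CalculateTotal()
    {
        return base.CalculateTotal();
    }
}

public class Foo : BarSeam
{
    public decimal CalculateDiscount()
    {
        return CalculateTotal() * 0.1M;
    }
}

// The test double: because BarSeam.CalculateTotal is virtual,
// overriding it replaces the database call with a canned value.
public class FooTestDouble : Foo
{
    public override decimal CalculateTotal()
    {
        return 100.0M;
    }
}

public class SeamDemo
{
    public static void Main()
    {
        Foo foo = new FooTestDouble();

        // CalculateDiscount dispatches through the virtual seam to the
        // override, so no database code ever executes.
        Console.WriteLine(foo.CalculateDiscount()); // prints 10.00
    }
}
```

Note that the shipping service still only needs a Foo; unlike the shadowing attempt, the override is honored through the Foo-typed reference.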

Conclusion

When working with legacy code, it's not feasible (or even wise) to try and rewrite the application.  By making these micro-refactorings and introducing seams, we're able to identify places in need of larger refactorings, as well as meeting our original goal of getting our legacy code under test.

Wednesday, August 29, 2007

Template Delegate Pattern

I've had to use this pattern a few times, most recently in Behave#.  It's similar to the Template Method pattern, but doesn't resort to using subclassing for using a template method.  Instead, a delegate is passed to the Template Method to substitute different logic for portions of the algorithm.

The pattern

Two methods in unrelated classes perform similar general algorithms, yet some parts of the algorithm are different.

Generalize the algorithm by extracting their steps into a new class, then extract methods for the specialized parts into delegates to be passed in to the new class.

Motivation

Helper methods tend to clutter a class with responsibilities orthogonal to its true purpose.  Additionally, several methods in one class might need to perform the same algorithm in slightly different ways.  While Template Method is concerned with removing behavioral duplication, algorithmic duplication can be just as rampant. 

Removing algorithmic duplication through Template Method can make the code more obtuse, as often the abstraction of the template class doesn't add any meaning to the system.  Additionally, sometimes duplicate algorithms are in completely separate classes, and it would be impossible or unwise to try to extract a common subclass from the two classes.  We can remove algorithmic duplication by consolidating the algorithm into its own method or class, and pass in parts of the algorithm that might vary.

Mechanics

  • Use Extract Method to separate the purely algorithmic section of each method from any surrounding behavioral code.
  • Compile and test.
  • Decompose the methods in each algorithm so that all of the steps in the algorithm are identical or completely different.
  • For each step in the algorithm that is completely different, use Extract Method to pull out the varying logic.  Name these extracted methods the same.
  • Compile and test.
  • Optionally, use Introduce Parameter Object for each extracted algorithm step whose parameter count doesn't match that of its counterpart in the other algorithm.  Compile and test.
  • If there is not an existing delegate type that matches the extracted variant methods of the algorithm, create a delegate type that matches both extracted methods.
  • Use Add Parameter and add a new parameter of the delegate type created earlier, and modify the algorithm to use the delegate method to execute the varying logic.  Repeat for both algorithm methods.  Each algorithm method should look exactly the same at this point, except for parameter and return types.
  • Compile and test.
  • Use Extract Class and Move Method to move the two algorithm methods to a new, generalized (and optionally generic) class.  Modify the calling class to use the new class and methods.
  • Compile and test.

Example

Suppose I find two methods in different classes in a large codebase that do recursive searches.  One finds a Control based on an ID, and the other searches for XmlNodes based on an attribute value:

private Control FindControl(ControlCollection controls)
{
    foreach (Control control in controls)
    {
        if (control.ID == txtControlID.Text)
            return control;

        Control found = FindControl(control.Controls);

        if (found != null)
            return found;
    }

    return null;
}

private XmlNode FindElement(XmlNodeList nodes)
{
    foreach (XmlNode node in nodes)
    {
        if (node.Attributes["ID"].Value == "4564")
            return node;

        XmlNode found = FindElement(node.ChildNodes);

        if (found != null)
            return found;
    }

    return null;
}

Both of these methods perform the exact same logic, a recursive search, but the details are slightly different.

In this example, the first step is already complete and each method contains only the algorithm I'm interested in.  From looking at these methods, it looks like there are 3 distinct parts: the loop, the matching logic, and retrieving the children of the current item.  The two differing parts I see in the algorithm are:

  • Match
  • Get the children based on the current item in the loop

I'll apply Extract Method to pull out the varying logic and name these methods the same.  Here are the extracted methods:

private bool IsMatch(Control control)
{
    return control.ID == txtControlID.Text;
}

private bool IsMatch(XmlNode node)
{
    return node.Attributes["ID"].Value == "4564";
}

private ControlCollection GetChildren(Control control)
{
    return control.Controls;
}

private XmlNodeList GetChildren(XmlNode node)
{
    return node.ChildNodes;
}

Note that the name of each method is the same, as is the number of parameters.  Now the original algorithm methods call these extracted varying methods:

private Control FindControl(ControlCollection controls)
{
    foreach (Control control in controls)
    {
        if (IsMatch(control))
            return control;

        Control found = FindControl(GetChildren(control));

        if (found != null)
            return found;
    }

    return null;
}

private XmlNode FindElement(XmlNodeList nodes)
{
    foreach (XmlNode node in nodes)
    {
        if (IsMatch(node))
            return node;

        XmlNode found = FindElement(GetChildren(node));

        if (found != null)
            return found;
    }

    return null;
}

These algorithm methods are starting to look very similar.  Next, I need to find a delegate type to represent the varying methods, namely "IsMatch" and "GetChildren".  Since I'm working with Visual Studio 2008, some good candidates already exist with the Func delegate types.  I like these delegate types as they are generic and may lend to some better algorithm definitions in the future, so I'll stick with Func.  Here's the FindControl method after I use Add Parameter to pass in the varying algorithm logic:

private Control FindControl(IEnumerable<Control> controls, 
    Func<Control, bool> predicate,
    Func<Control, IEnumerable<Control>> childrenSelector)
{
    foreach (Control control in controls)
    {
        if (predicate(control))
            return control;

        Control found = FindControl(childrenSelector(control), predicate, childrenSelector);

        if (found != null)
            return found;
    }

    return null;
}

I changed the type of the "controls" parameter from ControlCollection to IEnumerable&lt;Control&gt;, to reduce the number of types seen in the algorithm method.  I change the client code of this algorithm to pass in the new parameters, then compile and test:

private void SetLabelText()
{
    string text = txtLabelText.Text;
    string controlID = txtControlID.Text;

    if (string.IsNullOrEmpty(text) || string.IsNullOrEmpty(controlID))
        return;

    // ControlCollection only implements the non-generic IEnumerable, so cast it
    Control control = FindControl(page.Controls.Cast<Control>(), IsMatch, GetChildren);
}

Note that the "IsMatch" and "GetChildren" are method names in this class, so I'm passing the matching and children algorithms to the FindControl method for execution when needed.  Finally, I use Extract Class and Move Method to move the two virtually identical algorithm methods to a single method on a new class.  Here's the final result, with some minor changes to use extension methods:

public static T RecursiveSearch<T>(this IEnumerable<T> items,
    Func<T, bool> predicate,
    Func<T, IEnumerable<T>> childrenSelector)
{
    foreach (T item in items)
    {
        if (predicate(item))
            return item;

        T found = RecursiveSearch(childrenSelector(item), predicate, childrenSelector);

        if (!Equals(found, default(T)))
            return found;
    }

    return default(T);
}
 

I've made the method generic, as the only final variant between the extracted algorithm methods was the type of the item I was finding.  By making the method generic, the return type is strongly typed to the item I'm searching for.  The final client code doesn't look too much different from earlier:

private void SetLabelText()
{
    string text = txtLabelText.Text;
    string controlID = txtControlID.Text;

    if (string.IsNullOrEmpty(text) || string.IsNullOrEmpty(controlID))
        return;

    Control control = page.Controls.Cast<Control>().RecursiveSearch(IsMatch, GetChildren);
}

The client code for the "FindElement" looks exactly the same, and now there is no duplication of the recursive search logic.  I've extracted the varying logic into methods which can be passed in as delegates to the new, generic extracted algorithm.  Since I'm using C# 3.0, I can go as far as using lambda expressions instead of the extracted "IsMatch" and "GetChildren" methods.
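Here's a sketch of that lambda version.  The RecursiveSearch method is restated so the snippet compiles on its own, and the Node type is a hypothetical stand-in for Control or XmlNode; the lambdas take the place of the extracted "IsMatch" and "GetChildren" methods entirely:

```csharp
using System;
using System.Collections.Generic;

public static class SearchExtensions
{
    // The article's generalized recursive search.
    public static T RecursiveSearch<T>(this IEnumerable<T> items,
        Func<T, bool> predicate,
        Func<T, IEnumerable<T>> childrenSelector)
    {
        foreach (T item in items)
        {
            if (predicate(item))
                return item;

            T found = RecursiveSearch(childrenSelector(item), predicate, childrenSelector);

            if (!Equals(found, default(T)))
                return found;
        }

        return default(T);
    }
}

// Hypothetical tree node standing in for Control/XmlNode.
public class Node
{
    public string Id;
    public List<Node> Children = new List<Node>();
}

public class LambdaDemo
{
    public static void Main()
    {
        var root = new Node { Id = "root" };
        root.Children.Add(new Node { Id = "target" });

        // Lambdas supply the match and children-selection logic inline.
        Node result = new[] { root }.RecursiveSearch(
            n => n.Id == "target",
            n => n.Children);

        Console.WriteLine(result.Id); // prints "target"
    }
}
```

The trade-off is discoverability: named methods like IsMatch document intent, while lambdas keep one-off logic close to the call site.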

Conclusion

The Template Delegate pattern is used quite extensively with the new Enumerable extension methods (such as the Where method).  With delegate creation in C# 3.0 becoming much simpler thanks to lambda expressions, it's becoming easier to compose generalized algorithms where the varying portions are passed in as delegate parameters.  By using the Template Delegate pattern, I can collapse a pile of near-duplicate algorithm methods into a single generic method that serves the needs of current and future client code.

Monday, August 27, 2007

Agile references for PMs

By "PM", I'm referring to Project Managers.  Adopting Agile can be a scary proposition for those entrenched in waterfall processes.  I have a lot of sympathy for PMs whose dev team decides to switch to Agile out from under them.  PMs need not be left behind, and in fact, have a very valuable role in Agile development, just not what they might be used to.  I see it as a re-education on the reality of software development, that Gantt charts don't define reality, they distort and mislead those trying to make decisions based on reality.  Here are a list of references that will help those on the PM side try to make sense of those crazy developers and their Agile ideas.

Eliminating the PM role is ultimately a mistake for a dev team moving to Agile, as someone eventually has to answer the $$$ questions.  Putting the onus on the development team/organization to determine costs, staffing, direction, etc., can drag their focus away from delivering business value.  Not having a PM on your team (or reducing the role of the PM) is the quick-and-dirty fix to a dev team's Gantt chart nightmares, but eliminating that role won't address the business needs of having the role in the first place.

Authoring stories with BDD using Behave# and NSpec

A question came up on the Behave# CodePlex site, asking about the intent of Behave# and where it fits with NSpec.  BDD is all about using TDD to create specifications, and NSpec bridges the gap with a specification-friendly API.  For me, the question of when to write Behave# stories and pure NSpec specifications is fairly straightforward.

The stories and scenarios created by the business owner should be captured in Behave# stories and scenarios.  The specifications of everything else should be captured by NSpec (or NUnit.Spec, or NUnit constraints).

But that's not stopping us from using NSpec for the "Then" fragment of a scenario.

Anatomy of a scenario

A scenario is composed of three distinct sections: Context, Event, and Outcome.  The scenario description follows the pattern "Given <context>, When <event>, Then <outcome>".  A sample scenario (and story) could be:

Story: Transfer to cash account

As a savings account holder
I want to transfer money from my savings account
So that I can get cash easily from an ATM

Scenario: Savings account is in credit
	Given my savings account balance is $100
		And my cash account balance is $10
	When I transfer to cash account $20
	Then my savings account balance should be $80
		And my cash account balance should be $30

My outcomes are the "Then" fragment of the scenario, but could also be interpreted as specifications for an account.

Using NSpec with Behave#

So how can we combine NSpec with Behave#?  Here's the story above written with NSpec and Behave#:

Account savings = null;
Account cash = null;

Story transferStory = new Story("Transfer to cash account");

transferStory
    .AsA("savings account holder")
    .IWant("to transfer money from my savings account")
    .SoThat("I can get cash easily from an ATM");

transferStory
    .WithScenario("Savings account is in credit")

        .Given("my savings account balance is", 100, 
                delegate(int accountBalance) { savings = new Account(accountBalance); })
            .And("my cash account balance is", 10, 
                delegate(int accountBalance) { cash = new Account(accountBalance); })
        .When("I transfer to cash account", 20, 
                delegate(int transferAmount) { savings.TransferTo(cash, transferAmount); })
        .Then("my savings account balance should be", 80, 
                delegate(int expectedBalance) { Specify.That(savings.Balance).ShouldEqual(expectedBalance); })
            .And("my cash account balance should be", 30,
                delegate(int expectedBalance) { Specify.That(cash.Balance).ShouldEqual(expectedBalance); });

Note that in the "Then" fragment of the Scenario, I'm using NSpec to specify the outcomes.  By using NSpec and Behave# together to author business owner stories in to executable code, I'm able to combine both the story/scenario side of BDD with the specification side.

Monday, August 20, 2007

Continuous Integration resources for Team Build

Team Build 2005 did not support Continuous Integration out of the box.  Lots of add-ins and tools have been released since Team Build shipped to fill that gap.  Here's an admittedly incomplete list of options for doing Continuous Integration with Team Build:

I've personally used CruiseControl.NET and the CI Sample on projects, and tested out several of the others.  I'd probably stick with CruiseControl.NET for now, as it has the easiest access to build status and reports through its dashboard and tray application.  It's also been around for many years and many versions, so it's a solid product that can ease the transition to Team Build if your team is already using CC.NET.

Friday, August 17, 2007

Pending scenarios in Behave#

When first authoring stories and scenarios in Behave#, the implementation to support the scenario probably doesn't exist yet.  Part 4 of Nelson Montalvo's series of posts on Behave# illustrates this quite nicely.  The problem is that you don't really want test failures because you haven't implemented a scenario yet.  Rbehave has the ability to create "pending" scenarios, which won't fail if the implementation doesn't exist.

Misleading failures

For example, here's a scenario that doesn't have an implementation yet:

[Test]
public void Withdraw_from_savings_account_pending()
{

    Account savings = null;
    Account cash = null;

    Story transferStory = new Story("Transfer to cash account");

    transferStory
        .AsA("savings account holder")
        .IWant("to transfer money from my savings account")
        .SoThat("I can get cash easily from an ATM");

    transferStory
        .WithScenario("Savings account is in credit")

            .Given("my savings account balance is", -20)
                .And("my cash account balance is", 10)
            .When("I transfer to cash account", 20)
            .Then("my savings account balance should be", -20)
                .And("my cash account balance should be", 10);
}

When I execute this story, I get a failed test and a misleading exception: 

Story: Transfer to cash account

Narrative:
	As a savings account holder
	I want to transfer money from my savings account
	So that I can get cash easily from an ATM

	Scenario 1: Savings account is in credit
TestCase 'BehaveSharp.Specs.AccountSpecs.Withdraw_from_savings_account_pending'
failed: BehaveSharp.ActionMissingException : Action missing for action 'my savings account balance is'.
	C:\dev\BehaveSharp\trunk\src\BehaveSharp.Examples\AccountSpecs.cs(70,0): at BehaveSharp.Specs.AccountSpecs.Withdraw_from_savings_account_pending()

Not a very helpful message, especially if I'm executing a suite of stories and scenarios.

Pending scenarios

If a scenario has a pending implementation, I can now add a "pending" message to the scenario, so that I don't get any error messages:

[Test]
public void Withdraw_from_savings_account_pending()
{

    Account savings = null;
    Account cash = null;

    Story transferStory = new Story("Transfer to cash account");

    transferStory
        .AsA("savings account holder")
        .IWant("to transfer money from my savings account")
        .SoThat("I can get cash easily from an ATM");

    transferStory
        .WithScenario("Savings account is in credit")
            .Pending("ability to withdraw from accounts")

            .Given("my savings account balance is", -20)
                .And("my cash account balance is", 10)
            .When("I transfer to cash account", 20)
            .Then("my savings account balance should be", -20)
                .And("my cash account balance should be", 10);
}

Note the "Pending" method call after "WithScenario".  Think of "Pending" similar to the "Ignore" attribute in NUnit.  The story is there, the scenario is written, but the code to support the scenario doesn't exist yet.  Here's the output of the execution with "Pending":

Story: Transfer to cash account

Narrative:
	As a savings account holder
	I want to transfer money from my savings account
	So that I can get cash easily from an ATM

	Scenario 1: Savings account is in credit
		Pending: ability to withdraw from accounts

The "Pending" call stops execution of the scenario (so you can spot pending scenarios more easily) and outputs the pending message.  Without "Pending", you get an exception and your test fails.  With "Pending", you can flag your unimplemented scenarios without worrying about breaking the build when you check in.

Tuesday, August 14, 2007

Some pairing tips

One of the stranger things I noticed about pair programming was that I felt more efficient working with a pair than working on a task alone.  Mostly this was because I had someone to help out when I got slowed down, and because interruptions carried more weight when someone was sitting next to me, waiting for the interruption to "go away".

I know it's a shocker, but interruptions can have a huge impact on productivity, as we can't multitask, and too much email slows us down and stresses us out.  In addition to some basic pairing etiquette rules, here are some general pairing tips to help improve your pairing productivity:

  • Close email (both webmail AND Outlook)
  • Close any RSS readers/aggregators
  • Close any browser windows irrelevant to current task (i.e., that eBay item you're sniping)
  • Set any IM clients to "Away" or with a message of "Pairing"
  • Set your phone to "vibrate"

By forcing ourselves to minimize distractions, we also had to set aside parts of the day to deal with them.  It wasn't that we ignored emails and such; it's that we only allowed ourselves to address those distractions outside of pairing sessions.  This let us get closer to the 5-6 ideal engineering hours per day that we used for capacity calculations.

Thursday, August 9, 2007

Importance of collocation

Jeremy Miller mentioned that Fred George is blogging now, and that he'd be worth reading.  Wow, Jeremy wasn't lying.  One of the first posts really resonated with me, on collocation.  I've now had the opportunity to work in a variety of different office types:

  • Individual offices
  • Shared room, facing walls
  • Shared room, facing each other, away from walls
  • Cubes

It's something I noticed but couldn't quite quantify or measure at the time, but collocation had a drastic positive effect on our team communication.  Basically, if I have to stand up and walk to talk to someone, it's not going to happen very often.  If I don't even need to turn my head to have a conversation, communication happens all the time.

Since most inefficiencies I encounter during work are because of a lack of communication, it follows that we should optimize our environment for communication.

A rough measurement

So how many conversations did I have per day given each office type?  This is based purely on my recollection, so obviously it's skewed and a bit off, but here's what I remember:

Office type                                        Conversations per day
Individual office                                  1-3
Shared room, facing walls                          20-25
Shared room, facing each other, away from walls    50-60
Cubes                                              5-10

When it came to the shared rooms, I had to estimate in conversations per hour, because communication happens so often.  When I was pairing, it was a continuous conversation, so I'd have to start measuring in length instead of number.  Even when I wasn't pairing, I would get involved with maybe a dozen conversations per hour.

Communication is also something that training can only take so far.  If the training is working against what the environment naturally encourages, the success rate isn't going to be too high.

Tweaking the environment

When I was in a shared room, facing walls, our setup looked something like this: (apologies for the crude drawing)

We had several issues with this layout:

  • People talked to each other without looking anyone in the eye (losing valuable visual communication)
  • No wall space for whiteboards 
  • People usually only talked to those in their immediate vicinity
  • Those on the left never heard those on the other side of the room

I should point out that our room wasn't this small, it does look a bit cramped in there in my drawings.  To address these issues, we played around with our desks until we arrived at our final layout:

We found several advantages with this layout:

  • Everyone was facing each other
  • Everyone could hear all conversations and jump in if needed
  • Wall space was freed up for whiteboards, to put up status, do some modeling, etc.
  • Whiteboards were visible to everyone in the group
  • We became a more cohesive, trusting team

The biggest impact was the last point, since communication builds trust. With trust in place, we can achieve true shared responsibility, and finger pointing was reduced to almost nil.  If finger pointing did happen, the entire team knew about it and recognized the problem, and the issue would come up and get resolved in our next daily stand-up.

As communication is one of the biggest Agile values, collocation is absolutely essential in building a smooth-running Agile team.  Without collocation, the Agile team can become fractured and distant, and slip back into the siloed, deferred-responsibility paradigm that waterfall processes enforce.

Tuesday, August 7, 2007

Addressing some Behave# concerns

So Joe and I have received some initial feedback for Behave#.  Joe's already given a great intro into how to use Behave# and addressed Roy's specific questions.  I thought I'd address some of the common issues regarding Behave#:

  • Using string matching
  • Using anonymous delegates
  • One developer supports it

Using string matching

One common concern I've heard from a couple of sources now is that Behave# uses strings to match behavior for subsequent scenarios.  Something that might not be clear is how we match behavior: Scenarios are scoped to a Story.

That is, the "context" parameter of a "Given" scenario fragment can only be matched against other Scenarios within a single Story.  Here's an example:

[Test]
public void Withdraw_from_savings_account()
{

    Account savings = null;
    Account cash = null;

    Story transferStory = new Story("Transfer to cash account");

    transferStory
        .AsA("savings account holder")
        .IWant("to transfer money from my savings account")
        .SoThat("I can get cash easily from an ATM");

    transferStory
        .WithScenario("Savings account is in credit")
        .Given("my savings account balance is", 100,
               delegate(int accountBalance) { savings = new Account(accountBalance); })
        .And("my cash account balance is", 10,
             delegate(int accountBalance) { cash = new Account(accountBalance); })
        .When("I transfer to cash account", 20,
              delegate(int transferAmount) { savings.TransferTo(cash, transferAmount); })
        .Then("my savings account balance should be", 80,
              delegate(int expectedBalance) { Assert.AreEqual(expectedBalance, savings.Balance); })
        .And("my cash account balance should be", 30,
             delegate(int expectedBalance) { Assert.AreEqual(expectedBalance, cash.Balance); })

        .Given("my savings account balance is", 400)
        .And("my cash account balance is", 100)
        .When("I transfer to cash account", 100)
        .Then("my savings account balance should be", 300)
        .And("my cash account balance should be", 200);

    Story withdrawStory = new Story("Withdraw from savings account");

    withdrawStory
        .AsA("savings account holder")
        .IWant("to withdraw money from my savings account")
        .SoThat("I can pay my bills");

    withdrawStory
        .WithScenario("Savings account is in credit")
        .Given("my savings account balance is", 100); // This entry doesn't have a match!
        


}

In the "withdrawStory", even though the "Given" fragment string matches the "transferStory" "Given" fragments, the behavior will not match up.  That's because Scenarios belong to a Story, and the matching only happens within a given Story.

In DDD terms, the Aggregate Root is the Story, and the child Entities include the Scenarios.  We could match across stories, but that wouldn't adhere to DDD guidelines, and would result in much more complexity.

So matching issues only happen within one Story.  I don't know how many Scenarios you would need to write before running into issues, but I think we could follow some of Joe's suggestions and use some more intelligent matching algorithms.

Using anonymous delegates

Let me be the first to admit that anonymous delegates are clunky, difficult, and just plain ugly.  But keep in mind that Behave# only deals with delegates.  How the consuming test code creates these delegates does not matter to Behave#.  We have several options (asterisk next to C# 3.0 features):

Anonymous methods may not be very prevalent in C# 2.0 code, but there are still quite a few classes in the .NET Framework that use delegates as method parameters.  I ran the following code against the .NET 3.5 assemblies:

var types = from name in assemblyNames
            select Assembly.LoadWithPartialName(name) into a
            from c in a.GetTypes()
            where (c.IsClass || c.IsInterface) && c.IsPublic && !c.IsSubclassOf(typeof(Delegate))
            select c.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static) into methods
            from method in methods
            where method.GetParameters().Any(pi => pi.ParameterType.IsSubclassOf(typeof(Delegate)))
                && !method.Name.StartsWith("add_", StringComparison.OrdinalIgnoreCase)
                && !method.Name.StartsWith("remove_", StringComparison.OrdinalIgnoreCase)
            select new { TypeName = method.DeclaringType.FullName, MethodName = method.Name };

int methodCount = types.Count();
int typeCount = types.GroupBy(t => t.TypeName).Count();

Debug.WriteLine("Method count: " + methodCount.ToString());
Debug.WriteLine("Type count: " + typeCount.ToString());

And I found that there are 1019 methods with delegate parameters spread out over 155 types.  With the System.Linq.Enumerable extension methods in .NET 3.5, methods with delegate parameters will be used much more often.
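To illustrate how much lighter delegate creation becomes, here's the same filter written both ways, using List&lt;T&gt;.FindAll (which takes a Predicate&lt;T&gt; delegate) and the new Enumerable.Where extension method; the delegate bodies are identical, only the syntax differs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DelegateSyntaxDemo
{
    public static void Main()
    {
        List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // C# 2.0: anonymous method syntax -- workable, but noisy.
        List<int> evens20 = numbers.FindAll(delegate(int n) { return n % 2 == 0; });

        // C# 3.0: the same delegate as a lambda expression.
        List<int> evens30 = numbers.Where(n => n % 2 == 0).ToList();

        Console.WriteLine(evens20.Count); // prints 3
        Console.WriteLine(evens30.Count); // prints 3
    }
}
```

The same shortening applies to Behave# scenario fragments: the anonymous delegates shown earlier in this post can collapse to one-line lambdas under C# 3.0.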

Only one developer supports it

Well...not exactly true.  There are two developers, Joe and I, so that's a 100% improvement, right? :)

Wrapping it up

I really like Dan North's rbehave.  Behave# closely matches rbehave's usage and intent.  Are a lot of the same issues regarding Behave# also valid for rbehave?  Ruby lends itself well to BDD, especially when combining rbehave and rspec, with the elegance of dynamic typing and language features like blocks.

Behave# is still getting started, and we have some kinks to iron out, but I do think we're on the right track, following in Dan North's footsteps.

Thursday, August 2, 2007

Some Behave# news

Joe already announced this, but it doesn't hurt to get the word out some more, right?  We're going to combine our efforts into Behave#, a behavior-driven development (BDD) framework.  For more information about our framework, check out Joe's posts here and here, as well as my announcement.

I had already released a pre-alpha version of Behave# before Joe and I decided to combine our projects, but we'll soon be formalizing our efforts with a vision statement.  We may not have angels singing yet, but I hope to get there soon.  If you already have some features in mind that you'd like us to address, we'll be using the issue tracker in CodePlex for determining feature priority (though our votes override all others :) ).  I'm really looking forward to seeing how this all turns out.  Stay tuned!

Wednesday, August 1, 2007

Continuous Integration book now out

I just read from Martin Fowler's Bliki that a new book in his signature series is out, "Continuous Integration: Improving Software Quality and Reducing Risk".  If you're not familiar with Fowler's signature series, check out the books on that list:

That's quite an impressive list of books to be in one series.  Many of these books follow the "Duplex Book" pattern, where the book is split into two sections.  The first is a smaller section designed to be read cover-to-cover.  The next section(s) provide prescriptive guidance that can be read end-to-end, or in bits and pieces as necessary.

As an aside, I've felt that nothing impacts or enables success quite as much as continuous integration.  In my (admittedly limited) experience, CI seems to open the doors to other Agile practices like "Whole Team", unit testing and test-driven development, pair programming, and others.  CI is the low-hanging fruit that solves many obvious and common problems in development, while subtly introducing the team to several core Agile values, like communication and feedback.