Friday, September 28, 2007

Values over vendors

Many times when I start explaining Scrum or Agile for the first time, one of the first questions I get asked is "what tool do I use for <project function>?"  I fully understand the need to standardize on a platform such as .NET, Java, RoR, etc.  But how you deliver shouldn't be nearly as important to the business as what you deliver.

One of the most important aspects of true Agile teams is a self-organizing team, which determines how best to deliver and develop what the business needs.  If the business can't trust the team to choose how, how in the world can the business trust them to deliver anything at all?

A self-organizing team cultivates a set of values, principles, and practices to deliver software.  The values come first, and everything else is built on top of that.  If you start with tools and try to fit values around the tools, you'll find the team is only going to care about fulfilling the corporate policy and much less about the value behind the tool.  It's far easier to build policies on values than to derive values from policies, as policies built without values will seem arbitrary and pointless to the team.  If there is a need for a corporate policy, involve as many of the stakeholders as possible in those decisions.  I've seen several teams become disinterested and disheartened when a tool is forced upon them without their input or approval.

So instead of forcing the organization to use a certain build, bug tracking, or requirements gathering technology because of political or vendor-lock-in reasons, cultivate a set of values like "Simplicity", "Feedback", "Quality", etc.  Values define principles and practices, practices reinforce values, and practices can be followed through tools.  By focusing on the values instead of tools, we can broaden our scope of practices to reinforce our values.  Tools can enforce practices, but tools can never define values.  If we focus on tools, our value system becomes warped and twisted towards the tool vendor's values (selling their tool) or those pushing the tool internally (political, ego, or personal gain).

Wednesday, September 26, 2007

Driving and refining Ubiquitous Language

Scott has a nice succinct description of how Ubiquitous Language is defined.  In the original description, Eric defines Ubiquitous Language as:

A language structured around the domain model and used by all team members to connect all the activities of the team with the software.

But unless the model is created through user stories with a domain expert, the domain model doesn't necessarily represent the true concepts in the domain, but rather those concepts seen through the emerald-colored glasses of the developer.  Language drives the model, and the model can represent the language, but the model should never drive the language except through user stories.  If you find that the domain experts are using language that only exists because it exists in the software, you've made a wrong turn somewhere.

It can be easy to fall into the trap where the domain expert becomes so familiar with the software and the model that they start defining their language from the software.  That's where the skills of the developer and modeler become even more important.  The developer needs to recognize both gaps in the language and compromises by the domain expert to fit their ideas into what they already see in the system.

So how do we make sure the language doesn't get corrupted by the model or the system?  A few things need to be in place in your team:

  • Honesty
  • Trust
  • Communication

The domain expert needs to be honest about irregularities, the developers need to trust the domain expert to be honest, and lots of communication needs to happen to build honesty and trust.

Organizing with Solution Folders

I don't know how long it's been around, but I found a nice organization feature for VS Solutions: Solution Folders.  Solution Folders are logical (not physical) folders that you can use to organize projects and solution items.  You can also nest folders as deep as you like; just right-click the solution in the Solution Explorer and choose "Add->New Solution Folder".
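
Under the hood, a Solution Folder is just another project entry in the .sln file that uses the Solution Folder project-type GUID, with nesting recorded in a NestedProjects section.  Here's a rough sketch of what Visual Studio writes (the folder and project GUIDs below are made up for illustration):

```
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Common Projects", "Common Projects", "{AAAAAAAA-0000-0000-0000-000000000001}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyProject.Core", "MyProject.Core\MyProject.Core.csproj", "{BBBBBBBB-0000-0000-0000-000000000002}"
EndProject
Global
	GlobalSection(NestedProjects) = preSolution
		{BBBBBBBB-0000-0000-0000-000000000002} = {AAAAAAAA-0000-0000-0000-000000000001}
	EndGlobalSection
EndGlobal
```

The first GUID in each Project line is the project type; {2150E333-8FDC-42A3-9474-1A3956D46DE8} marks a Solution Folder, and the NestedProjects section maps each child GUID to its parent folder's GUID.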

It's easy to get carried away, but when I have more than 10 or so projects in a solution, I like to organize them by application.  Typically, it looks like this:

  • Common Projects
    • Tests
  • Windows App
    • Tests
  • Web App
    • Tests
  • Build

Here's an example with NBehave:

This solution was small enough that it didn't really need them, but in a solution with several dozen projects and many applications, solution folders can prevent quite a few headaches.

Monday, September 24, 2007

SharePoint 2007 Wiki - not a fan

Now that I've written a couple large-ish wiki entries on our team's SharePoint 2007 wiki, I can reasonably say I'm not too impressed with the wiki offerings from MOSS 2007.  A few complaints so far:

  • No apparent wiki markup language
  • No documentation, other than one stock page that comes with the wiki
  • RSS feed for wiki only covers new items, not modifications to existing items
  • Only two editing options, WYSIWYG or straight-up HTML
  • WYSIWYG editor not very efficient and produces ugly, non-compliant, deprecated HTML
  • No auto-linking, back-linking, free-linking, etc.

Basically, most of the features I had grown to love in FlexWiki are not present.  My biggest beef is probably the lack of a wiki markup language.  The HTML output by the WYSIWYG is pretty terrible, as it's mostly deprecated HTML tags like FONT.  The whole point of a wiki markup language is to make it easy for non-technical folks to add entries.  When using WYSIWYG, styles become corrupted quite fast, as fonts and such are managed at the HTML level.

For example, let's say you want to have the following entry in a wiki:

Current Build Architecture

Local Builds

  • Solution-driven builds
  • IIS vdirs and web site created manually
  • Packaging steps done manually through a C++ post-build events project
  • Environment configuration done manually

Server Builds

  • Project-driven builds
  • MSI deployment
  • Custom scheduler service for daily and deployment builds
  • Uses a NAnt and an MSBuild build script file
  • Build scripts manually deployed to build server
  • Build scripts create workspaces, get sources, compile, create MSI's, and deploy

In the MOSS 2007 wiki, the above structure is possible, but it took a lot of cajoling with the WYSIWYG editor to get it right.  I expected the header text to use "Hx" HTML tags and the produced HTML to look reasonable, so I could fine-tune it.  Instead, this is what I got:

<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2></FONT>&nbsp;</DIV>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=3><STRONG>Current Architecture</STRONG></FONT></DIV>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2></FONT>&nbsp;</DIV>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><STRONG><FONT size=2>Local Builds</FONT></STRONG></DIV>
<UL>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Solution-driven builds</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>IIS vdirs and web site created manually</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Packaging steps done manually through a C++ post-build events project</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Environment configuration done manually (i.e., SiteInfo guids)</FONT></DIV></LI></UL>
<P class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2><STRONG>Server Builds</STRONG></FONT></P>
<UL>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Project-driven builds</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>MSI deployment</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Dell Scheduler for daily and deployment builds</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Uses a NAnt and an MSBuild build script file</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Build scripts manually deployed to build server</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Build scripts create workspaces, get sources, compile, create MSI's, and deploy</FONT></DIV></LI></UL>

This is not a joke.  Non-XHTML-compliant markup in a product released in 2006 is unacceptable at this point.  Using deprecated HTML tags like "FONT" is even less acceptable, almost laughable.  I can't even read this markup; it's giving me a headache.

Here's the same markup in FlexWiki:

!Current Architecture

!!Local Builds
	* Solution-driven builds
	* IIS vdirs and web site created manually
	* Packaging steps done manually through a C++ post-build events project
	* Environment configuration done manually (i.e., SiteInfo guids)

!!Server Builds
	* Project-driven builds
	* MSI deployment
	* Dell Scheduler for daily and deployment builds
	* Uses a NAnt and an MSBuild build script file
	* Build scripts manually deployed to build server
	* Build scripts create workspaces, get sources, compile, create MSI's, and deploy

Now which markup is more maintainable?  Which one is easier to read?  Which one is easier to understand, edit, and change?

FlexWiki parses the markup to output HTML, and FlexWiki users don't have to worry about the HTML, only simple formatting rules.  MOSS 2007 wiki is a good first step in a wiki engine for SharePoint, but it's only a first step.  Be aware that its features pale in comparison to the more mature wiki engines, which have been around for many years and many versions.

Wizards and designers are useless to me if the code or markup they generate is not maintainable.  Also, why does tool consolidation mean I have to give up a host of features?  It seems that instead of doing a few things well, MOSS 2007 does two dozen things not so well.  I'd rather shoot for integration over consolidation and let individual tools shine.  Although our CMS, blog, and wiki tools are now consolidated across our team and org, I'm not sure what we gained by giving up the superior wiki and blog engines we used previously.

Friday, September 21, 2007

New favorite deployment method

Having run into so many crazy errors from installing MSI's, I've settled on a new favorite deployment mechanism: NAnt (or MSBuild).  Here's how it works:

  • Package application and build scripts into one ZIP file
  • Package any tools needed for the build (NAnt, NUnit, etc.)
  • Package a bootstrapper batch file that calls NAnt and your build file with the appropriate targets
  • Run the batch file
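
To make this concrete, here's a minimal sketch of what the install.build might look like.  All names, paths, and targets here are hypothetical; the unzip, delete, and property tasks are standard NAnt tasks:

```xml
<?xml version="1.0"?>
<project name="Product.Install" default="install">

  <!-- Hypothetical install location; override from the command line if needed -->
  <property name="install.dir" value="C:\Apps\Product" overwrite="false" />

  <!-- Unzip the packaged application into the install directory -->
  <target name="install">
    <unzip zipfile="Product.zip" todir="${install.dir}" />
  </target>

  <!-- Remove everything the install target laid down -->
  <target name="uninstall">
    <delete dir="${install.dir}" />
  </target>

</project>
```

The bootstrapper batch file then just calls the packaged copy of NAnt, something like: Tools\NAnt\NAnt.exe -buildfile:install.build install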

Pros

  • Install script can be the same script as run on a local dev box
  • Can package NAnt along with the application, so there's no need to install any tools on the target machine
  • More robust than XCopy deployment
  • Far fewer headaches than with MSI deployment
  • No need for proprietary scripting languages to do complex work, instead use open-source, standard tasks
    • No more Wise script or InstallScript

Cons

  • Not for commercial products
  • MSI's do take care of a lot of boilerplate tasks
  • MSI's have rollback and uninstall built in
    • Though with custom actions, still a pain
  • MSI's provide a nice, familiar installer UI

Here's what my final ZIP file looks like:

  • Product-Dist.zip
    • Product.zip (zipped up application)
    • Tools
      • NAnt
      • NUnit
    • install.build
    • Install.bat
    • Uninstall.bat
    • Go.bat

What's really cool is that the structure of the distribution zip file matches my local structure, so installing locally is the exact same as installing on a clean box.  Also, I can deploy NUnit and the tests, so I can run tests on deployed machines for some extra build verification.
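
The distribution zip itself can be produced by the same build script.  Here's a sketch using NAnt's zip task; the file names match the layout above, but treat the target name and patterns as placeholders:

```xml
<!-- Package the application, tools, and install scripts into one distributable zip -->
<target name="package">
  <zip zipfile="Product-Dist.zip">
    <fileset basedir=".">
      <include name="Product.zip" />
      <include name="Tools/**" />
      <include name="install.build" />
      <include name="*.bat" />
    </fileset>
  </zip>
</target>
```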

Tuesday, September 18, 2007

Agile cheat sheet

I already have the ReSharper cheat sheet and a Smells to Refactorings quick reference.  For those who have every cheat sheet taped to their walls, Dave Laribee created a nice Agile cheat sheet that includes both the manifesto and the principles:

Agile cheat sheet

Too often, introductions and conversations about Agile jump straight to specific processes and tools, so it's nice to have a little reminder of our team's goals visible to whoever walks by.  It also serves as a good introduction for anyone who stops by who isn't familiar with what Agile is all about, or has only heard of it through processes like Scrum or XP.  I just love the visible information these cheat sheets provide...

Monday, September 17, 2007

Weird CruiseControl.NET error

So I'm setting up CC.NET for what seems like the hundredth time, and out of the box I get a weird error from the Dashboard.  CC.NET has always been a smooth install and configuration process for me, so I was a little taken aback when I got the following error when trying to look at a build report:

The system cannot find the file specified.

That's all I see in the HTML, nothing else.  I double-checked the Virtual Directory settings, and it matches what CC.NET documentation recommends.  The build log the page is pointing at definitely exists.  I know it's something small (it always is).  Has anyone seen this error in the CC.NET Dashboard build report page before? 

Monday, September 10, 2007

Smells to refactorings quick reference guide

Two books I like to have close by are Martin Fowler's "Refactoring: Improving the Design of Existing Code" and Joshua Kerievsky's "Refactoring to Patterns".  It's not that productive to memorize all of the refactorings, and especially the patterns.  Applying patterns and refactorings before they are needed can make your code unnecessarily complex and less maintainable in the long run.  No one likes running into an Abstract Factory with only one concrete factory implementation. 

I need a context in which to apply patterns and refactorings, and code smells provide the best indication of those contexts.  All I really need to do is get good at recognizing smells, and then it becomes a cinch to look up possible refactorings.  I made a nice quick reference guide to help me match smells to refactorings:

Smells to Refactorings Quick Reference Guide

It's adapted from this guide, but I had a tough time reading that one when it was pasted on the wall next to me.  My version spans 3 pages in a larger font and a landscape layout, and I've had no issues reading it on the wall next to me.

The list of smells is much smaller and easier to remember than the patterns and the refactorings.  By recognizing smells instead of using patterns when they're not necessary, I can apply an evolutionary, JIT (just-in-time) approach to my design and architecture.  Instead of planning patterns beforehand, I can have confidence that I can recognize where they're needed, and apply them as necessary, exactly when they're necessary, at the last responsible moment.

Wednesday, September 5, 2007

Short path to failure

In three easy steps:

  1. Separate those making decisions from those affected by the decisions
  2. Remove accountability from the decision makers for the decisions made
  3. Rinse, repeat

After going to the inaugural Agile Austin group meeting last night, I'm convinced more than ever that a siloed organizational structure forces the steps listed above.

Tuesday, September 4, 2007

Authoring stories with NBehave 0.3

As Joe already mentioned, Behave# has merged with NBehave.  The merged NBehave will still be hosted on CodePlex, and the old project site will redirect to the new CodePlex NBehave site.  With the announcement of the merge comes the first release of our merging efforts (0.3), which you can find here.

The new release added quite a few features over the previous release, including:

  • Pending scenarios
  • Console runner
    • Decorate your tests with "Theme" and "Story" attributes
    • Scenario result totals
    • Dry run output

The major addition is the console runner feature.  One problem we always ran into was that as developers, we wanted to get pass/fail totals based on scenarios (not tests) so that we could have more meaningful totals that matched our original stories.  A single story could have several scenarios, but would only report as one "Pass" or "Fail" to NUnit.  Additionally, this is the first release that compares fairly evenly with the first release of rbehave.

So how can we use NBehave to accomplish this?  As always, we'll start with the story.

The Story

I've received a few comments for a different example than the "Account" example.  I'll just pull an example from Jimmy Nilsson's excellent Applying Domain-Driven Design and Patterns book (from page 118):

List customers by applying a flexible and complex filter.

Not story-friendly, but a description below lets me create a meaningful story:

As a customer support staff
I want to search for customers in a very flexible manner
So that I can find a customer record and provide them meaningful support.

I'm playing both developer and business owner, but normally these stories would be written by the customer.  Otherwise, so far so good.  What about a scenario for this story?  Looking further into the book, I found a suitable scenario on page 122.  Reworded in the BDD syntax, I arrive at:

Scenario: Find by name

Given a set of valid customers
When I ask for an existing name
Then the correct customer is found and returned.

Right now, I only care about finding by name.  It could be argued that the original story is too broad, but it will suffice for this example.  I'm confident those conversations will take place during iteration planning meetings in any case.

Now that we have a story and a scenario, I can author the story in NBehave.  First, I'll need to set up my environment to use NBehave.

Setting up the environment

I use a fairly common source tree for most projects:

All dependencies (i.e. assemblies in the References of my project) go into the "lib" folder.  All tools, like NAnt, NUnit, etc. that are used as part of the build go into the "tools" folder.  For NBehave, I've copied the "NBehave.Framework.dll" into the "lib" folder, and the entire NBehave release goes into its own folder in the "tools" folder.  For more information about this setup, check out Tree Surgeon.
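
For reference, the Tree Surgeon-style layout I'm describing looks roughly like this (folder names are the conventional ones; your project name will differ):

```
MyProject\
    src\              <- solution file and all project folders
    lib\              <- referenced assemblies (e.g. NBehave.Framework.dll)
    tools\            <- build-time tools (NAnt, NUnit, NBehave, ...)
    build\            <- build output
    MyProject.build   <- NAnt build script
    go.bat            <- bootstrapper that calls NAnt
```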

Now that I have NBehave in my project, I'm ready to write some NBehave stories and scenarios.

The initial scenario

Before I get started authoring the story and scenario, I need to create a project for these scenarios.  If I have a project named MyProject, its scenarios will be in a MyProject.Scenarios project.  Likewise, its specifications will be in a MyProject.Specifications project.  You can combine the stories and specifications into one project if you like.  Finally, I create a class that will contain all of the stories in my "customer search" theme.

I don't name the class after the class it might be testing, instead I name it after the theme.  The reason is that the implementation of the stories and scenarios can (and will) change independently of the story and scenario definition.  Stories and scenarios shouldn't be tied to implementation details.

After adding a reference to NBehave and NUnit from the "lib" folder, here's what my solution tree looks like at this point:

Note that I named my file after the theme, not the class I'm likely to test (CustomerRepository).  Here's my entire story file:

using NBehave.Framework;

namespace NBehaveExample.Core.Specifications
{
    [Theme("Customer search")]
    public class CustomerSearchSpecs
    {
        [Story]
        public void Should_find_customers_by_name_when_name_matches()
        {
            Story story = new Story("List customers by name");

            story.AsA("customer support staff")
                .IWant("to search for customers in a very flexible manner")
                .SoThat("I can find a customer record and provide meaningful support");

            story.WithScenario("Find by name")
                .Pending("Search implementation")

                .Given("a set of valid customers")
                .When("I ask for an existing name")
                .Then("the correct customer is found and returned");
        }
    }
}

A few things to note here:

  • Theme classes are decorated with the Theme attribute
    • Themes have a mandatory title
  • Story methods are decorated with the Story attribute
  • The initial story is marked Pending, with an included reason

The attributes are identical in function to the "TestFixture" and "Test" attributes of NUnit, where they inform NBehave that this class is a Theme class and it contains Story methods.  NBehave finds classes marked with the Theme attribute, and executes methods marked with the Story attribute.

Now that we have a skeleton story definition in place, I can run the stories as part of my build.

Using the console runner

New in NBehave 0.3 is the console runner, which runs the Themes and Stories and collects metrics from those runs.  To run the above stories, I use the following command:

NBehave-Console.exe NBehaveExample.Core.Scenarios.dll

From the console runner, I get the following output:

NBehave version 0.3.0.0
Copyright (C) 2007 Jimmy Bogard.
All Rights Reserved.

Runtime Environment -
   OS Version: Microsoft Windows NT 5.2.3790 Service Pack 1
  CLR Version: 2.0.50727.1378

P
Scenarios run: 1, Failures: 0, Pending: 1

Pending:
1) List customers by name (Find by name): Search implementation

I only have one scenario thus far, but NBehave already tells me several things:

  • Dot results
    • A series of one character results shows me I have one scenario pending (similar to the dots NUnit outputs)
  • Result totals
    • Includes total scenarios run, number of failures, and number of pending scenarios
  • Individual summary result
    • List of failing and pending scenarios
    • Name of story, scenario, and pending/failing reason

Let's say I want a dry-run of the scenario output for documentation purposes:

NBehave-Console.exe NBehaveExample.Core.Scenarios.dll /dryRun /storyOutput:stories.txt

I've set two switches for the console runner: one to do a dry run, and one to specify a file where the stories will be output.  Story output can be turned on regardless of whether I'm doing a dry run.  Here's the contents of "stories.txt" after I run the statement above:

Theme: Customer search

	Story: List customers by name
	
	Narrative:
		As a customer support staff
		I want to search for customers in a very flexible manner
		So that I can find a customer record and provide meaningful support
	
		Scenario 1: Find by name
			Pending: Search implementation
			Given a set of valid customers
			When I ask for an existing name
			Then the correct customer is found and returned

This output provides a nice, human-readable format describing the stories that make up my system.

Now that I have a story, let's make the story pass, using TDD with Red-Green-Refactor.

Make it fail

First, I'll add just enough to my story implementation to make it compile:

[Story]
public void Should_find_customers_by_name_when_name_matches()
{
    Story story = new Story("List customers by name");

    story.AsA("customer support staff")
        .IWant("to search for customers in a very flexible manner")
        .SoThat("I can find a customer record and provide meaningful support");

    CustomerRepository repo = null;
    Customer customer = null;

    story.WithScenario("Find by name")
        .Given("a set of valid customers",
            delegate { repo = CreateDummyRepo(); })
        .When("I ask for an existing name", "Joe Schmoe",
            delegate(string name) { customer = repo.FindByName(name); })
        .Then("the correct customer is found and returned",
            delegate { Assert.That(customer.Name, Is.EqualTo("Joe Schmoe")); });
}

All I've done is remove the Pending call on the scenario and add the correct actions for the Given, When, and Then fragments.  The "CreateDummyRepo" method is just a helper method to set up a CustomerRepository:

private CustomerRepository CreateDummyRepo()
{
    Customer joe = new Customer();
    joe.CustomerNumber = 1;
    joe.Name = "Joe Schmoe";

    Customer bob = new Customer();
    bob.CustomerNumber = 2;
    bob.Name = "Bob Schmoe";

    CustomerRepository repo = new CustomerRepository(new Customer[] { joe, bob });

    return repo;
}

I compile successfully and run NBehave, and get a failure as expected:

F
Scenarios run: 1, Failures: 1, Pending: 0

Failures:
1) List customers by name (Find by name) FAILED
  System.NullReferenceException : Object reference not set to an instance of an object.
   at NBehaveExample.Core.Specifications.CustomerSearchSpecs.<>c__DisplayClass3.b__2() in C:\dev\NBehaveExample\src\NBehaveExample.Core.Specifications\CustomerSearchSpecs.cs:line 28
   at NBehave.Framework.Story.<>c__DisplayClass1.b__0()
   at NBehave.Framework.Story.InvokeActionBase(String type, String message, Object originalAction, Action actionCallback, String outputMessage, Object[] messageParameters)

Now that I've made it fail, let's calibrate the test and put only enough code in the FindByName method to make the test pass.

Make it pass

To make the test pass, I'll just return a hard-coded Customer object:

public Customer FindByName(string name)
{
    Customer customer = new Customer();
    customer.Name = "Joe Schmoe";
    return customer;
}

NBehave now tells me that I have 1 scenario run with 0 failures:

.
Scenarios run: 1, Failures: 0, Pending: 0

The dot signifies a passing scenario.  Now I can make the code correct and put in some implementation.

Make it right

Since CustomerRepository is just sample code for now, it only uses a List<Customer> for its backing store.  Searching isn't that difficult, as I'm not involving the database at this time:

public Customer FindByName(string name)
{
    return _customers.Find(delegate(Customer customer) { return customer.Name == name; });
}

With this implementation in place, NBehave tells me I'm still green:

.
Scenarios run: 1, Failures: 0, Pending: 0

I can now move on to the next scenario.  If I had additional specifications for CustomerRepository not captured in the story, I can go to my Specifications project to detail them there.

Where we're going

With NBehave's console runner, I can now easily include NBehave as part of my build.  I'm no longer piggy-backing on NUnit to execute and report tests, since I'm writing Stories and Scenarios, not Tests.  That option is still available, though, and I can create stories inside of tests, so we're not forcing anyone to use the Theme and Story attributes if they don't want to.

It's a good start, but there are a few things still lacking:

  • Integration with testing/specification frameworks
    • The story for authoring the "Then" part of the scenario still isn't that great
  • Features targeted for automated builds/CI
    • XML output
    • A nice XSLT formatter for XML output
    • HTML output of stories in addition to raw text
    • Integration with CC.NET
  • Setup/Teardown for stories/themes, with appropriate BDD names
    • Not sure about this one, since having everything encapsulated in the story can guide my API design better

Happy coding!