Monday, January 14, 2008

Making the permanent move

I'm getting pretty tired of cross-posting, so I'm making the permanent move over to Los Techies.  After all, if I get a free t-shirt, then the least I can do is stop dipping my toe in the pool and just jump in.  Cross-posting is for chumps anyway.

You don't need to update your feeds; the existing feed now links to Los Techies.  This will be my last post on the Blogspot site (sniff, sniff), and while you treated me well, Blogspot, your name doesn't rhyme with a tasty, tasty beer.

Thursday, January 10, 2008

Converting tests to specs is a bad idea

When I first started experimenting with BDD, all the talk about the shift in language led me to believe that to "do BDD" all I needed to do was to change my "Asserts" to some "Shoulds".  At the root, it looked like all I was really doing was changing the order of my "expected" and "actual".

In my admittedly short experiences so far, I've found that BDD is much more than naming conventions, language, and some macros.  For me, unit testing was about verifying implementations while specifications were about specifying behavior.  Although I'm not supposed to test implementations with unit testing, conventions led me down this path.  It's easy to fall into the trap of testing implementations, given constraints we put on ourselves when writing unit tests.

Starting with tests

Typically, my unit tests looked something like this:

[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void Transfer_WithValidAccounts_TransfersMoneyBetweenAccounts()
    {
        Account source = new Account();
        source.Balance = 100;

        Account destination = new Account();
        destination.Balance = 200;

        AccountService service = new AccountService();

        service.Transfer(source, destination, 50);
        
        Assert.AreEqual(50, source.Balance);
        Assert.AreEqual(250, destination.Balance);
    }
}
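
For reference, here's a minimal sketch of the classes this test assumes; the post never shows them, so the details are my guess:

public class Account
{
    public decimal Balance { get; set; }
}

public class AccountService
{
    public void Transfer(Account source, Account destination, decimal amount)
    {
        // Debit the source and credit the destination
        source.Balance -= amount;
        destination.Balance += amount;
    }
}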

When I wrote the implementation test-first, I first got a requirement or specification from the business.  It sounded something like "We want to be able to transfer money between two accounts".

Before I started writing my tests, I had to figure a few things out.  Our naming conventions forced us into a path that made us choose where the behavior was supposed to reside, as we named our test fixtures "<ClassUnderTest>Tests".  In the example above, we're testing the "AccountService" class.  Additionally, our individual tests were named "<MethodName>_<StateUnderTest>_<ExpectedBehavior>".  We adopted these naming conventions because they organized the class behavior quite nicely according to its members.

Converting to BDD syntax

I wanted to try BDD, so the quickest way I saw to do it was to change the names of our tests and switch around our assertions:

[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void Transfer_WithValidAccounts_ShouldTransferMoneyBetweenAccounts()
    {
        Account source = new Account();
        source.Balance = 100;

        Account destination = new Account();
        destination.Balance = 200;

        AccountService service = new AccountService();

        service.Transfer(source, destination, 50);

        source.Balance.ShouldEqual(50);
        destination.Balance.ShouldEqual(250);
    }
}
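
"ShouldEqual" isn't an NUnit method; it's typically a small extension method that flips the order of "expected" and "actual".  A hypothetical sketch:

public static class SpecificationExtensions
{
    // Lets assertions read left to right: actual.ShouldEqual(expected)
    public static void ShouldEqual<T>(this T actual, T expected)
    {
        Assert.AreEqual(expected, actual);
    }
}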

Now that I see the word "Should" everywhere, that means I'm doing BDD, right?

Just leave it alone

BDD is much more than naming conventions and the word "should"; it's about starting with a context, then defining behavior without any hint of an implementation.  When I initially created my Transfer test, our naming conventions forced me to decide two things before I could write ANY code:

  • Which class the behavior belongs to
  • What method name should be assigned to the behavior

But when writing true BDD-style specs, I don't care about the underlying class or method names.  All I care about is the context and specifications, and that's it!  If I were writing the Transfer behavior BDD-first, I might end up with this:

[TestFixture]
public class When_transferring_money_between_two_accounts_with_appropriate_funds
{
    [Test]
    public void Should_reflect_balances_appropriately()
    {
        Account source = new Account();
        source.Balance = 100;

        Account destination = new Account();
        destination.Balance = 200;

        source.TransferTo(destination, 50);

        source.Balance.ShouldEqual(50);
        destination.Balance.ShouldEqual(250);
    }
}

The key difference here is that nowhere in the fixture name or the test method name will you find any mention of types, member names, or anything else that hints at an implementation.  I'm driven purely by behavior, which led me to a completely different design than my test-first design.  In the future, if I decide to change the underlying implementation, I don't need to do anything with my BDD specs.  When I've decoupled the implementation of behavior completely from the specification of behavior, I can make much more dramatic design changes, as I won't be bound by my tests.
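
Notice the spec also drove the behavior onto the Account itself rather than a service.  A minimal sketch of what that design might look like (my assumption, as the post doesn't show the implementation):

public class Account
{
    public decimal Balance { get; set; }

    // Behavior lives on the domain object instead of an AccountService
    public void TransferTo(Account destination, decimal amount)
    {
        Balance -= amount;
        destination.Balance += amount;
    }
}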

I've also found I don't modify specifications nearly as much as I used to modify tests.  If behavior is changed, I delete the original spec and add a new one.

So don't trick yourself into thinking that you need to modify your tests to become BDD-like.  Just leave those tests alone; they're doing exactly what they're designed to do.  A single context is likely split across many, many test fixtures, so it's just not an exercise worth undertaking.  Start your BDD specs fresh, unencumbered by existing test code, and the transition will be much, much easier.

Stop the Flash insanity

More and more, it seems, high-profile websites are using Flash as a mechanism to deliver essential content.  In extreme cases, such as mycokerewards, the entire site is built on Flash.  Ads in websites, which you used to be able to ignore, now use Flash to replace the entire screen contents, screaming at you to "GO SEE CLOVERFIELD!!!".

My dev machine is a fairly hefty beast, but it still has a hard time processing Flash-only sites.

I'm not even doing anything on the Flash-only site except looking at it.  It's running some ridiculous, pointless animation of bubbles floating around, and that requires 40% of my dual-core machine's resources.  When I look at the site on a single-core machine, I pretty much can't use Firefox any more, as it's completely consumed with those floating bubbles.

Sites that used to be relatively easy to get around, like ESPN.com, are now just annoying, as they're starting to rely heavily on Flash to deliver actual content.  Please don't start playing some highlights video when I'm just visiting your homepage; I really don't like Stuart Scott screaming "BOOYAH" at me through my speakers.

If anything, Flash should be used to complement content, not be the actual content.  To deal with the normal annoyances, I go back and forth between these two Firefox add-ons:

  • Adblock Plus (blocks ad content, but not other Flash content)
  • Flashblock (blocks ALL Flash content, letting you opt-in to anything you want to see)

Flash for delivering ad content is perfectly fine, as long as it's non-intrusive and non-resource intensive.  Flash for delivering site content is just plain heinous, and I hope Santa delivers coal in those perpetrators' stockings next year.

Wednesday, January 9, 2008

More on Scrummerfall

A couple of comments have led me to think that I didn't explain what Scrummerfall actually is.  Let's review the waterfall phases:

  • Requirements specifications
  • Design
  • Implementation
  • Integration
  • Testing
  • Deployment

Each of these has a gated exit, such that to exit one phase, you need to meet certain criteria.  For example, to leave "Design" phase, you have to have your detailed design with estimates signed off by developers, analysts, and the customer.  You cannot enter Implementation until you have finished design.

Scrummerfall still uses a phase-based methodology, but uses iterations for the "Design" and "Implementation" phases.  Testing is done as unit tests during development, but QA is not involved until the actual Testing phase later.

Scrummerfall is easier to introduce in companies heavily invested in waterfall, as only one group (the developers) actually changes how it works.

Scrummerfall also makes two assumptions that become less valid, and more costly, the larger the project:

  • Requirements don't change after Design
  • Integration, Testing, and Deployment are best done at the end

Now just because Agile doesn't have gated phases for these activities doesn't mean they don't happen.  Design and requirements gathering still happen, as do release planning, testing, deployment, etc.  The difference is that all of these activities happen each iteration.

This is very tough in fixed-bid projects, which assume that requirements, cost, and deadline don't change.  There are alternatives to fixed-bid projects, which I won't cover here, that provide the best of both fixed-bid and time-and-materials projects.

With Agile, you don't do a "Testing" iteration and a "Design" iteration.  That's lipstick on a pig; you're still doing waterfall.

So how do you avoid Scrummerfall if you're trying to introduce Agile into your organization?  The trick is to sell the right ideas to all of the folks involved.  If it's only developers leading Agile adoption, chances are you won't get too far past TDD, continuous integration, pair programming, and the rest of the engineering-specific XP practices.

Get buy-in from an analyst, a PM, a tester, your customer, and your developers.  You won't have to convert all of the analysts and PMs, just the ones working on your project.  Remember, each person needs to see tangible business value from the changes you are proposing.  I tend to target management with Scrum and developers with XP, because although it's easy to get everyone to agree on values and principles, the concrete practices vary widely between the different roles.

Tuesday, January 8, 2008

For the record

This is not Scrum:

  • Planning
    • Release planning
    • Backlog creation
    • Architecture and high-level design
  • Development sprints
    • Design
    • Code
    • Test
  • Conclusion
    • System integration
    • System test
    • Release

This is Scrummerfall, where we still do a phase-based waterfall model, but do iterations during the "development" phase.  I hear the word "hybrid" thrown around a lot in the above model.

This isn't the cool kind of hybrid, like a Prius or a Liger (pretty much my favorite animal).  It's more of the Frankenstein hybrid, where it looks good on paper, but in the end you need the village mob with pitchforks and torches to drive it away.  Afterwards, the villagers have a bad taste in their mouth regarding science, so they banish it instead of booting out the mad scientist.

One of the key benefits of agility is the ability to respond to change through feedback.  If the backlog is set in stone before my "development sprints" start, how do I change the backlog once business requirements and priorities change, which they inevitably will?  There is no mechanism to respond to change in Scrummerfall, dooming your project to quick failure, but now you get to blame Scrum.

Another disaster waiting to happen is waiting until the end to do system integration.  I think everyone learned that "it works on my computer" doesn't fly very far once you start collecting a paycheck.  Customers demand the software works on their machine, not yours.  So why wait until the very end to do the riskiest aspect of development, when cost of failure is at its highest?  It pretty much guarantees failure and some exciting blamestorming meetings.

I think these models come about from those who think Agile and Scrum are just another process, a cog to switch out.  Agile isn't a process, it's a culture, a mindset, a belief system, a set of values, principles, and practices.  Treating it like just another process gives rise to tweaking and leads to hideous hybrid Frankenstein monsters, roaming the countryside and spreading failure.

Monday, January 7, 2008

SVN and proxy servers

So I wanted to check out the sample code Jeffrey demonstrates in the latest dnrTV episode.  Instead, I got a fun message back from SVN that was completely meaningless to me.

It turns out that although SVN works over port 80, the proxy server I'm behind filters the WebDAV HTTP methods SVN relies on, one of which is PROPFIND.  Google tells me that I can configure SVN to use the proxy server explicitly, which is what I have to do for several other applications (including Live Writer).  Although I have to hard-code my password into the configuration file, it fixes the problem:
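
The original post showed the fix as a screenshot; the relevant section of Subversion's "servers" configuration file (found under %APPDATA%\Subversion on Windows) looks something like this, with placeholder values:

[global]
http-proxy-host = proxy.example.com
http-proxy-port = 8080
http-proxy-username = myusername
http-proxy-password = mypassword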

I have to change the configuration file every few months when my password expires, but otherwise I'm good.

NBehave source moved to Google Code

Well, we're fed up with it.  CodePlex is a great home for everything...except source control.  We've decided to keep all things project related (wiki, releases, discussions, etc.) on CodePlex, but move our source to Google Code.

We got the inspiration from the MvcContrib project, which is doing the same thing.  Too many people love their SVN, it turns out.  With Google Code, we can use SVN as our SCC provider instead of TFS.  Pulling down source, submitting patches, and other source-related tasks should get much easier.

Our Google Code project can be found here:

http://nbehave.googlecode.com

And the source can be browsed here:

http://nbehave.googlecode.com/svn/trunk/

Our public CCNET instance has been updated, which can be found here:

http://ccnet.nbehave.org/

It's been a bit of a journey integrating NSpec and finding a new home for the source code, and moving to SVN should make all of our lives a little easier.

Thursday, January 3, 2008

Application Root is your friend

It still surprises me how many ASP.NET developers I run into don't know about the different ways to construct path references in ASP.NET.  Let's say we want to include an image in our website.  This image is hosted on our website, in an "img" subfolder off of the application root.  So how do we create the image HTML, and what do we use as the URL?  The wrong answer can lead to big-time maintenance headaches later.

There are three kinds of paths we can use:

  • Absolute
  • Relative
  • Application root (ASP.NET only)

Additionally, we have a few choices in how we create the image in our ASP.NET page:

  • Plain ol' HTML
  • HTML server control
  • Web server control

Each kind of path can be used with each rendering object type (HTML, server control).  It turns out that the path matters much more than the rendering object, as different forces might lead me to use controls over HTML.  For posterity, I'll just pick plain ol' HTML as the example.

Absolute

Absolute paths are fully qualified URL paths that include the domain name in the URL:

<img src="http://localhost/EcommApp/img/blarg.jpg" />

Absolute paths work great for external resources outside of my website, but are poor choices for internal resources.  Typically ASP.NET development is done on a development machine, and deployed to a different machine, which means the URLs will most likely change.

For example, the URL above works on my local machine, but breaks when deployed to the server, because the "EcommApp" application now resides at the root and I need a URL like "http://ecommapp.com/img/blarg.jpg".  Since this absolute path is different, my link breaks, and I have to make lots of changes going back and forth between production and development.  For internal resources, absolute paths won't work.

Relative

Relative paths don't specify the domain name, and come in a few flavors:

  • Site-root relative
  • Current page relative
  • Peer relative

These URL path notations are similar to file path notations.  Each is slightly different and carries its own issues.

Site-root relative

Here's the same img tag used before, now with a site-root relative path:

<img src="/EcommApp/img/blarg.jpg" />

Note the lack of the domain name and the leading slash; that's what makes this a site-root relative path.  These paths are resolved against the current domain, which means I can go from "localhost" to "ecommapp.com" with ease.

Again, the problem I run into is that locally, my app is deployed off of an "EcommApp" folder, but on the server, it's deployed at the root.  My image breaks again, so site-root relative paths aren't a great choice, either.

Current page relative

Now the img tag using a current page relative path:

<img src="img/blarg.jpg" />

This time, I don't have the leading slash, nor do I include the "EcommApp" folder.  This is because current page relative paths are constructed off the URL being requested, which in this case is the "default.aspx" page at the root of the application.  The path is resolved relative to the "default.aspx" page, wherever that might be.  Now my URL does not have to change when I deploy to production; it works in both places.

But I have two problems now:

  • Moving the page means I have to change all of the resource URLs
  • Creating a page in a subfolder means all URLs to the same resource could be different

This leads me to the last kind of relative path.

Peer relative

Suppose I want to create the img tag in a site with the following structure:

  • \Root
    • \img
      • blarg.jpg
    • \products
      • default.aspx

Note that default.aspx has to go up one node, then down one node to reference the file in the above tree.  Here's the img tag to do just that:

<img src="../img/blarg.jpg" />

Similar to folder paths, I use the ".." operator to climb up one node in the path, then specify the rest of the path.  This path works just fine in production and development, but I still have two main problems:

  • URLs to the same resource are different depending on depth of the source file in the tree
  • Moving a resource forces me to manually fix the relative paths

If I decide to move the "default.aspx" page up one level, all of the relative paths must be manually fixed.

But there's one more major issue.

User controls

Now let's suppose I have the following setup:

  • \Root
    • \img
      • blarg.jpg
    • \products
      • default.aspx
    • \user
      • login.aspx
    • \support
      • help.aspx
    • \usercontrols
      • header.ascx
    • default.aspx

All of the ASPX files use the same "header.ascx" control (I'm not using master pages on this site).  The "header.ascx" control needs to reference the image, but note that the relative path is resolved against the page requested, not the user control.  A path like "../img/blarg.jpg" inside "header.ascx" works when the control is rendered from "/products/default.aspx", but breaks when rendered from the root "default.aspx".  The relative URL only works if the user control happens to be included in a page at the correct depth.  All other times it breaks, and this is a huge problem.

Luckily, ASP.NET includes a handy way to fix all of these problems, deployment and otherwise.

Application root

A path built with an application root is prefixed with a tilde (~).  For example, here's a raw application path to the image:

~/img/blarg.jpg

Note the "~/" at the front, that's what signifies it as an application root path.  Application root paths are constructed starting at the root of the application.  For example, both "http://ecommapp.com" and "http://localhost/EcommApp" are application roots, so I don't have to worry about changing paths at deployment.

Additionally, I don't have to worry about problems with node depth in the hierarchy, as paths are formed from the root and not relative to a leaf node, so my user control problem disappears.

One issue with application root paths is that only ASP.NET knows about them, not browsers.  If I do this:

<img src="~/img/blarg.jpg" />

The image breaks, as browsers don't know what IIS applications are; they just know URLs.  ASP.NET, however, will take this URL and generate the correct relative path, as long as I use ASP.NET to generate it.  Server controls, like "asp:HyperLink", can handle the application root path.
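
For example, both of these server controls accept the application root path directly and render the correct relative URL:

<asp:Image runat="server" ImageUrl="~/img/blarg.jpg" />
<asp:HyperLink runat="server" NavigateUrl="~/products/default.aspx" Text="Products" />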

To use the application root path in raw HTML, I just need to use the ResolveUrl method, which is included in the Control class, and therefore available in both my Page and UserControl classes.  Combining raw HTML and the ResolveUrl method, I get:

<img src="<%= ResolveUrl("~/img/blarg.jpg") %>" />

The "<%= %>" construct is basically a "Response.Write", and allows me to call the ResolveUrl method directly.

Using application root paths allows me to:

  • Develop locally and deploy to production seamlessly
  • Have consistent URL per resource
  • Use raw HTML without the problems of absolute and relative paths

Some caveats

No matter what I do, I have to change code if either the resource or the page moves.  I can minimize the number of changes by externalizing the specific path (the "~/img/blarg.jpg" part) to a resource file, constant, or static global variable.  This applies for all types of paths, so I like to eliminate as much duplication as possible.
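
As a hypothetical sketch, the externalized path might live in a simple constants class:

public static class ImagePaths
{
    // One place to change if the image ever moves
    public const string Blarg = "~/img/blarg.jpg";
}

Then the markup becomes <img src="<%= ResolveUrl(ImagePaths.Blarg) %>" />, and a moved resource is a one-line fix.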

It's dangerous to assume that the structure or names of resources and pages won't change at some point.  As a web site grows, it can become necessary to move resources around and reorganize the site structure.  To minimize the impact of deployment and change, use application root paths as much as possible; you'll save on Excedrin later.

Time expired, still going strong

My Live Writer beta expired at the beginning of the year, but I'm still going strong.

I don't see an "Ask Me.....Never" button on the nag dialog, so I'll just click the Later button.  It's still less work to click a button once a day than to jump through the hoops of getting Live Writer installed on a Server 2003 machine.  I still love the LW, so I'll keep clicking away.

Wednesday, January 2, 2008

Targeting multiple environments through NAnt

One of the nice things about using a command-line local build is that I can easily target multiple environments.  Our configuration scheme is fairly straightforward, with all changes limited to one "web.config" file.

When I refer to multiple environments, I'm talking about many individual isolated deployment targets, such as production, integration, developer, local, etc.  Each environment has its own database, services, maybe even domain.  Sometimes I need to configure my local code to point to different environments, where maybe a defect shows up in production but not our integration environment.

A typical scenario might be that I have a different database in each environment.  Different databases mean different connection strings, and my connection strings are stored in my "web.config" file.  The problem is that the "web.config" file is stored in source control, and I don't want to check the file in and out each time I want to target a different environment.

Additionally, I don't want to have to remember the connection string when I switch to a different environment.  I want it all automated, and I want it to just work.
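
For context, the connection string lives in an appSettings entry in "web.config", reconstructed here from the XPath used later in the build script:

<configuration>
  <appSettings>
    <add key="ConnectionString"
         value="Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=true" />
  </appSettings>
</configuration>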

To point our local codebase at different environments, we apply a few tricks to NAnt to make it easy to switch back and forth between many environments.

The command-line build

The first item we have set up is a command-line build and local deployment.  Our environment is too complex for solution compilation alone to be enough to actually run our app, so we use NAnt to build and run our software.  To do this, I have a very simple "go.bat" batch file that calls NAnt with the appropriate command-line arguments:

@tools\nant\NAnt.exe -buildfile:NBehave.build %*

When I call NAnt from the command-line, I can pass in multiple targets without needing to specify the build file or other arguments every time:

go clean test deploy

Now that I can easily call different targets in the build, I can use that mechanism to target different environments, doing something like this:

go PROD clean test deploy

Configuring NAnt

To get a NAnt target to change my configuration, I need a few elements in place:

  • File to hold configuration entries
  • Target to load configuration
  • Tasks to apply configuration

The basic idea is that the "PROD" or "SIT" or "DEV" target will load up specific configuration properties.  After compilation, these configuration properties will be inserted back into the web.config file.  I will have a set of configuration properties for each environment that have the same name, but different values.

Configuration settings file

I like to keep my configuration settings in a separate build file, so I created an "environmentSettings.build" file to hold all of the settings for each environment:

<?xml version="1.0" encoding="utf-8"?>
<project name="Environment Settings" xmlns="http://nant.sf.net/schemas/nant.xsd">

  <target name="config-settings-PROD">
    <property name="connection_string" value="Data Source=prddbsvr;Initial Catalog=AdventureWorks;Integrated Security=true" />
  </target>

  <target name="config-settings-SIT">
    <property name="connection_string" value="Data Source=sitdbsvr;Initial Catalog=AdventureWorks;Integrated Security=true" />
  </target>

  <target name="config-settings-DEV">
    <property name="connection_string" value="Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=true" />
  </target>

</project>

Two important items to note are:

  • Target names differ only by the last part, the target environment
  • Targets all define the same property, namely "connection_string", but with a different value for each environment

Selecting configuration

Now that my configuration settings file is finished, it's time to turn our attention back to the main build script file.  I need to add targets to handle "PROD", "SIT", etc.  Additionally, I want to define a property that has a default environment setting.

The targets that handle "PROD", etc. don't need to do much other than re-define the environment setting property and load the targets from the new file.  Here are those targets:

<property name="target-env" value="DEV" />

<target name="DEV">
  <property name="target-env" value="DEV" />
  <call target="load-config-settings" />
</target>

<target name="SIT">
  <property name="target-env" value="SIT" />
  <call target="load-config-settings" />
</target>

<target name="PROD">
  <property name="target-env" value="PROD" />
  <call target="load-config-settings" />
</target>

<target name="load-config-settings" unless="${target::has-executed('load-config-settings')}">
  <include buildfile="${env-settings.file}" />
</target>

The first thing to note here is the declaration of the "target-env" property at the top, along with the "env-settings.file" property pointing at the settings file created earlier.  The "target-env" property will be useful later on when making decisions based on the target environment.

Next, I declare a set of targets named after my target environments, namely "DEV", "SIT", and "PROD".  These are also the same names as the suffixes of the target names in the "environmentSettings.build" file I created earlier.  In each of these targets, I override the "target-env" property with its new value, the target environment.  Remember that in my "go.bat" file, all command-line arguments are targets to be executed by NAnt, so I have to create a specific target for each environment I want to support.

Finally, I call the "load-config-settings" target.  Its responsibility is simply to load the environment settings build file I created earlier, not to call any of its targets.  The reason for the "unless" attribute is that NAnt does not allow you to declare the same targets twice, so I need to make sure the "load-config-settings" target executes at most once.

Loading and applying configuration

Now that I have all of the targets loaded, I need to call the appropriate settings target and apply the configuration properties to the web.config file.  This step is usually done post-compilation, but I can apply the settings any time after they are loaded:

<target name="modify-web-config">
  
  <call target="config-settings-${target-env}" />

  <xmlpoke
    file="${deploy.dir}/Web.Config"
    xpath="/configuration/appSettings/add[@key='ConnectionString']/@value"
    value="${connection_string}"
   />

</target>

First, this target calls "config-settings-XXXXX", where the last part is filled in by the "target-env" property declared earlier.  If I chose "SIT", the "config-settings-SIT" target is called.  If I chose "PROD", the "config-settings-PROD" target is called.  Recall also that the "config-settings-XXXX" targets all declare the same properties, but with different values.

Finally, I use the xmlpoke task to modify the web.config file, giving it the new "connection_string" property value set up from the "config-settings-XXXX" target.

Now, if I want to target different environments, all I need to do is put in the environment name when calling the batch script, such as "go SIT deploy-local", and my local app now targets a different environment.  If there are more complex things I need to do based on the target environment, all I need to do is check the "target-env" property.
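
For example, any NAnt task accepts an "if" attribute, so a production-only step might look like this hypothetical sketch (the "precompile-views" target is made up for illustration):

<target name="deploy-extras">
  <!-- Only run this step when targeting production -->
  <call target="precompile-views" if="${property::get-value('target-env') == 'PROD'}" />
</target>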

Wrapping it up

There are many different ways to target different environments, such as web deployment projects and solution configurations.  I found that using NAnt integrated well with our command-line build and gave us a maintainable solution, as all build/deployment logic lives in one build script instead of being spread over many project or solution configurations.