Thursday, November 29, 2007

Hall of shame

We keep a "Hall of Shame" of WTF-level code snippets to remind us that however bad we might think things are, they could always be worse.  It also reminds us that we can't be complacent about ignorance: a lack of strong technical leadership in critical applications can have far-reaching negative consequences.  Here are a few fun examples.

Runtime obsolescence

I've seen this happen when we have members we don't want anyone to use anymore.  Unfortunately, the implementation didn't quite come out right:

public static bool Authenticate(string username, string password)
{
    throw new Exception("I'm obsolete, don't call me, bro!");
}

Nobody learns not to call this method until the code actually executes at runtime, which could happen at any phase of development.  If it's an obscure method and test coverage is low, it might not get called at all until production deployment.

We want to communicate to developers at compile-time that members are obsolete, and we can do this with the ObsoleteAttribute:

[Obsolete("Please use the SessionContext class instead.", true)]
public static bool Authenticate(string username, string password)

Both the IDE and compilers support this attribute to give feedback to developers when a member should no longer be used.
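To illustrate the difference the second argument makes, here's a minimal sketch (the AuthService class name and method bodies are made up for illustration; only the attribute usage matters):

```csharp
using System;

public static class AuthService
{
    // Message only: callers get a compiler warning, but the build still succeeds
    [Obsolete("Please use the SessionContext class instead.")]
    public static bool AuthenticateLegacy(string username, string password)
    {
        return false;
    }

    // The 'true' flag upgrades the warning to a compile error at every call site
    [Obsolete("Please use the SessionContext class instead.", true)]
    public static bool Authenticate(string username, string password)
    {
        return false;
    }
}
```

Declaring the obsolete members compiles fine; it's only the call sites that get flagged, which is exactly the compile-time feedback we wanted.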

Exception paranoia

I've seen exception paranoia mostly when developers don't have a thorough understanding of the .NET Framework.  A try...catch block is placed around code that can absolutely never fail, but juuuuuust in case it does:

public static MyIdentity Ident
{
    get
    {
        try
        {
            if ((HttpContext.Current != null) && (HttpContext.Current.User != null) && (HttpContext.Current.User.Identity != null))
            {
                return HttpContext.Current.User.Identity as MyIdentity;
            }
            return null;
        }
        catch (Exception exception)
        {
            return null;
        }
    }
}

This is also another case of TCPS (Try-Catch-Publish-Swallow).  The "if" block already takes care of any failures of the "try" block, and since we're using the "as" operator, the downcast can never fail either.  The entire block above was reduced to a single line:

public static MyIdentity Ident
{
    get { return HttpContext.Current.User.Identity as MyIdentity; }
}

If HttpContext.Current is null, I'd rather throw the exception, as my ASP.NET app probably won't work without ASP.NET up and running.

Clever debug switches

This is probably my favorite Hall of Shame example.  So much so that we named an award at my previous employer after the pattern.  Sometimes I like to insert a block of code for debugging purposes.  If I want my debug statements to live on, I have a variety of means to do so, including compiler directives, the ConditionalAttribute, or simply relying on source control history.  A not-so-clever approach we found was:

public void SubmitForm()
{
    if (false)
    {
        MessageBox.Show("Submit button was clicked.");
    }
}

To make the code run, you change the block to "if (true)".  This had us cracking up so much that we named our Hall of Shame award after it: the "If False Award".  Whenever we found especially heinous or corner-cutting code, its author earned the "If False Award".  We never awarded ignorance, only laziness.
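For the record, the saner alternatives mentioned above look something like this (the SubmitHandler name and the counter are hypothetical, added just to make the sketch observable):

```csharp
using System;
using System.Diagnostics;

public static class SubmitHandler
{
    public static int SubmitCount;

    public static void SubmitForm()
    {
        SubmitCount++;

#if DEBUG
        // Compiler directive: this block only exists in builds
        // where the DEBUG symbol is defined
        Console.WriteLine("Submit button was clicked.");
#endif

        LogClick("Submit button was clicked.");
    }

    // ConditionalAttribute: the compiler strips calls to this method
    // entirely unless DEBUG is defined at the call site
    [Conditional("DEBUG")]
    private static void LogClick(string message)
    {
        Console.WriteLine(message);
    }
}
```

Either way, the debug statements live on in source control without anyone having to flip an "if (false)" by hand.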

Opportunities to improve

It's easy to let a "Hall of Shame" create negative and unconstructive views towards the codebase.  We always used it as a teaching tool to show that we would fix ignorance, but we would not tolerate laziness.

Every developer starts their career writing bad code.  Unfortunately, some never stop.

Intention-concealing interfaces: blob parameters

When someone is using your code, you want your code to be as explicit and easy to understand as possible.  This is achieved through Intention-Revealing Interfaces.  Evans describes the problems of opaque and misleading interfaces in Domain-Driven Design:

If a developer must consider the implementation of a component in order to use it, the value of encapsulation is lost.  If someone other than the original developer must infer the purpose of an object or operation based on its implementation, that new developer may infer a purpose that the operation or class fulfills only by chance.  If that was not the intent, the code may work for the moment, but the conceptual basis of the design will have been corrupted, and the two developers will be working at cross-purposes.

His recommendation is to create Intention-Revealing Interfaces:

Name classes and operations to describe their effect and purpose, without reference to the means by which they do what they promise.  This relieves the client developer of the need to understand the internals.  The name should conform to the Ubiquitous Language so that the team members can quickly infer their meaning.  Write a test for a behavior before creating it, to force your thinking into client developer mode.

Several times now I've run into blob parameters (I'm sure there are other names for them).  Blob parameters are amorphous, mystery parameters used in constructors and methods, usually of type ArrayList, Hashtable, object[], or even params object[].  Here's an example:

public class OrderProcessor
{
    public void Process(object[] args)
    {
        Order order = (Order) args[0];
        Customer customer = (Customer) args[1];

        // etc.
    }
}

The intentions behind this code are good, but the results can be frustrating.  Furthermore, this pattern leads to "Intention-Concealing Interfaces", a style completely opposite from the Intention-Revealing style Evans proposes.  Clients need to know the intimate details of the component's internal implementation in order to use the blob parameters.

Wrong kind of extensibility

Blob parameters lead to the worst kind of coupling, beyond just "one concrete class using another concrete class".  When using blob parameters, the client gets coupled to the internal implementation of unwinding the blob.  Additionally, new client code must have intimate knowledge of the details, as the correct order of the parameters is not exposed by the blob method's interface.

When a developer needs to call the "Process" method above, they are forced to look at the internal implementation.  Hopefully they have access to the code, but otherwise they'll need to pop open Reflector to determine the correct parameters to pass in.  Instead of learning if the parameters are correct at compile-time, they have to wait until run-time.


Several alternatives exist before resorting to blob parameters:

  • Explicit parameters
  • Creation method
  • Factory patterns
  • IoC containers
  • Generics

Each of these alternatives can be used separately or combined together to achieve both Intention-Revealing Interfaces and simple code.

Explicit Parameters

Explicit parameters just means that members should ask explicitly, through their signatures, for what they need to operate.  The Process method would be changed to:

public class OrderProcessor
{
    public void Process(Order order, Customer customer)
    {
        // etc.
    }
}

The Process method takes exactly what components it needs to perform whatever operations it does, and clients of the OrderProcessor can deduce this simply through the method signature.

If this signature needs to change due to future requirements, you have a few choices to deal with this change:

  • Overload the method to preserve the existing signature
  • Just break the client code

People always assume it's a big deal to break client code, but in many cases, it's not.  Unless you're shipping public API libraries as part of your product, backwards compatibility is something that can be dealt with through Continuous Integration.
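If you do want to preserve the signature, overloading can be sketched like this (ShippingMethod and LastShipping are hypothetical additions, standing in for whatever the new requirement happens to be):

```csharp
public class Order { }
public class Customer { }
public enum ShippingMethod { Standard, Express }

public class OrderProcessor
{
    public ShippingMethod LastShipping { get; private set; }

    // Existing clients keep compiling against the old signature...
    public void Process(Order order, Customer customer)
    {
        Process(order, customer, ShippingMethod.Standard);
    }

    // ...while new clients opt into the extended one
    public void Process(Order order, Customer customer, ShippingMethod shipping)
    {
        LastShipping = shipping;
        // etc.
    }
}
```

The old overload simply delegates to the new one with a sensible default, so nothing breaks.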

Creation Method/Factory Patterns/IoC Containers

When blob parameters start to invade constructors, it's a smell that you need some encapsulation around object creation.  Instead of dealing with changing requirements through blob parameters, encapsulate object creation:

public class OrderProcessor
{
    private readonly ISessionContext _context;

    public OrderProcessor() : this(ObjectFactory.GetInstance<ISessionContext>())
    {
    }

    public OrderProcessor(ISessionContext context)
    {
        _context = context;
    }
}

Originally, the ISessionContext was a hodgepodge of different objects that I needed to pass in to the OrderProcessor class.  As these dependencies became more complex, I encapsulated them into a parameter object, and introduced a factory (the "ObjectFactory" class) to encapsulate creation.  Client code no longer needs to pass in a group of complex objects to create the OrderProcessor.
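The "ObjectFactory" above suggests a container facade (StructureMap uses that exact name); a minimal hand-rolled stand-in, just to show the shape of the idea, might be:

```csharp
using System;
using System.Collections.Generic;

// A minimal hand-rolled stand-in for a container facade like
// StructureMap's ObjectFactory: registration maps a type to a builder
public static class ObjectFactory
{
    private static readonly Dictionary<Type, Func<object>> builders =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> builder)
    {
        // Wrap the typed builder so everything fits one dictionary
        builders[typeof(T)] = () => builder();
    }

    public static T GetInstance<T>()
    {
        return (T)builders[typeof(T)]();
    }
}
```

A real IoC container adds lifetime management, auto-wiring of constructor dependencies, and configuration, but the client-facing idea is the same: ask for an interface, get back a fully constructed instance.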


Generics

Sometimes blob parameters surface because a family of objects needs to be created or used, but all have different needs.  For example, the Command pattern that requires data to operate might look like this before generics:

public interface ICommand
{
    void Execute(object[] args);
}

public class TransferCommand : ICommand
{
    public void Execute(object[] args)
    {
        Account source = (Account) args[0];
        Account dest = (Account) args[1];
        int amount = (int) args[2];

        source.Balance -= amount;
        dest.Balance += amount;
    }
}

The ICommand interface needs to be flexible in the arguments it can pass to specific implementations.  We can use generics instead of blob parameters to accomplish the same effect:

public interface ICommand<T>
{
    void Execute(T args);
}

public struct Transaction
{
    public Account Source;
    public Account Destination;
    public int Amount;

    public Transaction(Account source, Account destination, int amount)
    {
        Source = source;
        Destination = destination;
        Amount = amount;
    }
}

public class TransferCommand : ICommand<Transaction>
{
    public void Execute(Transaction args)
    {
        Account source = args.Source;
        Account dest = args.Destination;
        int amount = args.Amount;

        source.Balance -= amount;
        dest.Balance += amount;
    }
}

Each ICommand implementation can describe its needs through the generic interface, negating the need for blob parameters.
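Putting it together, a usage sketch (with the supporting types repeated, and Account pared down to a bare Balance field, so the snippet stands on its own):

```csharp
public class Account
{
    public int Balance;
}

public interface ICommand<T>
{
    void Execute(T args);
}

public struct Transaction
{
    public Account Source;
    public Account Destination;
    public int Amount;

    public Transaction(Account source, Account destination, int amount)
    {
        Source = source;
        Destination = destination;
        Amount = amount;
    }
}

public class TransferCommand : ICommand<Transaction>
{
    public void Execute(Transaction args)
    {
        args.Source.Balance -= args.Amount;
        args.Destination.Balance += args.Amount;
    }
}

public static class Demo
{
    public static int[] Run()
    {
        Account source = new Account { Balance = 100 };
        Account dest = new Account { Balance = 0 };

        // The client states its intent through the strongly-typed
        // parameter object; no casting, no magic array indexes
        ICommand<Transaction> command = new TransferCommand();
        command.Execute(new Transaction(source, dest, 40));

        return new[] { source.Balance, dest.Balance };
    }
}
```

The compiler now enforces what the object[] version left to runtime casts and crossed fingers.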

No blobs

I've seen a lot of misleading, opaque code, but blob parameters take the prize for "Intention-Concealing Interfaces".  Instead of components being extensible, they're inflexible, brittle, and incomprehensible.  Exhaust all of the alternatives before going down the path of these brittle interfaces.  Every blob parameter I've created, I've rolled back later, as they quickly become difficult to deal with.

Monday, November 26, 2007

String extension methods

I got really tired of the IsNullOrEmpty static method, so I converted it into an extension method:

public static class StringExtensions
{
    public static bool IsNullOrEmpty(this string value)
    {
        return string.IsNullOrEmpty(value);
    }
}

Since that method was already static, I just put a simple wrapper around it.  All scenarios pass:

public void Matches_existing_behavior()
{
    string value = null;
    Assert.IsTrue(value.IsNullOrEmpty());

    value = string.Empty;
    Assert.IsTrue(value.IsNullOrEmpty());

    value = "blarg";
    Assert.IsFalse(value.IsNullOrEmpty());
}

The only rather strange thing about extension methods is that they can be called on null references, as shown in the test above.  As long as everyone understands that extension methods are syntactic sugar and not actually extending the underlying types, I think it's clear enough.

The extension method is more readable and lets me work with strings instead of having to remember that it's a static method.  Static methods that reveal information about instances of the same type seem to be prime candidates for extension methods.

Some 80/20 feedback

Jeff had an interesting post today on putting programmers into two bins, an 80% and a 20%.  The 20% were the alpha-geeks, and the 80% were everyone else, "Morts" for lack of a better term.  One characterization was that the 20% folks let tech-related items push past work hours, while the 80% limited tech stuff to between the hours of 8 and 5.  I still don't like the idea of pushing people into bins, but there are really only three types of people I care about:

  • Those that lead
  • Those that listen
  • Those that don't listen

Although I've had a relatively short career in development so far, I've only met maybe one or two people that just didn't listen.  The reasons might be indifference, personal issues, politics, etc.  These people are very few, maybe 1-5% of the people I've worked with, but those are the ones I work to limit my involvement with.

I want leaders, but not all leaders as too many alpha geeks can lead to severe ego clashes.  I don't want all listeners as someone has to set direction.  But as long as I have at least one leader and enough listeners, it's enough to deliver.

Also, it's a mistake to have a goal to bring the 80% up into the 20%.  If someone wants to be in the 20%, fine, but it's not a prerequisite for a successful project.  Being in the 80% bracket just means you have different priorities, but not that you don't care.  Saying the 80% just don't care is misleading, polarizing and narcissistic.

When I used to interview people, I cared less about years of experience, certifications and the like.  Our most successful hires were those who showed a willingness to learn and improve, and an openness to listen and try new approaches.  It only took us about 3-6 months to train good mid-level developers, but it can take years (or never) to convince senior-level folks to listen.

Tuesday, November 20, 2007

Stop the madness

I've been extending a legacy codebase lately to make it a bit more testable, and a few small, bad decisions have slowed my progress immensely.  One decision isn't bad in and of itself, but a small bad decision multiplied a hundred times leads to significant pain.  It's death by a thousand cuts, and it absolutely kills productivity.  Some of the small decisions I'm seeing are:

  • Protected/public instance fields
  • Inconsistent naming conventions
  • Try-Catch-Publish-Swallow
  • Downcasting

The pains of these bad decisions can wreak havoc when trying to add testability to legacy codebases.

Public/protected instance fields

One of the pillars of OOP is encapsulation, which allows clients to use an object's functionality without knowing the details behind it.  From the Framework Design Guidelines (FDG):

The principle states that data stored inside an object should be accessible only to that object.

Followed immediately by their guideline:

DO NOT provide instance fields that are public or protected.

It goes on to say that access to simple public properties is optimized by the JIT compiler, so there's no performance penalty in using the better alternatives.  Here's an example of a protected field:

public class Address
{
    protected string zip;

    public string Zip
    {
        get { return zip; }
    }
}

public class FullAddress : Address
{
    private string zip4;

    public string Zip4
    {
        get
        {
            if (Zip.Contains("-"))
            {
                zip4 = zip.Substring(zip.IndexOf("-") + 1);
                zip = zip.Substring(0, zip.IndexOf("-"));
            }
            return zip4;
        }
    }
}

There was originally a good reason to give the derived FullAddress write access to the data in Address, but there's a better way to approach it:

public class Address
{
    private string zip;

    public string Zip
    {
        get { return zip; }
        protected set { zip = value; }
    }
}

I've done two things here:

  • Added a protected setter to the Zip property
  • Changed the access level of the field to private

Functionally it's exactly the same for derived classes, but the design has greatly improved.  We should only declare private instance fields because:

  • Public/protected/internal violates encapsulation
  • When encapsulation is violated, refactoring becomes difficult as we're exposing the inner details
  • Adding a property later with the same name breaks backwards binary compatibility (i.e. clients are forced to recompile)
  • Interfaces don't allow you to declare fields, only properties and methods
  • C# 2.0 added the ability to declare separate visibility for individual getters and setters

There's no reason to have public/protected instance fields, so make all instance fields private.

Inconsistent naming conventions

Names of classes, interfaces, and members can convey a great deal of information to clients if used properly.  Here's a good example of inconsistent naming conventions:

public class Order
{
    public Address address { get; set; }
}

public class Quote : Order
{
    public void Process()
    {
        if (address == null)
            throw new InvalidOperationException("Address is null");
    }
}

When I'm down in the "Process" method, what is the "address" variable?  Is it a local variable?  Is it a private field?  Nope, it's a property.  Since it's declared camelCase instead of PascalCase, it creates confusion about what we're dealing with.  If it's a local variable, as the name suggests, I might treat the value much differently than if it were a public property.

Deviations from the FDG naming conventions cause confusion.  When I'm using a .NET API that follows Java's camelCase conventions, it's just one more hoop I have to jump through.  In places where my team had public APIs we were publishing, it wasn't even up for discussion whether we would follow the naming conventions Microsoft used in the .NET Framework.  It just happened, as any deviation from accepted convention leads to an inconsistent and negative user experience.

It's not worth the time to argue whether interfaces should be prefixed with an "I".  That was the accepted convention, so we followed it.  Consistent user experience is far more important than petty arguments on naming conventions.  If I developed in Java, I'd happily use camelCase, as it's the accepted convention.

Another item you may notice is that there are no naming guidelines for instance fields.  This reinforces the notion that they should be declared private, and the only people who should care about the names are the developers of that class and the class itself.  In that case, just pick a convention, stick to it, and keep it consistent across your codebase so it becomes one less decision for developers.

Try Catch Publish Swallow

Exception handling can really wreck a system if not done properly.  I think developers might be scared of exceptions, given the number of useless try...catch blocks I've seen around.  Anders Hejlsberg notes:

It is funny how people think that the important thing about exceptions is handling them. That is not the important thing about exceptions. In a well-written application there's a ratio of ten to one, in my opinion, of try finally to try catch. Or in C#, using statements, which are like try finally.
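To make that ratio concrete: the C# "using" statement is compiler shorthand for a try...finally around Dispose, with no catch anywhere in sight (the in-memory stream here is just for illustration):

```csharp
using System.IO;
using System.Text;

public static class UsingDemo
{
    public static string ReadFirstLine(Stream stream)
    {
        // Expands to roughly: try { ... } finally { reader.Dispose(); }
        // Cleanup is guaranteed, but exceptions still propagate to the caller
        using (StreamReader reader = new StreamReader(stream))
        {
            return reader.ReadLine();
        }
    }

    public static string Run()
    {
        byte[] bytes = Encoding.UTF8.GetBytes("first line\nsecond line");
        return ReadFirstLine(new MemoryStream(bytes));
    }
}
```

Resources get cleaned up either way; deciding what to *do* about a failure is left to a layer that actually can.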

Here's an example of a useless try-catch:

public class OrderProcessor
{
    public void Process(Order order)
    {
        try
        {
            Quote quote = (Quote) order;
            quote.Process();
        }
        catch (Exception ex)
        {
            // publish the exception through some means... and then nothing
        }
    }
}

In here we have Try Catch Publish Swallow.  We put the try block around an area of code that might fail, and catch exceptions in case it does.  To handle the exception, we publish it through some means, and then nothing.  That's exception swallowing.

Here's a short list of problems with TCPS:

  • Exceptions shouldn't be used to make decisions
  • If there is an alternative to making decisions based on exceptions, use it (such as the "as" operator in the above code)
  • Exceptions are exceptional, and logging exceptions should be done at the highest layer of the application
  • Not re-throwing leads to bad user experience and bad maintainability, as we're now relying on exception logs to tell us our code is wrong

Another approach to the example might be:

public class OrderProcessor
{
    public void Process(Order order)
    {
        Quote quote = order as Quote;
        if (quote != null)
        {
            quote.Process();
        }
    }
}

The only problem this code will have is if "quote.Process()" throws an exception, and in that case, we'll let the appropriate layer deal with it.  Since I don't have any resources to clean up, there's no need for a "try...finally".


Downcasting

I already wrote about this recently, but it's worth mentioning again.  I spent a great deal of time recently removing a boatload of downcasts from a codebase, made worse by the fact that the downcasts were pointless.  Nothing in client code was using any additional members of the derived class.  It turned out to be a large, pointless hoop I had to jump through to enable testing.

Regaining sanity

The problem with shortcuts and knowingly bad design decisions is that this behavior can become habitual, and the many indiscretions can add up to disaster.  I had a high school band director who taught me "Practice doesn't make perfect - perfect practice makes perfect."

By making good decisions every day, it becomes a habit.  Good habits practiced over time eventually become etched into your muscle memory so that it doesn't require any thought.  Until you run into a legacy codebase that is, and you realize how bad your old habits were.

ReSharper in VS 2008

UPDATE 11/26/2007

Jeffrey Palermo notes that you can use R# in VS 2008 by turning off a couple of features.

Ilya Ryzhenkov, product manager at JetBrains, gives detailed information on R# support in VS 2008 and talks about R# 4.0.

After installing VS 2008 Team Suite yesterday, I thought I'd try the ReSharper 3.0 for VS 2008 download.  The verdict?  Works great unless you use any C# 3.0 feature.  Here's one example of it blowing up (but not crashing):

I'm using:

  • var keyword
  • anonymous types
  • LINQ

That was a simple LINQ query to count the number of types in the .NET Framework that implement IDisposable and end in the word "Context" (to settle a bet).  Here's another example:

That's an experimental API for pre- and post-conditions (for pseudo-design-by-contract) that uses:

  • extension methods
  • lambda expressions

From JetBrains' website, they clearly state in the download page:

ReSharper 3.0 for Visual Studio 2008 (Codename "Orcas") only supports C# 2.0 and Visual Basic .NET 8 language features. C# 3.0 will be supported in the next major release of ReSharper.

Now I have a dilemma.  Using Visual Studio without ReSharper is like taking a trip in a horse and buggy instead of a car.  Sure, it'll get you there, but you'll have to deal with a lot of crap along the way.  Until R# v.Next comes out, I'll just have to deal with it.  If you're using VS 2008, don't install R# 3.0 if you plan on using C# 3.0.  Hopefully it'll come out soon, but I haven't seen any roadmaps yet, so I'll just wait.

Monday, November 19, 2007

Installing Visual Studio 2008

As has already been noted in numerous places, VS 2008 dropped today.  The installation process has been surprisingly smooth, given my past installation experiences.

Downloading went surprisingly quick (after I was able to get in to the subscriptions site, which took about 2 hours).  I averaged around 900KB/sec (that's bytes, not bits), about 10 times more than I expected on RTM day.

Cleaning up

First, you'll need to make sure that you don't have any Beta bits installed.  I had VS 2008 Beta 2 on my dev machine, and no problems cropped up during uninstallation.  I removed pretty much anything that had "2008" and "3.5" in its name, as I had installed quite a few of the Beta 2 releases.

Additionally, I removed them in the reverse order I installed them.  Don't uninstall VS 2008 before Team Explorer for example.  My lack of later installation issues might be attributed to this.


Even after I uninstalled the 2008 and 3.5 apps I found, the 2008 installer noted that I had some pre-release software still installed (.NET Compact Framework) that needed to be removed.  Their installers have always been good about letting you know about conflicting versions, but it's a relief that it plays well with Beta 2.

About halfway through the installation, the installer noted that I needed to close some apps.  These included IE and XML Notepad.  This always happens with me and various MS installers, but I primarily use Firefox, so I didn't have to close any windows I needed.

About an hour later, all of the installations finished (VS 2008 Team Suite, Team Explorer 2008, and MSDN).

First impressions

After the obligatory first-launch screen (which settings do you want?  Whatever ReSharper does!), I noticed that the "Recent Projects" list included the same set of projects as Beta 2, and they all opened with no issues.

The only problem so far is that there is no officially supported version of ReSharper for VS 2008 RTM, and the website says that the next major release will be the first officially supported version.  But let's be honest, any time spent without ReSharper just makes us love it even more, right?

Thursday, November 15, 2007

Progress made

I just wanted to highlight a quote from Scott Guthrie's recent post on MVC (emphasis mine):

VS 2008 Professional now includes built-in testing project support for MSTest (previously in VS 2005 this required a Visual Studio Team System SKU), and our default ASP.NET MVC project template automatically creates one of these projects when you use VS 2008

We'll also be shipping project template downloads for NUnit, MBUnit and other unit test frameworks as well, so if you prefer to use those instead you'll also have an easy one click way to create your application and have a test project immediately ready to use with it.

I know it's not all the way to "convention over configuration", but it is nice that the tool now says, "oh you want to create a website?  Ya gonna need tests."  Now it's a conscious decision on the part of the developer to "opt-out" of tests, instead of opting-in.  Just another way a tool can help guide good decisions.

As it's looking like unit testing is becoming a first-class citizen in both the tools and the frameworks, I think it's time for me to jump ship.  Mainstream is for chumps.  But seriously, it's nice to see that testing concerns are becoming more mainstream, though I think that it just means the community needs to double their efforts in this precarious stage, such that growth continues in the right direction.  I don't want to unwind a bunch of bad design, habits, and practices just because the tooling made it easy.

I do still get demos of "right-click class in Solution Explorer, click 'Generate Unit Tests', watch bugs disappear!", so there's some work needed on the marketing front.  Unfortunately, I saw a lot of not-so-technical decision makers nodding their head and asking, "can the unit test template also do null checking?"  "Can my tool be so awesome and RAD, can I skimp and hire brain-dead developers?"  "Does this mean I don't have to train people how to be good developers, just good tool-users?"

This is a tangent, but slick tooling can help lower the barrier, but it can also take the user in a direction they don't want to go (or don't know they shouldn't go).

Wednesday, November 14, 2007

A downcasting tragedy

And now, for another tale of legacy code woe, here's a gut-wrenching tragedy of OCP and downcasting, in five acts.


Exposition

We have a feature to implement: when the store's configuration settings allow for a survey link, it should show up; otherwise, it shouldn't.  We put these settings in a simple interface (cut down for brevity):

public interface ISettings
{
    bool ShowSurvey { get; }
}

Settings can come from many places, such as configuration files, database, or web service.  This is a pretty nice example of OCP, as our app is open for extension for different "ISettings" implementations, but closed for modification as someone can't decide to use something else besides "ISettings".  Since we have an interface, the source doesn't matter.

We manage access to the settings through a static class:

public static class StoreContext
{
    public static ISettings Settings { get; set; }
}

Actual instances are set early in the ASP.NET pipeline, so later stages have that information.  The original ISettings implementations are stored in ASP.NET Session.

Now, we need to provide functional and regression tests for this feature, using Selenium.

Rising action

To help with functional and regression tests, I'm employing the record and playback state technique I mentioned earlier.  I create an "ISettings" implementation that can be serialized and deserialized into XML, which helps when I want to manually edit the settings:

public class SettingsDto : ISettings
{
    public SettingsDto() { }

    public SettingsDto(ISettings settings)
    {
        ShowSurvey = settings.ShowSurvey;
    }

    public bool ShowSurvey { get; set; }
}

My "download.aspx" page combines both the cart and the settings information into a single DTO, and streams this file to the client.  In my "replay.aspx" page, I take the XML, deserialize it, and put it into Session.  Finally, I redirect to the "shoppingcart.aspx" page, which pulls the ISettings implementation out of Session and sets the StoreContext variable to my deserialized SettingsDto instance.
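The round trip might be sketched with XmlSerializer (the actual page and Session plumbing is omitted, and the ISettings interface is dropped so the snippet stands alone; one plausible mechanism, not necessarily the one the real pages use):

```csharp
using System.IO;
using System.Xml.Serialization;

public class SettingsDto
{
    public bool ShowSurvey { get; set; }
}

public static class SettingsRecorder
{
    // download.aspx side: capture the live settings as hand-editable XML
    public static string Record(SettingsDto settings)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(SettingsDto));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, settings);
            return writer.ToString();
        }
    }

    // replay.aspx side: rebuild the DTO before stuffing it into Session
    public static SettingsDto Replay(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(SettingsDto));
        using (StringReader reader = new StringReader(xml))
        {
            return (SettingsDto)serializer.Deserialize(reader);
        }
    }
}
```

XmlSerializer needs a public type with a parameterless constructor, which is exactly why the DTO carries one alongside its copy constructor.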

This is all fairly straightforward, and I'm pretty happy about this solution as I can now plug in any settings I like and verify the results through automated tests.  As it is with all legacy code, things just aren't that simple.


Climax

Running the code the first time, downloading and replaying go smoothly, and all of my values are serialized and deserialized properly, as my unit tests prove.  However, when I get redirected to the "shoppingcart.aspx" page, I get a distressing exception:

  Unable to cast object of type 'BusinessObjects.SettingsDto'
  to type 'BusinessObjects.DbSettings'.

The dreaded invalid cast exception, something is making some really bad assumptions and killing my code.

Falling action

Just when I thought I was making progress, the InvalidCastException came up and crushed my dreams.  Looking at the stack trace, I see that the exception is coming from the base Page class that all pages in our app derive from.  Here's the offending code:

public abstract class StorePage : Page
{
    protected DbSettings Settings
    {
        get { return (DbSettings)StoreContext.Settings; }
    }
}

A convenience property on the base class allowed derived classes to get at the static ISettings instance.  But on closer inspection, this property is downcasting the ISettings instance coming out of StoreContext to DbSettings.  Downcasting is when you cast to a derived type from an instance of a base type.  That's a fairly large assumption to make, though.  How do I know for sure that the instance is really the derived type, and not some other derived type of the base type?  I can't guarantee that assumption holds, as this example shows.

To make things worse, it explicitly casts instead of using the C# "as" operator (TryCast for VB.NET folks).  With "as", the downcast wouldn't throw when it fails; the result would just be null.  Using the "as" operator is perfectly acceptable as a means of downcasting, though I try to avoid downcasting altogether, as it can be a smell of Inappropriate Intimacy.

Because the code assumes that the ISettings instance is actually seriously for realz a DbSettings instance, I can never take advantage of OCP and swap out different "ISettings" implementations.  Quite depressing.


Denouement

As an experiment, and because I have a nice automated build process, I tried changing the type referenced in the StorePage class to ISettings instead of DbSettings.  Luckily, the build succeeded, and I kept the change.

If it had failed and clients absolutely needed DbSettings, I might have to look at pushing up methods to the ISettings interface, if it's possible.  If I have access to ISettings that is.  If I don't, well, I'm hosed, and my denouement would turn into a catastrophe.

The moral of this story is:

Avoid downcasting, as it makes potentially false assumptions about the underlying type of an object.  Use the "as" operator instead, and perform null checking to determine if the downcasting succeeded or not.  We can then perform safe downcasting and avoid using exceptions as flow control.
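In code, the moral looks something like this (DbSettings, FileSettings, and the connection-string member are illustrative, not the app's real types):

```csharp
public interface ISettings
{
    bool ShowSurvey { get; }
}

public class DbSettings : ISettings
{
    public bool ShowSurvey { get { return true; } }
    public string ConnectionString = "db";
}

public class FileSettings : ISettings
{
    public bool ShowSurvey { get { return false; } }
}

public static class SettingsReader
{
    // Hard cast: throws InvalidCastException for any
    // ISettings implementation that isn't DbSettings
    public static string UnsafeConnectionString(ISettings settings)
    {
        return ((DbSettings)settings).ConnectionString;
    }

    // 'as' plus a null check: degrades gracefully when the
    // assumption about the underlying type doesn't hold
    public static string SafeConnectionString(ISettings settings)
    {
        DbSettings dbSettings = settings as DbSettings;
        if (dbSettings != null)
        {
            return dbSettings.ConnectionString;
        }
        return null;
    }
}
```

The safe version keeps the OCP door open: new ISettings implementations simply flow through instead of blowing up the base page.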

Tip of the day

If you're going to physically demonstrate "handwaving", make sure you don't have hot chai tea in one of the waving hands.

Because it burns.

See also: Cause and effect

The sinking ship

An interesting quote from a colleague today (paraphrased):

Moving developers to projects on the legacy system is like rearranging chairs on the Titanic.

We can tidy it up, but it's still going to the bottom of the Atlantic.  Probably better just to catch the plane.

It came up in the context of investing strategically versus buying tactically.  It's not an either-or question, but it's important to be thinking about long-term goals and assigning resources appropriately to match your strategic view.  The Technical Debt metaphor can offer guidance here too, as buying tactically tends to max out your credit cards (so to speak), while investing strategically actually nets returns.

Monday, November 12, 2007

Coding without confidence

As a favor to my wife, I developed an Excel application to help her employer out with scheduling and status in a small wood manufacturing shop.  Excel was chosen as it was the lowest barrier to entry, and the quickest to get out the door.  It wasn't a mission-critical app and I wasn't getting paid or anything, so I thought I could get something delivered fairly easily.

After all, what was needed was "no big deal" and "we just need something that works".

The final solution used a lot of hidden columns to calculate intermediate results, and macros to make it easier for the end user to interact with the spreadsheet.  The problem is that there are absolutely zero tests on anything, nor can there ever be.  There's no such thing as "ExcelUnit".  I found a "VBAUnit", but it was last released over four years ago.

For a while, I couldn't figure out why I was so worried about releasing (i.e., emailing them a copy) the Excel app.  I realized that I had been coding without any confidence that what I was delivering worked, and would continue to work.

I've become so accustomed to having tests and automated builds as my backup that coding without them felt dirty and a little scary.  I was asked "so does it work?" and my only answer was "I hope so".  I was entrenched in "CyfDD", or "Cross-your-fingers Driven Development".  Does it work?  I dunno, better cross your fingers.  CyfDD is a close cousin of "GoykapDD", or "Get-on-your-knees-and-pray Driven Development".  Releasing only once per year or two leads to GoykapDD.

Sometimes I get asked why I bother with automated tests and builds.  Frankly, without automated tests and builds, I have zero confidence in my code.  Nor should I, or should anyone else for that matter.  Confidence in code written without automated tests and builds is just delusion.

Not a good sign

At the end of lunch today at a Chinese restaurant, I cracked open the obligatory fortune cookie to learn what a stale, shrink-wrapped, mass-produced dessert could portend.

It was empty.

I ask for insight into my fate, and the gods answer in their infinite wisdom, "Mu".

Coming after a big launch at work, I really didn't know what to think about this one.  Can't I just get one cookie telling me "Your launch will be great success.  And the Rockets will win tonight."?  Is that really too much to ask?

Wednesday, November 7, 2007

Some NAnt tips

Jason got me thinking on some other NAnt tips from his post on listing the targets in the build file.

Include NAnt in your source control

I like to put NAnt in source control and NEVER install it.  I've had too many problems upgrading NAnt and having projects break for one reason or another, and I have even more issues with custom tasks I write.  Keeping NAnt in the tree helps to keep your dev machine clean, your build machine clean, and leads to faster dev machine setup time.  Here's my current favorite source tree structure:

  • <ProjectName/Version/Branch>
    • src
      • Where source code goes
    • lib
      • any DLLs referenced from projects, including the NUnit DLLs
    • tools
      • NAnt
      • NUnit
      • NCover
      • any other exe needed for build/development
    • docs
    • project.sln
    • go.bat

This was all inspired by (stolen from) TreeSurgeon.

Put your build file in your solution

I like to create a solution folder to hold extra project-related files, like my build scripts.  When they're in the solution, it makes it easier to load them up and edit them.  Since ReSharper supports many features for NAnt and MSBuild scripts, it makes sense to have them easy to get to.

I already talked about this earlier, but solution folders are a great way to create logical organization in your solution that doesn't have to correspond to the physical organization on disk.

Create a NAnt bootstrap batch file

I can't stand typing in "nant -buildfile:blahblahblah", especially when I execute the NAnt version that's in my source tree (not the one installed).  I like to create a one-line batch file called "go.bat" that bootstraps NAnt and my build file:

@tools\nant\NAnt.exe %*

The @ symbol tells cmd.exe not to echo the command text out, and the path points at the NAnt executable in the source tree (NAnt finds the single .build file in the current directory on its own).  The last part, "%*", just tells cmd.exe to append all arguments after "go" to the "NAnt.exe" command line.  For example, I can now do this:

go clean test deploy

The "clean test deploy" commands are targets in my NAnt build script.  Any other NAnt command line options can be typed in after "go".  I put this batch file at the project root level.

Use the "description" attribute to document public targets

As Jason mentioned, the "-projecthelp" NAnt switch will output a list of targets in your NAnt script.  By providing a description in your target, you can provide some better information in the help output.  Here's what NBehave's help looks like:

The targets are separated into Main and Sub Targets.  Main Targets are targets with descriptions, and Sub Targets are those without.  I can still provide internal documentation with XML comments in the build script.  By only providing descriptions on targets that I need to call often, I can create some level of encapsulation in my build file.
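A sketch of what this looks like in a build file (target names here are illustrative, not NBehave's actual targets):

```xml
<project name="Example" default="test">
  <!-- Has a description, so -projecthelp lists it as a Main Target -->
  <target name="test" description="Compiles and runs the unit test suite" depends="compile">
    <echo message="running tests" />
  </target>

  <!-- No description: only shows up among the Sub Targets -->
  <target name="compile">
    <echo message="compiling" />
  </target>
</project>
```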

I'm sure there are a dozen better tips out there, but these are the ones I make sure to use on every project I'm involved with.

Tuesday, November 6, 2007

Joining Los Techies

How very rude of me; looking back, I forgot to mention that I've joined Los Techies.  A few things drew me to joining:

  • Name sounds very similar to a certain cerveza
  • The promise of a free T-Shirt
  • Being part of a community is important to me, and the shadier the better
  • My continuing quest to lower the bar wherever I go
  • Peer intimidation factor lowered by lack of MVPs, unlike other communities

My feed URL will stay the same, and I'll cross-post between the two sites, as Google seems to like the Blogspot site.  I'll continue to respond to comments on each as well.  However, I'd recommend additionally subscribing to the community feed.  You'll have access to a greater breadth and depth of content than I could ever hope to create.

Guidance over governance

Being part of a large IT organization, I've become acutely aware of the need to provide a central policy or governance over critical IT operations.  While greenfield/product teams can largely operate in independence, the risk associated with large, complex, and business-critical systems demands some degree of oversight.  However, an IT governance team can fall into some huge traps in their effort to set and enforce policy:

  • Overstepping their bounds and enforcing governance where only guidance is needed
  • Failing to measure the outcomes of governance decisions

For governance bodies to be effective, there needs to be accountability for decisions made.  If teams feel negatively affected by governance decisions, there needs to be a feedback loop back into the decision-making process.  Strong leadership needs to be in place to ensure fact-based decisions and not personal, political, or selfishly reasoned decisions.  As governance has so many opportunities to fail, it becomes even more important to harvest policies internally, grow them from observed best-practices, and encourage adoption through prescriptive guidance.

Since how a team delivers is at least as important as what a team delivers, how can IT influence the "how"?  There are basically two approaches:

  • Guidance
  • Governance

From the team's perspective, the only difference between the two is where the decision gets made to incorporate a policy.  With guidance, the team makes the decision, and with governance, someone else does.  This not-so-subtle difference should be a top consideration when IT decides to enforce policy through governance or encourage policy through guidance.

Removing decisions

One of the key ingredients in self-organizing teams is putting the trust in the team to make decisions in how and what to deliver.  Attempts to remove the team's ability to make decisions can quickly have a negative impact unless care is taken to get buy-in from the team.

The more decisions are removed from the team, the more the team feels like an un-creative cog in a machine or a code monkey.  Strong teams thrive on the opportunity to be creative, and teams without creative opportunities become lethargic, disinterested, and largely brain-dead.  It's an enormous waste of resources not to harness the talent available, so the bar needs to be exceedingly high when removing decisions from the team.

It's possible to allow team decisions to be made concerning policy by choosing the scope of the policy.  For example, instead of mandating TDD, require that features have automated tests, then provide guidance on how to do that.  Teams can then decide on specifics.

Favoring guidance

Policies are more effective if they represent a set of core values and principles.  Certain meta-principles are appropriate for governance, such as "automated builds" or "continuous integration".  Once IT governance crosses the line into specifics, that's when decisions are removed from teams and problems arise.

Instead, IT governance bodies can offer guidance on specifics and even support.  They can provide guides to help teams get started, pointing teams in the right direction.  Some teams may have enough internal experience to feed their experiences back into the governance team, providing even more collective knowledge that the entire organization can take advantage of.

As the saying goes, "people are our most important asset", so if accountability is held for a team, it needs to be able to make decisions that ensure success.  If the team is accountable for decisions they didn't make, morale suffers accordingly.  By favoring guidance, the governance team can focus more on growing appropriate policies by measuring the impact and success of their guidance.

Friday, November 2, 2007

Record and playback state in ASP.NET

Most ASP.NET applications hold various state objects in Session, Application, or other stateful mediums.  For regression testing and defect reproducing purposes, often I want to capture the exact state of these objects and replay them back at a later time, without needing to walk through the exact steps needed to setup that state.  For many defects, I don't know how the state got to where it was, I just know what the state is, so I need a mechanism to record it and play it back.

I can record the state by downloading a serialized version of the object, and play it back later by uploading the serialized object file and resuming the application.

Recording state

The basic principle for recording state is that I'd like to serialize one of my state objects and allow the client to download the serialized object as a file.  With direct access to the output stream of the HttpResponse, this turns out to be fairly trivial.

Setting up the state

First, we need some sort of state object.  I'm not too creative, so here's a Cart and Item object, optimized for XML serialization:

public class Item
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

public class Cart
{
    public Guid Id { get; set; }
    public string CustomerName { get; set; }

    [XmlArray(ElementName = "Items"), XmlArrayItem(ElementName = "Item")]
    public Item[] Items { get; set; }
}

Normally I don't put read/write arrays, but to demonstrate the easiest possible serialization scenario, that will do.  For more difficult scenarios, I usually implement IXmlSerializable or provide a separate DTO class strictly for serialization concerns.

In my Default.aspx page, I'll provide a simple button to download the cart, and some code in the code-behind to create a dummy cart and respond to the button click event:

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Cart cart = new Cart();

        cart.Id = Guid.NewGuid();
        cart.CustomerName = "Bob Sacamano";

        Item item = new Item();
        item.Id = Guid.NewGuid();
        item.Name = "Toothpicks";
        item.Price = 100.50m;
        item.Quantity = 99;

        Item item2 = new Item();
        item2.Id = Guid.NewGuid();
        item2.Name = "Ceramic mug";
        item2.Price = 5.49m;
        item2.Quantity = 30;

        cart.Items = new Item[] { item, item2 };

        Session["Cart"] = cart;
    }

    protected void btnDownloadCart_Click(object sender, EventArgs e)
    {
        Response.Redirect("download.aspx");
    }
}

Now that I have a Cart object, I need to fill in the "download.aspx" part to allow downloads of the cart.

Outputting state to HttpResponse

The HttpResponse object has a couple of ways to access the raw byte stream sent to the client.  The first is the OutputStream property, which gives me access to a raw Stream object.  Streams are used by a multitude of objects to write raw bytes, such as serializers, etc.

I don't normally use it when I want the end-user to save the file.  If I'm creating a dynamic image or such, I can go directly against the OutputStream, but an alternate way is to use the HttpResponse.BinaryWrite method.  This will allow me to buffer content as needed, and more importantly, provide buffer size information to the end user.  That's especially important for the various browser "download file" dialogs.

Here's my Page_Load event handler for the "download.aspx" page:

   1:  protected void Page_Load(object sender, EventArgs e)
   2:  {
   3:      Cart cart = Session["Cart"] as Cart;
   4:  
   5:      if (cart == null)
   6:      {
   7:          Response.Write("cart is null");
   8:          Response.End();
   9:          return;
  10:      }
  11:  
  12:      try
  13:      {
  14:          MemoryStream ms = new MemoryStream();
  15:          XmlSerializer xmlSerializer = new XmlSerializer(typeof(Cart));
  16:  
  17:          XmlTextWriter outputXML = new XmlTextWriter(ms, Encoding.UTF8);
  18:          xmlSerializer.Serialize(outputXML, cart);
  19:          outputXML.Flush();
  20:          ms.Seek(0, SeekOrigin.Begin);
  21:  
  22:          byte[] output = ms.ToArray();
  23:          outputXML.Close();
  24:          ms.Close();
  25:  
  26:          Response.ContentType = "application/octet-stream";
  27:          Response.Cache.SetCacheability(HttpCacheability.NoCache);
  28:          Response.AppendHeader("Content-Length", output.Length.ToString());
  29:          Response.AppendHeader("Content-Disposition", "attachment; filename=cart.xml; size=" + output.Length);
  30:          Response.AppendHeader("pragma", "no-cache");
  31:          Response.BinaryWrite(output);
  32:          Response.Flush();
  33:      }
  34:      catch (Exception ex)
  35:      {
  36:          Response.Clear();
  37:          Response.ContentType = "text/html";
  38:          Response.Write(ex.ToString());
  39:      }
  40:  }

In the first section through line 10, I retrieve the state object (cart) from Session and make sure it actually exists.  If it doesn't, I'll just send an error message back to the client.

In lines 14 through 24, I create the XmlSerializer to serialize the "cart" variable, and call Serialize to serialize the cart out to the MemoryStream object.  MemoryStream provides an in-memory Stream object that doesn't require you to write to files, etc. for stream actions.  Instead, it basically uses an internal byte array to store stream data.

I could have passed the HttpResponse.OutputStream directly to the XmlTextWriter's constructor, but I would lose the ability to buffer or give size information back to the client, so I just dump it all into a byte array.

Finally, in lines 26 through 32, I set up the Response object appropriately and call BinaryWrite to write the raw byte array out to the client.  I disable caching as I want the client to bypass any server or client cache and always download the latest Cart.  Also, I set the appropriate headers to direct the client to download instead of just view the XML.  If I didn't include the "Content-Disposition" header, most likely the client would simply view the XML in their browser.  That's done through the value "attachment".

I can also control what filename gets created on the client side with the "filename" value in the "Content-Disposition" header.  Without that, a dummy or blank filename would be shown.  Here's what it looks like after I click the "Download cart" button:

Notice that both the filename and file size are populated correctly.  I can save this serialized Cart object anywhere I like now as a perfect snapshot of our state object, ready to be played back.  Also, the "cart.xml" file doesn't exist anywhere on the server.  It only exists in server memory and is streamed directly to the client.

Playing it back

Now that I have a page that allows me to download my Cart from Session, I need a page where I can upload it back up and continue through the application.  This can be accomplished through the FileUpload control and some de-serialization magic.

Uploading the saved state

First, I'll need to add a couple of controls to my "Replay.aspx" page to handle the file upload and to initiate the replay:

<asp:FileUpload ID="fuCart" runat="server" />
<br />
<asp:Button ID="btnReplay" OnClick="btnReplay_Click" runat="server" Text="Replay" />

In the click handler, I need to read the file from the FileUpload control and deserialize that into a Cart object:

protected void btnReplay_Click(object sender, EventArgs e)
{
    XmlSerializer xmlSerializer = new XmlSerializer(typeof(Cart));
    Cart cart = (Cart)xmlSerializer.Deserialize(fuCart.PostedFile.InputStream);

    Session["Cart"] = cart;

    Response.Redirect("ShoppingCart.aspx");
}


Once the cart is uploaded and deserialized, I set the Session variable to the new cart instance and proceed directly to the ShoppingCart.aspx page.  I can verify that my Cart object has the exact same state as the one I saved earlier by examining it in the debugger:

I can see all of the IDs are correct, the number of items are correct, and all of the data values match the original cart saved out.  When I proceed to the cart page, my state will be exactly where it was when I originally downloaded the cart.


If one of our QA people encounters a defect, they can now save off their state and attach it to our defect tracking system, allowing us to replay exactly what their state was and reproduce the defect.  Record/playback is also perfect for regression test suites, which sometimes need to rely on specific inputs (instead of workflows) to reproduce defects.  Combining this ability with WatiN or Selenium can give me a very powerful testing tool.

When defects are found, we can download the state of the user when replaying their workflow is difficult, impossible, or we just don't know what happened.  Through replaying a user's exact state, we can quickly determine root causes for failures and create regression tests to ensure the defects do not surface again.
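The serialize/deserialize round trip at the heart of this technique can be sanity-checked outside of ASP.NET entirely.  Here's a minimal console-style sketch (repeating the Cart and Item classes so the snippet compiles standalone; the Record/Playback helper names are mine, not part of the app):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Item
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

public class Cart
{
    public Guid Id { get; set; }
    public string CustomerName { get; set; }

    [XmlArray(ElementName = "Items"), XmlArrayItem(ElementName = "Item")]
    public Item[] Items { get; set; }
}

public static class RoundTrip
{
    // The "record" half: serialize the state object to XML
    public static string Record(Cart cart)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Cart));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, cart);
            return writer.ToString();
        }
    }

    // The "playback" half: deserialize the XML back into a Cart
    public static Cart Playback(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Cart));
        using (StringReader reader = new StringReader(xml))
        {
            return (Cart)serializer.Deserialize(reader);
        }
    }
}
```

A regression test can then assert that a played-back cart matches the recorded one field for field, which is exactly the guarantee the QA workflow above depends on.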

Just make sure you take care to secure the "record" and "playback" pages appropriately.  You probably won't want that available to customers.