Wednesday, October 31, 2007

Bizarro-tive development

Anyone familiar with Superman also knows about Bizarro, a doppelganger of Superman.  Bizarro looks like Superman, but is opposite in every way.  Instead of saving people, he kills them.  Instead of eloquent speech, he talks like Tarzan.  He's not from Earth, but from Htrae (Earth backwards, clever).  And so on.

A colleague reminded me again of the challenges of explaining iterative development to a manager who had been living with waterfall for many years.  The timeboxing concept came through, but iterative and incremental, not so much.

We drew our original six-month schedule on the board and chopped it up into monthly iterations, saying we would deliver at the end of each month.  We should have been more specific on what "deliver" meant, because this was what the manager then suggested:

  • Iteration 1: Envisioning
  • Iteration 2: Planning
  • Iteration 3: Development
  • Iteration 4: Refactoring
  • Iteration 5: Testing
  • Iteration 6: Stabilization

Each iteration is timeboxed, which is good, but this is just slapping arbitrary exit points on waterfall.  Not iterative, but bizarro-tive development.  Once we explained further, this was the next suggestion:

  • Iteration 1
    • Week 1: Envisioning/Planning
    • Week 2: Development
    • Week 3: Refactoring
    • Week 4: Testing/Stabilization
  • Iteration 2 - lather, rinse, repeat

Still not exactly what we were talking about.  In true iterative development, most or all aspects happen every day, and at some points in the iteration, some activities will happen more than others.  But we're never doing just one aspect at a time.  Slapping phases into iterations isn't iterative development, but it's what Bizarro might do.

Coding responsibly

I just ordered Kent Beck's newest book, Implementation Patterns.  In the sample chapter online, there's a great quote at the end of the preface:

As a programmer you have been given time, talent, money, and opportunity. What will you do to make responsible use of these gifts?

That pretty much sums up why I started caring about the code I delivered when I started my career.  After seeing someone else's insane legacy codebase for the first time, I felt a need to learn to code responsibly.  Coding irresponsibly wastes time and money, and life's too short to waste on sloppy code.

By the way, if you register with InformIT, you get a 30% member discount that usually brings the price lower than Amazon.

Tuesday, October 30, 2007

Oslo = MDA + SOA

Yeah, yeah, too many acronyms.  One of the biggest challenges in large enterprises is getting all of the disparate systems and applications to talk to each other in a well-defined, agreed-upon manner.  Announced here and blogged here, it looks like Microsoft is hoping to fill some of the gaps other big players in the SOA world (like IBM) have already filled.  Here's a quote describing Oslo from Microsoft:

Rather than having models being imported and exported and generating code, the model is the application and that breaks down the silos.

Oslo looks to bring model-driven architecture and SOA together with WCF, BizTalk, Visual Studio and other Microsoft tools and technologies to provide a single Microsoft solution for connected and distributed systems.  For me this is good news because the other big players like IBM and Oracle are mostly or exclusively Java, so it will be nice to have an answer from the .NET side.

Thursday, October 25, 2007

RSpec gone wrong

I've seen some weird things in code comments, but with RSpec, you can take programming humor to a different level.  Don't let your customers see these, though.  Here are a few RSpec specifications gone completely wrong:

#this crap needs to be refactored. it makes no sense.
describe QueryFacade, "when querying for opportunities" do
  it "should be refactored" do
    flunk
  end
end

That was from Terry, here's another I found floating around somewhere:

describe "Buffalo Bill" do
  it "places the lotion in the basket" do
  end
end

That's a Silence of the Lambs reference, folks.  And finally:

describe "our insane customer Initech" do
  it "should stop asking for ridiculous features and make up their $%^*ing mind" do
  end
end

Waaaay too much free time...

Specifications versus validators

Joe posed a great question on my recent entity validation post:

I question the term Validator in relation to DDD.  Since the operation of the Validator seems to be a simple predicate based on business rule shouldn't the term Specification [Evans Pg227] be used instead?

On the surface, it would seem that validation performs actions similar to specifications, namely a set of boolean operations to determine whether an object matches or not.

For those unfamiliar with the Specification pattern, it provides a succinct yet powerful mechanism to match an object against a simple rule.  For example, let's look at an expanded version of the original Order class on that post:

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

A Specification class is fairly simple: it has only one method, which takes the entity as an argument:

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T entity);
}

I can then create concrete specifications based on user needs.  Let's say the business wants to find orders greater than a certain total, but the total can change.  Here's a specification for that:

public class OrderTotalSpec : ISpecification<Order>
{
    private readonly decimal _minTotal;

    public OrderTotalSpec(decimal minTotal)
    {
        _minTotal = minTotal;
    }

    public bool IsSatisfiedBy(Order entity)
    {
        return entity.Total >= _minTotal;
    }
}

Specs by themselves aren't that useful, but combined with the Composite pattern, their usefulness really shines:

public class AndSpec<T> : ISpecification<T>
{
    private readonly List<ISpecification<T>> _augends = new List<ISpecification<T>>();

    public AndSpec(ISpecification<T> augend1, ISpecification<T> augend2) 
    {
        _augends.Add(augend1);
        _augends.Add(augend2);
    }

    public AndSpec(IEnumerable<ISpecification<T>> augends)
    {
        _augends.AddRange(augends);
    }

    public void Add(ISpecification<T> augend)
    {
        _augends.Add(augend);
    }

    public bool IsSatisfiedBy(T entity)
    {
        bool isSatisfied = true;
        foreach (var augend in _augends)
        {
            isSatisfied &= augend.IsSatisfiedBy(entity);
        }
        return isSatisfied;
    }
}

Adding an "OrSpec" and a "NotSpec" allows me to compose some arbitrarily complex specifications, which would otherwise clutter up my repository if I had to make a single search method per combination:

OrderRepository repo = new OrderRepository();
var joesOrders = repo.FindBy(
    new AndSpec<Order>(
        new OrderTotalSpec(100.0m),
        new NameStartsWithSpec("Joe")
    ));
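For completeness, here's what those two combinators might look like; this is a sketch consistent with the AndSpec above, and the class and field names are my own:

```csharp
// "Or" combinator: satisfied when either inner specification is satisfied.
public class OrSpec<T> : ISpecification<T>
{
    private readonly ISpecification<T> _left;
    private readonly ISpecification<T> _right;

    public OrSpec(ISpecification<T> left, ISpecification<T> right)
    {
        _left = left;
        _right = right;
    }

    public bool IsSatisfiedBy(T entity)
    {
        return _left.IsSatisfiedBy(entity) || _right.IsSatisfiedBy(entity);
    }
}

// "Not" combinator: inverts the result of the inner specification.
public class NotSpec<T> : ISpecification<T>
{
    private readonly ISpecification<T> _inner;

    public NotSpec(ISpecification<T> inner)
    {
        _inner = inner;
    }

    public bool IsSatisfiedBy(T entity)
    {
        return !_inner.IsSatisfiedBy(entity);
    }
}
```

Because each combinator is itself an ISpecification<T>, specs nest to any depth, e.g. an AndSpec wrapping a NotSpec wrapping an OrSpec.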

Given the Specification pattern (fleshed out into excruciating detail), why can't I compose validation into a set of specifications?  Let's compare and contrast the two:

Specification

  • Matches a single aspect on a single entity
  • Performs positive matching (i.e., return true if it matches)
  • Executed against a repository or a collection
  • Can be composed into an arbitrarily complex search context, where a multitude of specifications compose one search context
  • "I'm looking for something"

Validator

  • Matches as many aspects as needed on a single entity
  • Performs negative matching (i.e., returns false if it matches)
  • Executed against a single entity
  • Is intentionally not composable; a single validator object represents a single validation context
  • "I'm validating this"

So although validation and specifications are doing similar boolean operations internally, they have very different contexts on which they are applied.  Keeping these separate ensures that your validation concerns don't bleed over into your searching concerns.
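To make the contrast concrete, here's a side-by-side sketch.  OrderTotalSpec is from this post, the validator is the kind described in the earlier entity validation post, and the in-memory orders list is my assumption:

```csharp
// Specification: "I'm looking for something" - applied across a collection.
var spec = new OrderTotalSpec(100.0m);
var bigOrders = orders.Where(o => spec.IsSatisfiedBy(o));

// Validator: "I'm validating this" - applied to one entity, many aspects.
var validator = new OrderPersistenceValidator();
bool isValid = validator.IsValid(order);
IEnumerable<string> brokenRules = validator.BrokenRules(order);
```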

Wednesday, October 24, 2007

Entity validation with visitors and extension methods

On the Yahoo ALT.NET group, an interesting conversation sprung up around the topic of validation.  Entity validation can be a tricky beast, as validation rules typically depend on the context of the operation (persistence, business rules, etc.).

In complex scenarios, validation usually winds up using the Visitor pattern, but that pattern can be slightly convoluted to use from client code.  With extension methods in C# 3.0, the Visitor pattern can be made a little easier.

Some simple validation

In our fictional e-commerce application, we have a simple Order object.  Right now, all it contains are an identifier and the customer's name that placed the order:

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

Nothing too fancy, but now the business owner comes along and requests some validation rules.  Orders need to have an ID and a customer to be valid for persistence.  That's not too hard, I can just add a couple of methods to the Order class to accomplish this.

The other requirement is to have a list of broken rules in case the object isn't valid, so the end user can fix any issues.  Here's what we came up with:

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }

    public bool IsValid()
    {
        return BrokenRules().Count() == 0;
    }

    public IEnumerable<string> BrokenRules()
    {
        if (Id < 0)
            yield return "Id cannot be less than 0.";

        if (string.IsNullOrEmpty(Customer))
            yield return "Must include a customer.";

        yield break;
    }
}

Still fairly simple, though I'm starting to bleed other concerns into my entity class, such as persistence validation.  I'd rather not have persistence concerns mixed in with my domain model, it should be another concern altogether.

Using validators

Right now I have one context for validation, but what happens when the business owner requests display validation?  In addition to that, my business owner now has a black list of customers she won't sell to, so now I need to have a black list validation, but that's really separate from display or persistence validation.  I don't want to keep adding these different validation rules to Order, as some rules are only valid in certain contexts.

One common solution is to use a validation class together with the Visitor pattern to validate arbitrary business/infrastructure rules.  First, I'll need to define a generic validation interface, as I have lots of entity classes that need validation (Order, Quote, Cart, etc.):

public interface IValidator<T>
{
    bool IsValid(T entity);
    IEnumerable<string> BrokenRules(T entity);
}

Some example validators might be "OrderPersistenceValidator : IValidator<Order>", or "CustomerBlacklistValidator : IValidator<Customer>", etc.  With this interface in place, I modify the Order class to use the Visitor pattern.  The Visitor will be the Validator, and the Visitee will be the entity class:

public interface IValidatable<T>
{
    bool Validate(IValidator<T> validator, out IEnumerable<string> brokenRules);
}

public class Order : IValidatable<Order>
{
    public int Id { get; set; }
    public string Customer { get; set; }

    public bool Validate(IValidator<Order> validator, out IEnumerable<string> brokenRules)
    {
        brokenRules = validator.BrokenRules(this);
        return validator.IsValid(this);
    }
}

I also created the "IValidatable" interface so I can keep track of what can be validated and what can't.  The original validation logic that was in Order is now pulled out to a separate class:

public class OrderPersistenceValidator : IValidator<Order>
{
    public bool IsValid(Order entity)
    {
        return BrokenRules(entity).Count() == 0;
    }

    public IEnumerable<string> BrokenRules(Order entity)
    {
        if (entity.Id < 0)
            yield return "Id cannot be less than 0.";

        if (string.IsNullOrEmpty(entity.Customer))
            yield return "Must include a customer.";

        yield break;
    }
}

This class can now be in a completely different namespace or assembly, and now my validation logic is completely separate from my entities.

Extension method mixins

Client code is a little ugly with the Visitor pattern:

Order order = new Order();
OrderPersistenceValidator validator = new OrderPersistenceValidator();

IEnumerable<string> brokenRules;
bool isValid = order.Validate(validator, out brokenRules);

It still seems a little strange to have to know about the correct validator to use.  Elton wrote about a nice trick with Visitor and extension methods that I could use here.  I can use an extension method for the Order type to wrap the creation of the validator class:

public static bool ValidatePersistence(this Order entity, out IEnumerable<string> brokenRules)
{
    IValidator<Order> validator = new OrderPersistenceValidator();

    return entity.Validate(validator, out brokenRules);
}

Now my client code is a little more bearable:

Order order = new Order();

IEnumerable<string> brokenRules;
bool isValid = order.ValidatePersistence(out brokenRules);

My Order class doesn't have any persistence validation logic, but with extension methods, I can make the client code unaware of which specific Validation class it needs.

A generic solution

Taking this one step further, I can use a Registry to register validators based on types, and create a more generic extension method that relies on constraints:

public static class Validator
{
    private static Dictionary<Type, object> _validators = new Dictionary<Type, object>();

    public static void RegisterValidatorFor<T>(T entity, IValidator<T> validator)
        where T : IValidatable<T>
    {
        _validators.Add(entity.GetType(), validator);
    }

    public static IValidator<T> GetValidatorFor<T>(T entity)
        where T : IValidatable<T>
    {
        return _validators[entity.GetType()] as IValidator<T>;
    }

    public static bool Validate<T>(this T entity, out IEnumerable<string> brokenRules)
        where T : IValidatable<T>
    {
        IValidator<T> validator = Validator.GetValidatorFor(entity);

        return entity.Validate(validator, out brokenRules);
    }
}

Now I can use the extension method on any type that implements IValidatable<T>, including my Order, Customer, and Quote classes.  In my app startup code, I'll register all the appropriate validators needed.  If types use more than one validator, I can modify my registry to include some extra location information.  Typically, I'll keep all of this information in my IoC container so it can all get wired up automatically.
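As a rough sketch, the startup wiring and client code might look like this.  Order and OrderPersistenceValidator are from this post; the Quote registration and QuotePersistenceValidator are hypothetical stand-ins:

```csharp
// App startup: register one validator per entity type.
Validator.RegisterValidatorFor(new Order(), new OrderPersistenceValidator());
Validator.RegisterValidatorFor(new Quote(), new QuotePersistenceValidator()); // hypothetical

// Client code: no knowledge of which concrete validator applies.
Order order = new Order { Id = -1, Customer = "Joe" };

IEnumerable<string> brokenRules;
bool isValid = order.Validate(out brokenRules);
// isValid is false; brokenRules contains "Id cannot be less than 0."
```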

Visitor patterns are useful when they're really needed, as in the case of entity validation, but can be overkill sometimes.  With extension methods in C# 3.0, I can remove some of the difficulties that Visitor pattern introduces.

Friday, October 19, 2007

Dependency Breaking Techniques: Inline Static Class

Oftentimes I run into a class that has a dependency not on a Singleton, but on a static class.  When refactoring away from a Singleton, a common approach is to use Inline Singleton.  With static classes, a slightly different approach needs to be taken, because client code isn't working with an instance of a type, but rather with static methods on the type itself.

Dependency breaking techniques are used to get legacy code under test.  The goal isn't necessarily to break all client dependencies out, but just the one I'm modifying to get under test.  I'm not interested in changing any of the other clients that may use this static class.  Since nothing is under test, it's too risky to make non-backwards compatible changes.

The pattern

Code needs access to a method or property but doesn't need a global point of access to it.

Move the static class's features to instance methods of that type.

Motivation

Helper methods tend to congregate in static classes with static methods.  Over time, the static class's responsibility grows until it becomes a dumping ground for any behavior that seems relevant to its name.  The name of the static class is usually suffixed with "Helper", "Manager", "Utility", or other generic and obtuse names.

Eventually, this static class will have opaque dependencies of its own, where callers don't know what's happening behind the scenes when those helper methods are called.  This can wreak havoc with unit testing, especially when trying to add tests to legacy code.

Mechanics

  1. Find the static method in the calling class and use Extract Interface to extract an interface that contains only the static method being called in the client code.
  2. Make the static class explicitly implement the new extracted interface.  The static class should have two methods with identical signatures, one a static method, one an explicitly implemented interface method.
  3. Use Move Method to move the logic from the static method to the instance method.
  4. Use Hide Delegate to delegate the static method calls to the instance method instead.
  5. Compile and test.
  6. Modify Client code using the static Operation method to instantiate an IStatic instance and call the interface Operation method.
  7. Compile and test.
  8. Use Extract Parameter to pass in the IStatic instance to the Client method.  If IStatic is a primal dependency, extract the parameter to a constructor argument and set the variable to a local field.
  9. Compile and test.

Example

In our e-commerce system, one of the pages the user is presented with is a payments page.  It is in this page that the user decides how they want to pay for their order, whether it's credit card, invoice, financing, etc.

Not all payment types should be displayed for all carts, and there is some complex business logic that determines who sees what payment types.  There are several payment filtering strategies to encapsulate this business logic, and one type uses the user's profile to filter the payments.  Here's the strategy class:

public class AccountFilter
{
    private readonly string[] _paymentTypes;

    public AccountFilter(string[] paymentTypes)
    {
        _paymentTypes = paymentTypes;
    }

    public void AddPaymentOptions(ShoppingCart cart)
    {
        IPayment payment = ProfileManager.FindPaymentByType(_paymentTypes);

        if ((payment != null) &&
            (! string.IsNullOrEmpty(cart.AccountNumber)))
        {
            cart.PaymentFilters.Add(payment);
        }
    }

}

The business has a new requirement for us: Payment types should not be filtered using the user's profile for existing quotes.  In our system, customers can save their cart as a "quote", so that the unit cost information can be "locked" for a set amount of time.  Enabling this change requires us to change the AccountFilter class.

Being a legacy code system, the AccountFilter class has no tests defined on it.  It's our goal to make the change and add tests to verify the new requirements, and that's all.  We're not trying to add 100% unit test coverage for the AccountFilter class, just to test the changes we're making.

Not knowing if I can unit test this class, the easiest way to see if it's testable is to try using it in a test.  Here's the behavior I want, in a unit test:

[TestMethod]
public void Should_not_add_filter_when_basket_is_a_quote()
{
    ShoppingCart cart = new ShoppingCart();
    cart.IsQuote = true;
    cart.AccountNumber = "123ABCD";

    AccountFilter filter = new AccountFilter(new string[] { "CRD", "INV" });
    filter.AddPaymentOptions(cart);

    Assert.AreEqual(0, cart.PaymentFilters.Count);
}

I try running this test, and it fails spectacularly.  The error I get is along the lines of "HttpContext.Current is null", so something is dependent on the ASP.NET runtime being up.

Looking at the AddPaymentOptions method of the AccountFilter class, I notice the call to the static method FindPaymentByType on the ProfileManager class.  That method is extremely hideous, with hard dependencies on HttpContext.Current, a web service, and a database.  Instead of focusing on breaking those dependencies out, I can apply Inline Static Class and break out AddPaymentOptions' dependency on ProfileManager.  The end goal is to break the dependencies of the class I'm trying to test, not all the sub-sub-sub dependencies that may be around.

Step 1:

ProfileManager is a huge class, with about 100 different methods.  The only one I'm interested in is the FindPaymentByType method, so I'll create an IProfileManager interface that includes only that method, matching signature and name.

public interface IProfileManager
{
    IPayment FindPaymentByType(string[] paymentTypes);
}

Step 2:

Now that I have my IProfileManager, I'll make the static class implement the new interface.  Note that the static class itself isn't marked "static", only its methods.  Additionally, I'll need to explicitly implement that method so it doesn't conflict with the existing static method.

public class ProfileManager : IProfileManager
{
    public static IPayment FindPaymentByType(string[] paymentTypes)
    {
        // Hit web service for widget
        // Query database based on widget
        // Return database object
    }

    IPayment IProfileManager.FindPaymentByType(string[] paymentTypes)
    {
        return null;
    }
}

I compile, and everything looks good.  Notice that I didn't add any access modifiers to the new IProfileManager method, and that the method name is prefixed with the interface name.  That's how the interface is implemented explicitly, which is why there are no compile errors even though the two methods have the exact same signature.

Step 3, 4:

Now I can move the existing implementation to the new interface method, and use Hide Delegate to change the static method to call the new interface method.  I don't want to introduce duplication and simply copy over the implementation, so Hide Delegate allows me to retain backwards compatibility and save a lot of work of changing all clients to use the new method.

public class ProfileManager : IProfileManager
{
    public static IPayment FindPaymentByType(string[] paymentTypes)
    {
        IProfileManager profileManager = new ProfileManager();
        return profileManager.FindPaymentByType(paymentTypes);
    }

    IPayment IProfileManager.FindPaymentByType(string[] paymentTypes)
    {
        // Hit web service for widget
        // Query database based on widget
        // Return database object
    }
}

Now my static method delegates to the instance methods, and no other clients are affected by this change.  Again, it's not my goal to fix every client dependency on this static method, only the one I'm changing.

I compile and test, but my initial unit test is failing as it's still calling the static method.

Step 6:

Now that I have an interface to depend upon instead of a static method, I'll change the client code to use the interface instead of the static method.  It looks much like the code inside the Hide Delegate of the static method:

public void AddPaymentOptions(ShoppingCart cart)
{
    IProfileManager profileManager = new ProfileManager();
    IPayment payment = profileManager.FindPaymentByType(_paymentTypes);

    if ((payment != null) &&
        (! string.IsNullOrEmpty(cart.AccountNumber)))
    {
        cart.PaymentFilters.Add(payment);
    }
}

I create a variable of type IProfileManager, instantiate it as a ProfileManager, and use the IProfileManager.FindPaymentByType method instead of the static method.  I compile and test, but my unit test still fails, as I haven't fully extracted the dependency on the ProfileManager; that will be taken care of shortly.

Step 8:

I have two options: I can either pass in the IProfileManager as a method parameter, or pass it in through the constructor.  I prefer to pass dependencies through constructors over method parameters, because I like a clear separation between queries, commands, and dependencies.  Additionally, if I'm using an IoC container, I can wire up these dependencies a little more easily.  I'll use Preserve Signatures [Feathers 312] and Chain Constructors to keep the original constructor, as there are still quite a few clients of my AccountFilter class that I don't want to change:

public class AccountFilter
{
    private readonly string[] _paymentTypes;
    private readonly IProfileManager _profileManager;

    public AccountFilter(string[] paymentTypes)
        : this(paymentTypes, new ProfileManager()) { }

    public AccountFilter(string[] paymentTypes, IProfileManager profileManager)
    {
        _paymentTypes = paymentTypes;
        _profileManager = profileManager;
    }

    public void AddPaymentOptions(ShoppingCart cart)
    {
        IPayment payment = _profileManager.FindPaymentByType(_paymentTypes);

        if ((payment != null) &&
            (!string.IsNullOrEmpty(cart.AccountNumber)))
        {
            cart.PaymentFilters.Add(payment);
        }
    }
}

So I did a few things here:

  • Added a new constructor that takes an IProfileManager, and created a backing field for it
  • Made the original constructor call the new one, passing in a new ProfileManager, which is what existing code would do
  • Made the AddPaymentOptions method use the field instead of a local variable

What this code allows me to do is pass in a different IProfileManager instance in my test methods to verify behavior.  I don't really care what the ProfileManager does; all I care about is what it tells me.  Somewhere down the line, I have an integration test that puts the two together, but until then, my unit test will suffice.  Here's my final unit test, which fails for the reason I want it to fail: the assertion fails:

[TestMethod]
public void Should_not_add_filter_when_basket_is_a_quote()
{
    ShoppingCart cart = new ShoppingCart();
    cart.IsQuote = true;
    cart.AccountNumber = "123ABCD";

    string[] paymentTypes = new string[] { "CRD", "INV" };

    MockRepository repo = new MockRepository();

    IProfileManager profileManager = repo.CreateMock<IProfileManager>();

    using (repo.Record())
    {
        Expect.Call(profileManager.FindPaymentByType(paymentTypes))
            .Return(null);
    }

    using (repo.Playback())
    {
        AccountFilter filter = new AccountFilter(paymentTypes, profileManager);
        filter.AddPaymentOptions(cart);
    }

    Assert.AreEqual(0, cart.PaymentFilters.Count);
}

I use RhinoMocks to pass in a mock IProfileManager, and test-driven development to add the functionality desired.  I broke the dependency of the class under test away from the ProfileManager static method, and was able to preserve existing functionality.  By preserving existing interfaces and functionality, I can eliminate much of the risk involved in changing legacy code.  I got my changes in, was able to test them, and kept out of the rabbit-hole that a larger refactoring would have thrown me into.

Smart Tag shortcut key

Ever notice that little red bar that shows up sometimes while coding in Visual Studio 2005?  It shows up after making certain changes to code, such as renaming methods, fields, or types that need to be imported:

That little red bar (did you miss it?  Look closer!) brings up a helpful context menu that has some nice refactorings/editing commands.  Unfortunately, to get the menu to show up with your mouse, you have to bring the mouse down from above onto that little bar that's about 10 pixels wide.

That's extremely annoying, and I've missed that little bar many times trying to hit it.  Couple that with the fact that I don't like to use the mouse while coding, and it's even more annoying.

Luckily, VS 2005 has a nice shortcut combination to bring that menu up: Ctrl+.  After that combo, the menu shows up:

Now you can just hit "Enter" and it performs the highlighted command.  For ReSharper users, "Ctrl+." is to Visual Studio as "Alt+Enter" is to ReSharper.  I'm probably the last person to know this shortcut, but it's nice for the few times I prefer Microsoft's helper menu over ReSharper's.

Some Domain-Driven Design resources

Eric Evans' book may be the definitive resource, but there's quite a lot of supporting information on print and on the web.  For those looking to start looking at what all the DDD buzz is about, or just wanted to catch the next acronym wave, here's a list of resources for domain-driven design:

Personally, I'd start with the InfoQ article, then flesh that out with the rest of the books.  For most other questions, the Yahoo group is a great resource, as are plain old Google searches.

Thursday, October 18, 2007

Myth of the isolated production fix

While in WCF training this week, I heard once again the argument for why config files are great: your IT staff can change them without a recompilation.  Sounds great, right?  But what exactly does this imply?

Sure, there's no recompilation, but do the changes get tested?  I can't imagine someone modifying the production environment without testing those changes first.  If something goes wrong, whose fault is it?  It's the application team's responsibility to ensure a tested, reliable application, but they're not the ones making this change.

How exactly does this change happen?  Does the IT staff manually edit the configuration files?  What happens if they make a mistake?

Flirting with disaster

Statements like that, "change without a recompilation", absolutely make me cringe, as they usually imply that these changes are small, isolated, and easy, and therefore don't need to be tested.

The problem is that making configuration changes can sometimes lead to unexpected behavior changes because something we didn't anticipate is dependent in some way on the configuration file.  Even if the development team is consulted about these changes, how can they say with confidence the change is small or isolated without actually testing those changes?

Every change, no matter how small, that could potentially affect the behavior of the system, needs to be tested in a variety of systems and environments.  Automated deployments to, and testing of, clones of production environments gives the team confidence in their changes.  Anything else is a wild and potentially hazardous guess.

A responsible SCM process

The first step in any reasonable SCM process is continuous integration.  Until the build is repeatable, automated, and tested, the team can't have much confidence in the reliability or quality of a build.  Even small production fixes should start at this phase, as there's never any guarantee a configuration change doesn't affect business logic without regression testing.

After a CI build and possibly a nightly deployment, we typically have a set of gated environments where builds are put through more and more tests until they are certified as production-ready.  Only builds labeled as production-ready are allowed to be promoted to the production environment.  Although every build in CI processes might be labeled "production-ready", sometimes longer regression tests need to occur before certifying a build ready for the next environment.

But why are small changes allowed to subvert this process?  The only time this process should be allowed to be subverted is in the event of a critical failure, and the business is losing money because of downtime.  If it takes three weeks for a build to move through the pipeline, that's three weeks where the business is losing money.  The only time we compromise on the quality of the build is when we absolutely have to push it out right away (i.e., in hours).

That's not to say testing doesn't happen, but it happens in a targeted area.  After the hotfix is pushed out, we still go through the gated promotion process, just to make sure we didn't miss anything.  We might even throw out the changes in source control for more sound fixes.  That hotfix is still considered suspect and isn't treated as production-ready, but temporary.

Refuse to compromise values

Hotfixes are a rare occurrence and must go through a promotion process themselves, where the severity of the bug must have a certain level of negative effect on the business.  The temptation to push lower-severity bugs out through hotfixes grows after a few successful hotfix deployments.  The business asks "well, if it was that easy, why don't we do that all the time?".  This is just playing production roulette, and eventually it will catch up to you.

I don't feel terrible about occasionally compromising on practices when the urgency of a hotfix requires it, but it's important for the team never to compromise on its core values.  When hotfix patches are demanded on a regular basis, the team should learn to say "no", push back, and volunteer a more responsible approach.

Monday, October 15, 2007

Ruby-style loops in C# 3.0

Ruby has a pretty interesting (and succinct) way of looping through a set of numbers:

5.times do |i|
  print i, " "
end

The result of executing this Ruby block is:

0 1 2 3 4

I really love the readability and conciseness of this syntax: just enough to see what's going on, without a lot of extra stuff to get in the way.  In addition to the "times" method, there are also "upto" and "downto" methods for other looping scenarios.

Some sleight of hand

With extension methods and lambda expressions in C# 3.0, this form of loop syntax is pretty easy to recreate.  It's not really a great idea, but it's at least an example of what these new constructs can do.

So how can we create this syntax in C#?  The "times" method is straightforward, that can just be an extension method for ints:

public static void Times(this int count)

Now this method shows up in IntelliSense (when I add the appropriate "using" directive).

Now that the Times method shows up for ints, we can focus on the Do block.

Adding the loop behavior

Although I can't declare blocks in C# 3.0, lambda expressions are roughly equivalent.  To take advantage of lambda expressions, I want to give the behavior to the Times method in the form of a delegate.  By declaring a delegate parameter type on a method, I'm able to use lambda expressions when calling that method.

I didn't like passing the lambda directly to the "Times" method, so I created an interface to encapsulate a loop iteration, faking the Ruby "do" block with methods:

public interface ILoopIterator
{
    void Do(Action action);
    void Do(Action<int> action);
}

Now I can return an ILoopIterator from the Times method instead of just "void".  The final part is to create an "ILoopIterator" implementation that will do the actual looping:

private class LoopIterator : ILoopIterator
{
    private readonly int _start, _end;

    public LoopIterator(int count)
    {
        _start = 0;
        _end = count - 1;
    }

    public LoopIterator(int start, int end)
    {
        _start = start;
        _end = end;
    }  

    public void Do(Action action)
    {
        for (int i = _start; i <= _end; i++)
        {
            action();
        }
    }

    public void Do(Action<int> action)
    {
        for (int i = _start; i <= _end; i++)
        {
            action(i);
        }
    }
}

public static ILoopIterator Times(this int count)
{
    return new LoopIterator(count);
}

I let the "LoopIterator" class encapsulate the behavior of performing the underlying "for" loop and calling back to the Action passed in as a lambda.  It makes more sense when you see some client code calling the Times method:

int sum = 0;
5.Times().Do( i => 
    sum += i
);
Assert.AreEqual(10, sum);

That looks pretty similar to the Ruby version (though not quite as nice), and it works.  Compare this to a normal loop in C#:

int sum = 0;
for (int i = 0; i < 5; i++)
{
    sum += i;
}
Assert.AreEqual(10, sum);

Although the "for" syntax is functional and about the same number of lines of code, the Ruby version is definitely more readable.  Adding additional UpTo and DownTo methods would be straightforward with additional ILoopIterator implementations.
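As a sketch of that last idea: UpTo can simply reuse the LoopIterator(start, end) constructor already shown above, while DownTo needs a counterpart that counts backwards.  The ReverseLoopIterator below is my own hypothetical addition, not part of the original example:

```csharp
// Sketch only, building on the ILoopIterator/LoopIterator types defined above.
public static ILoopIterator UpTo(this int start, int end)
{
    // Counting up is exactly what LoopIterator(start, end) already does.
    return new LoopIterator(start, end);
}

public static ILoopIterator DownTo(this int start, int end)
{
    return new ReverseLoopIterator(start, end);
}

// Hypothetical reversed implementation for DownTo.
private class ReverseLoopIterator : ILoopIterator
{
    private readonly int _start, _end;

    public ReverseLoopIterator(int start, int end)
    {
        _start = start;
        _end = end;
    }

    public void Do(Action action)
    {
        for (int i = _start; i >= _end; i--)
        {
            action();
        }
    }

    public void Do(Action<int> action)
    {
        for (int i = _start; i >= _end; i--)
        {
            action(i);
        }
    }
}
```

With that in place, "1.UpTo(5).Do(i => sum += i)" mirrors Ruby's "1.upto(5)" and leaves sum at 15.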

Feature abuse

Yeah, I know this is more than a mild case of feature abuse, but it was interesting to see the differences between similar operations in C# 3.0 and Ruby.  Although it's possible to do these similar operations, with similar names, this example highlights how much the syntax elements of the static CLR languages can get in the way of a readable API, and how much Ruby stays out of the way.

Saturday, October 13, 2007

Fluent interface endgame

In a conversation on BDD on the altnetconf message board, the topic switched to language-oriented syntax in the CLR, to which Scott notes:

When IronRuby gets here, I think we should at least stop and consider the value to the community of encouraging them to try do language-oriented specification with tools and programming languages that don't quite hit the mark.

Or maybe we should be turning our focus to Boo or IronPython for achieving the solubility in specification code that we can't have in C# and VB.

I've felt this for quite a while now, though I have reservations about focusing on Boo's Specter or IronPython for the time being.  My response was:

I think there's an assumption among anyone developing language-oriented tools in CLR using extension methods, fluent interfaces, etc. that it's all a poor man's substitute for the clearer syntax that Ruby inherently provides. Once IronRuby hits some sort of beta/RC status, I don't see much point at all continuing to try to wrestle a pig into a yoke while the farmers laugh "use an ox you moron".

I do like the readability the Boo macros provide, but I'm always reminded of that scene from King of the Hill when Hank meets his new neighbor Khan (who is from Laos) for the first time:

Hank: So are you Chinese or Japanese?
Khan: No, we are Laotian.
Bill: The ocean? What ocean?
Khan: From Laos, stupid! It's a landlocked country in Southeast Asia
between Vietnam and Thailand, population approximately 4.7 million!
Hank: ... So are you Chinese or Japanese?
Khan: D'oh

As someone who wants to use BDD more, Specter looks like a promising CLR tool, but as a proponent of BDD, Boo might set the bar too high when there's only Chinese or Japanese in the greater community. Then again, if curly-braces dilute a concept that is supposed to lie very close to the conversations BDD starts with, maybe it's better to draw a line so the value isn't diminished.

I don't really have any answers yet. Make it pure or make it accessible, tough choice...

Again, I don't have anything against Boo, but I just don't know how successful these efforts will be once IronRuby comes out.  For example, why should I care about the Boo Build System when I can use Rake in Visual Studio when IronRuby ships?

Until these languages are officially supported by Microsoft, they are for the most part eliminated as an option for one reason or another, whether it be company policy, lack of adoption, or just apathy for learning a new language.  Once they are supported, I will happily ditch my fluent interface and extension methods for a true natural-language friendly programming language.

Friday, October 12, 2007

Double-edged sword of InternalsVisibleTo

I've had some conversations with both Joe and Elton lately about the InternalsVisibleTo attribute.  From the documentation, the assembly-level InternalsVisibleTo attribute:

Specifies that all nonpublic types in an assembly are visible to another assembly.

This attribute was introduced in .NET 2.0, and allows you to specify other assemblies that can see all types and members marked "internal".  In practice, both assemblies are signed with the same key, and you specify that the unit test assembly can see the internals of the assembly under test.

  • MyProject.Core -> has the "InternalsVisibleTo" attribute defined in the AssemblyInfo.cs file, pointing at the below assembly
  • MyProject.Core.Tests -> is signed with the same public key as the Core assembly

Notice that the "Core" project knows about the "Tests" project, but the actual project dependency is the other way around.  It's definitely better than using reflection to access private members for testing, but there are some definite pros and cons with this approach.
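The wiring itself is a single assembly-level attribute.  A minimal sketch using the example project names above (unsigned assemblies need only the assembly name; strong-named assemblies must append the full "PublicKey=" blob, omitted here):

```csharp
// In MyProject.Core's AssemblyInfo.cs: expose internals to the test assembly.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyProject.Core.Tests")]
```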

Pros

  • Allows your test libraries to access internal classes and methods for additional testing and coverage
  • Keeps your public API limited to what you want to publish
  • Provides greater flexibility for internal refactoring and backwards compatibility
  • Reduces the surface area of your public API

Cons

  • Easily abused, so things usually marked "private" are now marked "internal"
  • Potential loss of encapsulation
  • Decision about what should be public could be wrong
  • Essentially two levels of "public" visibility that have to be managed
  • Enforces bi-directional dependencies between assemblies

Personally, I always felt like marking something "internal" was cheating just a little bit, and I have trouble deciding when to make something internal or not.  But unless you're delivering a public, published and documented API as part of your product, using the "InternalsVisibleTo" attribute would probably be overkill.

However, if you are delivering an API, you should consider using this attribute to keep a high level of coverage and reduce the surface area of the API for your customers.  You could start by making everything "internal", then shape the public API based on specific use cases.

Dialing up quality

Quality is not a light switch; it can't be flipped on overnight, or even in six months.  Although the term "quality" differs from person to person, I rather like James Shore's description of Quality With a Name.  He defines quality as having a great design, and a great design as one that is easy to change.

Furthermore, he concludes:

Great designs:

  • Are easily modified by the people who most frequently work within them,
  • Easily support unexpected changes,
  • Are easy to modify and maintain,
  • and Prove their value by becoming steadily easier to modify over years of changes and upgrades. 

Now that your team agrees on what great design is, reality sinks in when they go back to the monolithic flying-spaghetti-code-monster system and see how far away they are from great design.

Great design and high quality can be achieved, but it takes many individual victories to reach the final goal.  A team can dial up quality, one principle/practice at a time, until the team and the system are one smoothly running machine.

Low hanging fruit

So how do we decide where to begin?  The easiest wins come from the low-hanging fruit: pain points that affect developers every day.  Here are some common pain points/smells I run into:

  • Everyone is afraid to get the latest version
  • "It works on my machine" is your team's mantra
  • Every source code file has at least 3 different coding styles
  • It takes forever to reproduce a defect locally
  • No one knows if the defect is still fixed next week, month, or six months from now

And the list can go on and on.  Once pain points are found, take the top pain point and set a goal to tackle it in the next week or month.  Then schedule a brown-bag lunch to show the problem, how you tackled it, and the principles that led you to these actions.

Show the value

When I want to correct some of these quality issues, I can't tackle the issue in one fell swoop.  Often new ideas have to be introduced a little at a time, where I prove the value as I go.

I can't force unit tests or pair programming on a team if no one sees the value.  If I try it out and show the value, team members will start to come on board.

A plan of action might be:

  1. Automate the build locally
  2. Automate the build of the latest version on a build server
  3. Remove all warnings
  4. Add some automated tests to reproduce a defect
  5. Get these automated tests running with the nightly build
  6. Introduce unit tests on new code, running with the nightly build
  7. Introduce some key FxCop rules where we're seeing the most varying coding styles
  8. Gradually introduce other FxCop rules to hit other pain points

Once initial buy-in for values like "feedback" and "maintainability" happens in your team, it's that much easier to introduce other practices that enforce those key values.  Perfect quality is never achieved, but as my dentist told me, excellence is the pursuit of perfection.

Baby steps to excellence

It's easy to get discouraged trying to get 100% test coverage on a legacy app.  A better intermediate step is to get 80% coverage on the class or classes that have the most defects.  Lessons learned from that experience can then be applied to the rest of the app.  But if we tried to tackle the entire system all at once, we wouldn't get far enough to realize the value, and the team would discard the value as impossible to achieve.

By taking small steps toward realizing our goals, dialing up quality one step at a time, we can create a pattern of success and take the system to a level of quality that otherwise would not have been imagined possible.

Thursday, October 11, 2007

ALT.NET-itis

Did anyone else that attended ALT.NET feel under the weather this week?  I was fine before the conference, then felt like crap afterwards.

I have a feeling that it was because of the germ-sponge getting passed around at the closing session:

Martin holding the orange plague-dough

It was orange at the beginning, but quickly took a brown hue as it went around the room.  I'll bring some rubber gloves next time...

Wednesday, October 10, 2007

Subscribing to your Google Calendar in Outlook 2007

I've been in love with Google Calendar for a long time now.  I have several calendars I'm viewing at the same time, including my personal calendar, UT football schedule, and a few others.  I always hated having to view two calendars at work, one for my work schedule and one for my Google calendars.

Scott Hanselman mentioned that he views multiple external calendars inside Outlook 2007.  Several of the external calendars are Google calendars.  Outlook 2007 supports any ICAL source, which most online calendar applications provide.  It's easy to set up, and the instructions can be found here:

Subscribe to your Google Calendar 

Making your Outlook calendar show up in Google calendar is a bit more tricky, and I haven't found anything that does actual synchronization and not just import/export.

Now I have all of my calendars in one convenient location, with each calendar color coded by its source.  However, I found I have to think of more creative excuses for why I forget things on my wife's calendar...

Tuesday, October 9, 2007

Are Story Runners appropriate?

Scott recently voiced his opinion on the validity of story runners (i.e. the xBehave tools) in an agile shop.  First, let me say that I sincerely appreciate the passion Scott has for BDD, and it's that passion that will drive the community forward.  Second, in recognizing that passion, I won't respond to comments about technology fetishes and code-sturbation for now, but I do honestly understand the concerns behind them.

Scott sees a tool in search of a problem, and in completely disagreeing with the problem it's trying to solve, questions the motivation of those developing the tool, which he sees as seemingly selfish and egotistical reasons.  From that point of view, again, I completely understand Scott's concerns, and luckily I have a Bellwarese translator to get to the heart of those concerns.

I did feel it unfortunate that the BDD conversation revolved around tools.  BDD is far too nascent to argue tooling; instead, the discussion should have focused on getting the language, values, and concepts right.

So Scott's core question is:

Are Story Runners appropriate for executable requirements?

As an aside, not once in my experience have I had a BA come up to me and say "hey, while you're delivering this, it would be great if you had some automated tests for it too".  I've never had anyone from the business ask me to use any kind of automated testing tool, such as NUnit, FitNesse, NSpec, or NBehave.  These tools will always be pushed by geeks, as it is the geeks that see the value of these tools first.

For me personally, capturing stories in code was never about traceability.  Specifically, the tangible benefits Joe and I saw were:

  • Providing a more complete description of the behavior of the system
    • Unit tests too granular
    • Even specifications can be difficult to organize
  • Stories provide a better overall, macro view of the system
  • Executable stories remind us of our final goal
    • For us, the final goal isn't the unit tests, but that the story is satisfied.
    • Capturing in code helps direct us towards that final goal
  • Executable stories can help capture the conversation
  • Executable stories can help shape our domain model and infuse the ubiquitous language into the system

The more appropriate venue for our discussion was Jeremy's topic Sunday on executable requirements.  Stories are part of BDD, as we can all agree, but should stories be used as executable requirements?

The result from the discussion was "let's see, as nothing else has worked great so far".  Start with stories, end with stories.  Stories -> Scenarios or Aspects -> Specifications (NSpec) -> Unit tests (NUnit) -> Integration Tests (FitNesse/NUnit) -> Functional/Acceptance Tests (FitNesse/NBehave).

I do appreciate this dialogue, as only through debate with the truly passionate can clarity be achieved.

p.s. please don't suggest exploding laptops, we've had enough problems with that here...

Monday, October 8, 2007

ALT.NET Impressions

The ALT.NET Conference is over, and I'm exhausted.  I've never been to a conference that consistently challenged my assumptions about software development.  The amount of dialogue and debate that occurred was quite staggering, considering that most involved were birds of a feather, and the possibility of echo-chamber issues would seem to be fairly high.

Open Space Format - Day 1

The conference was run in an Open Space format, which I've never been involved with before.  No agenda was set before the first day, and no one was obligated to stay for an entire discussion.  The conference started with a few ground rules and an explanation into Open Spaces, along with a short roundtable about the direction and purpose of the ALT.NET conference.

Between five and seven chairs were placed in the middle of the room.  If you wanted to address the audience, you got up, sat in the chair, and said your piece.  If you sat down in the last available chair, someone else on the panel had to step down.  One chair was always available for someone to claim, and no one could talk from the audience.  This provided a great continuous discussion in which anyone in the room could participate.  Great questions and discussions centered around "What is ALT.NET?", "Is ALT.NET divisive?", "Is ALT.NET negative?", and many others.

After this initial dialog, we proceeded to create the agenda.  Those who wanted to talk about something wrote it down on one of the many post-it note pads scattered around the room, walked to the center of the room, and told everyone what they wanted to convene about.  Then they would put their post-it on an open space in the schedule at the front of the room.  A long line formed of others proposing discussions, and each put their post-its in an open slot in the schedule.  When the spaces were filled, these post-its were put below, where they could be sorted later.  After all those who wanted to convene a discussion were through, all audience members were invited up to the schedule to initial the post-it notes they found interesting.  There were around six rooms available for discussions during each session, and all were filled by the end of the night.

Quite frankly, it was astonishing that we went from no agenda to a very compelling one in about an hour with little or no planning or contention.  But then again, this was my first Open Space conference.

Day 2

The next morning, the schedule had gone through an iteration where many topics were consolidated, some were moved and some were removed based on the voting the previous night.  Scott Guthrie's MVC presentation was pushed back to accommodate the most audience members.  No one needed to commit to any one topic per session, as they could use their own two feet to go to a different discussion.

First Session - Spreading passion and DSL's

During the first session, I mostly attended a discussion concerning the problems of spreading passion and learning throughout the organization.  Jean-Paul had some insight on inspiring passion, while Scott Hanselman mostly talked about making ALT.NET tools accessible.  Dave Laribee talked about the importance of instilling values, but there were still a lot of negative vibes around the Morts.  Morts shouldn't be shunned or disdained; they should be embraced and included.  The rest of the discussion concerned how to include Mort in the ALT.NET community.

Next, I headed to the DSL conversation that had folks like Martin Fowler, Scott Guthrie, Jeremy Miller and others.  I only caught the tail end of the conversation, but Martin talked a little bit about the difficulties of fluent interfaces, the reasons behind them (no one knows yacc), and Scott mentioned that debugger visualizers can work around many of the pains of fluent interfaces.  Fluent interfaces (as I ran into with NBehave) run into the issue where a continuous chain of method calls, no matter how you format it in the code, still turns into one line of executing code.  If something fails and throws an exception, you don't know what line it happens at and you can't step into a specific method call.  You have to step in and out of each fluent interface method call to get to whatever problem you have somewhere down the chain.

Martin again pointed out that this wouldn't really be an issue if we just used true external DSLs and used tools like yacc or Boo to create them.  Once you do that, however, you do lose much of the debugging support from Visual Studio.

Second Session - Intro to Boo and the BDD gauntlet

I hadn't used a lot of Boo, and was very interested in the parsing and extension platform Boo provides.  I've seen some work done on a Boo build system (BooBS, and don't search for that one).  That looked very intriguing as a way to get rid of a lot of the executable XML we have around in NAnt, MSBuild, XAML, Spring, and so on.  Unfortunately, the discussion mostly covered the language features of Boo and not some of the more interesting core features of the Boo compiler, so I moved on to another session.

One of the newer topics in our space was BDD, which Scott Bellware convened a session on.  However, when I walked into the room (with about 80 or so people in it), I saw Joe Ocampo in the center of the room and NBehave story definitions up on the projectors.  Joe was taking a lot of heat from those in the audience questioning whether stories were BDD, whether executable stories were a good idea, and so on.  I felt pretty rotten about the situation, as I was not around to help Joe explain our goals with NBehave.  But I bought him a couple of beers that night, so I guess it evened out.  I understand and even embrace Scott's (Bellware) concerns that executable stories can hamper the conversation that a story is supposed to represent.  But Hanselman came to the rescue to explain to Scott that these types of requirements and traceability are critical to many people, and those concerns shouldn't be dismissed out of hand.  Personally, I think if NBehave doesn't provide value, it doesn't have a reason to exist.  However, what's valuable varies widely from team to team.

I would have rather this discussion concerned BDD than NBehave, since BDD is a new and interesting topic that isn't widely understood.

Again, sorry Joe, my bad!

Third Session - Scott Guthrie is an MVC ninja

The third session mostly consisted of Scott's presentation on a new MVC framework in development at Microsoft, due to be released as a beta in the next couple of weeks.  Much has already been discussed about the specific architecture, so I won't go into that.  My specific impressions were:

  • I've never EVER heard "Rhino Mocks", "StructureMap", "NUnit", "Spring", or any other major OSS tool that the community uses mentioned in a presentation about MS technology.
  • I've never EVER seen an MS presentation actually USE one of these tools.  Scott used NUnit to write tests for the controller before he ever created the view, which made me shed a single tear.
  • The first points always brought up were support for swappable, testable, mockable architectures, which the new MVC framework supports inside and out.  There wasn't a scenario brought up that wasn't already supported.
  • For the first time in .NET, it seemed that MS listened, engaged, and adapted to feedback from the community to influence its decisions and core architecture of the framework.  Testability, mockability, and embracing other OSS alternatives were first-class citizens in this brave new paradigm.  Bravo!

Fourth session - Hanselman knows IronPython (kinda) plus some DDD-jitsu

The next session mostly consisted of Scott Hanselman showing IronPython and IronRuby running on the MVC framework.  Questions kept coming up about "will MS run RoR?", but the answer was always "if the community wants that, they can do it, the tools are there."  Scott showed IronRuby on ASP.NET MVC, not RoR ... on CLR ... on IIS.  It was all very interesting how he was able to get IronPython running on the MVC framework, and he showed us that it didn't really take that much to get it up and running.  Most of the talk was about the differences between the dynamic type model of the DLR and the static Type object of the CLR, and how to resolve the two.  In the MVC framework, you specify that "I want to execute this view of Floogle type", but Floogle type doesn't exist yet since it's a dynamic type.

I popped into a talk about MbUnit and xUnit.NET, got bored, and checked out another on DDD-jitsu.  The DDD talk was interesting, as the core concepts were laid out and much of the discussion covered how to introduce and implement these ideas.  Dave Laribee stressed the importance of the later chapters on large-scale structures and bounded contexts.  This pretty much ended the day.

Later activities - Hanselman is a rock-jockey

I went to dinner at Chuy's (love that Tex-Mex) with about 20 folks including Jeffrey Palermo, Martin Fowler, Scott Guthrie, Scott Hanselman, and about 10 other people.  It was rather strange to eat next to the voice of Hanselminutes, which I've listened to for so long.  Scott Hanselman gave me a great tip on mapping directories to shares, to make my life easier with Windows Home Server.  Scott doesn't seem to like cats too much either...  I also realized how much diabetes can affect someone's daily life, as Scott had to stop what he was doing and self-administer an insulin injection at the end of the meal.  That was a good time for Scott to mention his Diabetes Walk 2007.  It was amazing to me that Scott can be as successful as he is, even though this disease consumes so much of his daily life.

After dinner, most of the group headed to Main Event, which is similar to Dave and Busters or Jillian's.  Main Event has rock climbing, which all of us were eager to prove our mettle on.  It was cool to see Scott Hanselman coach several of us up an intermediate course and ring the bell at the top.  What was even weirder was it was only Scott's first or second time to try rock climbing, and he was a complete natural.  The rest of the night was spent at the pool tables with a few beers and a lot of great conversation.

It was a very long day, compounded with the fact that most of it was spent learning, debating, absorbing, arguing, and just trying to keep up.  It proved very hard to sleep with so many conversations still buzzing around my head.

Day 3

The final day started with some breakfast and a meeting outside to remind us all of the rules.  Outside was a good idea, because no one looked very alert.

First Session - Bringing ALT.NET to the masses

This one was more of a continuation from the previous day's "passion" talk, but with some folks from the MSDN magazine and architecture journal from MS.  It was another great discussion with lots of ideas to bring ALT.NET to the masses, including:

  • Expanding the ALT.NET website to include a wiki, videos, and more introductory material with links to more detailed information (mostly to Ayende's blog)
  • Creating demos or a starter kit that installed NHibernate, NUnit, CC.NET, and the full OSS stack locally.  Basically, rewrite the PetStore starter kit to use ALT.NET ideas to provide a nice introduction and example application
  • Aligning the PnP group with the other events.  We all know that agile stuff is happening inside MS, but we never hear about it at the MSDN events.

There were several other ideas, but I realized afterwards that a nice bound notebook is a great note-taking tool, better than "maybe I'll remember".

Second Session - Executable requirements nirvana - StoryTeller and NBehave

This talk started with Jeremy Miller discussing the problems and difficulties of FitNesse, such as:

  • Great for tabular data-driven tests, but not great for complex models
  • Difficult to integrate with source control
  • Breaks often with refactorings, as everything is string-based (similar to the problems with NMock)

He showed StoryTeller, along with some tests.  It made running and authoring FitNesse integration tests much easier, but still had the issues that FitNesse has.

Next, Joe and I gave a demo of NBehave, as well as a good discussion on where its value is.  Mainly:

  • It gets the language right
  • It integrates well with automation
  • It runs complex objects well
  • It has a very grokkable interface

However, there were some shortcomings in that NBehave doesn't do well with tabular data, and much has to be copied and pasted to do so.  Jeremy suggested an integration with StoryTeller and NBehave, or FitNesse and NBehave to allow users to run story-based integration and acceptance tests.  At one point, Joe thought it was a good idea to show the code behind NBehave.  It was too late to take the beers back that I had bought him the night before.

Closing the space

At the end of the conference, everyone gathered in the main room, and all had a chance to share their final thoughts.  It was an amazing experience, and even Scott Bellware got a little choked up at the end.  We went to lunch afterwards at Saltgrass Steakhouse, where Roy Osherove and Rod Paddock shared a story about a dirty joke, which we found could be applied universally to any situation.  The story itself is NSFW, so I'll just have to relate it later.

Final thoughts

This was by far the best conference I've attended; I can't wait for the next one.  I left excited and energized, glad to be around so many people with so much passion.  The conversations were always engaging and inviting, and never closed or exclusive.  Anyone could jump in to any talk at any time, and challenge any assumption of the discussion.  I think the ALT.NET community (if that name does indeed stick) proved that there is a vibrant community in .NET that can provide conversation and feedback to MS, and not just a lot of external, negative noise.  While I mentioned a lot of names here, I wasn't trying to name-drop but to show that this community is accessible and eager to engage with others.  This was an important first step in creating a healthier, more inclusive community, and I hope it isn't the last step.

Friday, October 5, 2007

Compiler warnings from generated code

Although I believe strongly in treating warnings as errors, on rare occasions I get compiler warnings from generated code.  Examples of generated code include the designer code files for Windows and Web forms, XAML, etc.  Warnings in those files are easily removed, as they're almost always related to code the programmer wrote.

I recently hit a really strange compiler warning while using the aspnet_compiler tool, which compiles the ASPX, ASCX, and other content.  Part of this process is to parse the ASPX and ASCX files and create C# files from those.  However, I started getting very strange warnings from the precompilation:

c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\
  Temporary ASP.NET Files\ecommstore\5c1cb822\
  6aeabbae\App_Web_d0wxlov2.10.cs(87):
  warning CS0108: 'Ecomm.Login.Profile' hides inherited member 'Foundation.Core.Web.PageBase.Profile'.
  Use the new keyword if hiding was intended.

Shadowing warnings aren't new to me, but this one is especially strange since it came from ASP.NET.  Specifically, a property created in an auto-generated class conflicts with a property in a global base Page class we use for all of our ASP.NET pages.  The base Page class is created by another team here, which provides all sorts of logging, etc.  I can't touch that one, nor would I want to, as it is used company-wide.  That base Profile property isn't virtual either.

But how can I get rid of the other one?  I tried all sorts of magic:

  • Shadowing and exposing as virtual (the subclass and override method)
  • Shadowing and preventing overriding by using the new and sealed modifiers on that property
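The shadowing attempt looked roughly like this; the intermediate base class here is my reconstruction for illustration (PageBase belongs to the other team, and the exact signatures I tried may have differed):

```csharp
// Hypothetical intermediate base class: try to hide the inherited
// Profile with the new modifier, hoping the generated class would
// collide with this declaration instead of PageBase's.
public class EcommPageBase : Foundation.Core.Web.PageBase
{
    protected new System.Web.Profile.DefaultProfile Profile
    {
        get { return (System.Web.Profile.DefaultProfile)Context.Profile; }
    }
}
```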

Nothing worked.  No matter what, the Profile property would get created.  What does that code file actually look like?  Here's the more interesting part:

public partial class Login : System.Web.SessionState.IRequiresSessionState {        
    
    #line 12 "E:\dev\ecommstore\build\Debug\ecommstore\Login.aspx"
    protected global::System.Web.UI.WebControls.PlaceHolder mainPlaceholder;
    
    #line default
    #line hidden
    
    protected System.Web.Profile.DefaultProfile Profile {
        get {
            return ((System.Web.Profile.DefaultProfile)(this.Context.Profile));
        }
    }
    
    protected ASP.global_asax ApplicationInstance {
        get {
            return ((ASP.global_asax)(this.Context.ApplicationInstance));
        }
    }
}

There's the offender, the auto-generated Profile property.  Looking back at some older build logs, I notice that this warning didn't show up until we migrated to ASP.NET 2.0.  One of the new features of ASP.NET 2.0 is the Profile properties.  ASP.NET 2.0 profiles allow me to create strongly-typed custom profiles for customers and let them be integrated into our ASP.NET pages, be automatically stored and retrieved, etc.  If I defined custom properties for my profile, then a dynamically created Profile class would be used (instead of the DefaultProfile type).
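For reference, custom profile properties get defined in web.config like this (the property names below are hypothetical); with any properties defined, ASP.NET generates a strongly-typed profile class instead of using DefaultProfile:

```xml
<system.web>
  <profile>
    <properties>
      <add name="FavoriteCategory" type="System.String" />
      <add name="LastVisit" type="System.DateTime" />
    </properties>
  </profile>
</system.web>
```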

However, we don't use Profiles, so I can just turn them off in our web.config file:

<profile enabled="false" />

By turning Profiles off, the Profile property is never created in the auto-generated code.  The warnings go away, and our problem is solved.

There are several other instances of these new ASP.NET 2.0 features in auto-generated code causing naming collisions with existing properties when migrating from ASP.NET 1.1.  The solution was always to rename your properties, but since I couldn't do that, turning Profiles off did the trick.

Thursday, October 4, 2007

Treat warnings as errors

Compiler warnings can provide some additional insight and quality controls on your codebase.  They can tell you about obsolete code, unused variables, and many other items that you wouldn't necessarily see on visual inspection.  Warnings can also surface bugs, such as possible null reference exceptions, or expressions that always evaluate to "true".

However, compiler warnings can be easily ignored.  Ignore them for long enough, and important warnings can be lost in a sea of "acceptable" warnings.  For higher quality code, you can treat warnings as errors.  When a compile time warning is found, the compilation fails.  Warnings also have varying severity, from "0" (basically off) to "4", which includes all warnings.  Set your warning levels to "4" to get the most mileage.

To turn on "Treat warnings as errors", go to the property pages for your project and click the "Build" section.  In the section "Treat warnings as errors", set it to "All" and set the warning level to "4".
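Behind those IDE settings, the project file gets MSBuild properties like these (a minimal sketch of the relevant fragment):

```xml
<PropertyGroup>
  <!-- Fail the build on any warning, at the most verbose warning level -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <WarningLevel>4</WarningLevel>
</PropertyGroup>
```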

If you use the aspnet_compiler tool or Web Deployment Projects, you can also turn on "Treat warnings as errors" for these pre-compilation steps in the web.config file (.NET 2.0 in this example):

  <system.codedom>
    <compilers>
      <compiler
        language="c#;cs;csharp"
        extension=".cs"
        type="Microsoft.CSharp.CSharpCodeProvider, System,
          Version=2.0.3600.0, Culture=neutral,
          PublicKeyToken=b77a5c561934e089"
        compilerOptions="/warnaserror"
        warningLevel="4"
      />
    </compilers>
  </system.codedom>

The compilers section allows me to fine-tune my ASP.NET compilation options, including warning levels and treating warnings as errors.

By treating warnings as errors, I can start dialing up the quality of our code a little at a time, producing a more maintainable codebase.

Wednesday, October 3, 2007

Daily routine with continuous integration

I chuckled quite a bit after reading the Top 5 Signs of Discontinuous Integration, though I think "dysfunctional integration" is a better term.  So what's my routine?

Start of the day

  1. Check if build is green
  2. Get latest if it is
  3. Fix build if it isn't

Coding

  1. Code/write tests
  2. Run local build, make sure code compiles and tests pass
  3. Get latest
  4. Run local build again
  5. Check in, with decent comments
  6. Wait for CI build to finish
  7. If build is red, drop everything and fix
  8. Otherwise, go back to step 1

I'll get latest several times per day.  The more often I integrate (the "continuous" part), the easier it is to do so.  If you're scared to get the latest version when the build is green, you probably have some dysfunctional integration issues.  If you don't know what "the build is green" means, well, that's a whole other ball of wax.

Tuesday, October 2, 2007

The Legacy Code Dilemma and compiler warnings

I hit the Legacy Code Dilemma today while trying to reduce compiler warnings in our solution.  For those that don't know it, the Legacy Code Dilemma is:

I need to fix some legacy code, but the fix isn't ideal.

After working on green-field development for a while, dealing with legacy code can be frustrating at first, unless you go through Michael Feathers' excellent Legacy Code book.  Whenever I feel the need to spend two days refactoring a small area to get it under test, I have to remind myself that it's more important to deliver business value AND pay down technical debt than to pay down technical debt alone.  But then you hit dilemmas, where you KNOW the solution you're putting in is ugly, but you just don't have the resources to "do it right".

To fix or not to fix

I hit that today, while trying to reduce compiler warnings.  I could make the compiler warnings go away, or fix the design problems that caused them in the first place.

For example, I encountered a lot of "variable declared but not used":

try
{
    return PricingDatabase.GetPrice(itemId);
}
catch (Exception ex)
{
    return 0.0M;
}

In this case, the specific warning is "The variable 'ex' is declared but never used".  The problem is that the variable "ex" is declared but never referenced in the "catch" block.  The design flaw, or at least one of them, is that exception handling is used for flow control.  Any time you see a try-catch block swallow the exception and do something else, it's a serious code smell.
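If this code were under test, the design fix would be to make the "no price" case explicit instead of exceptional.  Something like this hypothetical TryGetPrice shape (a sketch only; PricingDatabase's real API doesn't offer this, which is exactly the refactoring I can't safely do):

```csharp
// Hypothetical reworked call site: report "no price found" through
// the return value rather than by throwing and swallowing.
public static decimal GetPriceOrDefault(int itemId)
{
    decimal price;
    if (PricingDatabase.TryGetPrice(itemId, out price))
    {
        return price;
    }
    return 0.0M; // explicit default; no exception-based flow control
}
```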

The problem about fixing the design issue is that I'm dealing with legacy code, which has no tests.  Such design changes are foolish and even dangerous without proper automated tests in place.

Live to fight another day

If I wanted to fix the design flaw, I was in for a real battle.  I would need to get this module under test, and get any client modules under test.  For a codebase with little or no unit tests in place, that's just not feasible.

Instead, I'll fix the warning:

try
{
    return PricingDatabase.GetPrice(itemId);
}
catch (Exception)
{
    return 0.0M;
}

I kept the exception swallowing and all its nastiness, but gained a small victory by removing a compiler warning.  Since this solution has several hundred compiler warnings, all these little victories will pay off.  Instead of charging off on a path of self-righteous refactoring, I've fixed the pain and lived to tell the tale.

Monday, October 1, 2007

Setting off a CC.NET build from NAnt

One of the new features in CruiseControl.NET is integration queues with priorities.  By default, all projects can execute concurrently, but sometimes, I want to have a master build execute first, then dependent builds next.  This is fairly easy to do, using the integration queues, priorities, and force build publishers.

However, I have a strange situation where I actually need to force a build from a NAnt script (don't ask).  There aren't any command-line tools to execute CCNET builds, so I couldn't use the <exec> task to accomplish this.  Instead, I'll create a custom NAnt task.  To do this, I'll need to create a new C# project and reference three key assemblies:

  • NAnt.Core.dll
  • ThoughtWorks.CruiseControl.Core.dll
  • ThoughtWorks.CruiseControl.Remote.dll

From there, it's just a matter of creating a custom task to call into the CCNET API:

[TaskName("ccnet")]
public class CCNet : Task
{
    private string _server;
    private int _portNumber = 21234;
    private string _project;

    [TaskAttribute("server", Required = true)]
    public string Server
    {
        get { return _server; }
        set { _server = value; }
    }

    [TaskAttribute("portnumber", Required = false)]
    public int PortNumber
    {
        get { return _portNumber; }
        set { _portNumber = value; }
    }

    [TaskAttribute("project", Required = true)]
    public string ProjectName
    {
        get { return _project; }
        set { _project = value; }
    }

    protected override void ExecuteTask()
    {
        RemoteCruiseManagerFactory factory = new RemoteCruiseManagerFactory();
        string url = string.Format("tcp://{0}:{1}/CruiseManager.rem", Server, PortNumber);
        ICruiseManager mgr = factory.GetCruiseManager(url);

        mgr.ForceBuild(ProjectName);
    }
}

The custom NAnt task needs at least two pieces of information to connect to the CCNET server:

  • Server name
  • Project name

If the remoting port is different from the default (21234), I'll need to specify that as well.  Now I can execute a CCNET build with this simple NAnt task:

<ccnet server="buildserver" project="Ecommerce CI" />
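One prerequisite I glossed over: NAnt has to load the custom task assembly before the ccnet task is recognized, via its loadtasks task (the assembly path below is hypothetical):

```xml
<loadtasks>
  <fileset>
    <include name="tools/CustomTasks/CCNet.Tasks.dll" />
  </fileset>
</loadtasks>
```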

I'd still rather use the integration queues, priorities, and publishers, but the NAnt task will work for me in a more complex scenario.