Showing posts with label Rant. Show all posts

Thursday, January 10, 2008

Stop the Flash insanity

More and more, high-profile websites are using Flash as a mechanism to deliver essential content.  In extreme cases, such as mycokerewards, the entire site is built in Flash.  Ads on websites, which you used to be able to ignore, now use Flash to replace the entire screen contents, screaming at you to "GO SEE CLOVERFIELD!!!".

My dev machine is a fairly hefty beast, but it still has a hard time processing Flash-only sites.

I'm not even doing anything on the Flash-only site except looking at it.  It's running some ridiculous, pointless animation of bubbles floating around, and that alone requires 40% of my dual-core machine's resources.  When I look at the site on a single-core machine, I pretty much can't use Firefox at all, as it's completely consumed with those floating bubbles.

Sites that used to be relatively easy to get around are now just annoying.  ESPN.com, for example, is starting to rely heavily on Flash to deliver actual content.  Please don't start playing a highlights video when I'm just visiting your homepage; I really don't like Stuart Scott screaming "BOOYAH" at me through my speakers.

If anything, Flash should be used to complement content, not be the actual content.  To deal with the usual annoyances, I go back and forth between these two Firefox add-ons:

  • Adblock Plus (blocks ad content, but not other Flash content)
  • Flashblock (blocks ALL Flash content, letting you opt-in to anything you want to see)

Flash for delivering ad content is perfectly fine, as long as it's non-intrusive and not resource-intensive.  Flash for delivering site content is just plain heinous, and I hope Santa delivers coal in those perpetrators' stockings next year.

Thursday, December 20, 2007

Upgrading to Windows XP SP2

After months of soul-searching, I made the gut-wrenching decision today to upgrade my home PC to Windows XP SP2.

Upgrade from Vista, that is.

I'm completely convinced that Vista is not designed to run on single-core/processor machines.  I've run Vista on work machines without any hiccups, with Aero Glass going full on.  I thought I had a semi-decent home PC:

  • AMD Athlon XP 2800+
  • 2 GB RAM

Alas, it was not enough to net me more than about a 2.9 on the Windows Experience Index.  UAC annoys the hell out of me, most file operations take forever, and I'm denied access for simple operations, like creating a folder on my D: drive.  At work I turn all of these safety features off, as I'm okay running with scissors in a development environment.  I have no idea how a home user deals with all of it; I sure couldn't.  Hopefully Vista's SP1 will fix these issues.

Tuesday, December 18, 2007

Dead Google Calendar gadget

This morning I received an interesting yet disturbing message from the Google Calendar gadget on my iGoogle home page.

Great gadget that it was, I think I'll be a little more discerning about which gadgets I put on my home page.  Word of warning: you probably don't want to google "donkey-punching", definitely NSFW.  It looks like Google changed something and broke the gadget, and the gadget author decided to let everyone know through an...interesting means.

Monday, December 3, 2007

Time is running out

I popped open Windows Live Writer today and got a fun message.

I thought this product was free, and I never paid for anything, so I'm a little confused about how a free product can expire.  Live Writer isn't supported on Server 2003, which is what I use, so I had to jump through 80 or so hoops to get it installed.  Everything works perfectly fine, but now it seems I'll be compelled to jump through the same hoops again to upgrade to a version I don't need.  Fun times.

Tuesday, November 20, 2007

Stop the madness

I've been extending a legacy codebase lately to make it a bit more testable, and a few small, bad decisions have slowed my progress immensely.  One decision isn't bad in and of itself, but a small bad decision multiplied a hundred times leads to significant pain.  It's death by a thousand cuts, and it absolutely kills productivity.  Some of the small decisions I'm seeing are:

  • Public/protected instance fields
  • Inconsistent naming conventions
  • Try-Catch-Publish-Swallow
  • Downcasting

The pains of these bad decisions can wreak havoc when trying to add testability to legacy codebases.

Public/protected instance fields

One of the pillars of OOP is encapsulation, which allows clients to use an object's functionality without knowing the details behind it.  From the Framework Design Guidelines (FDG):

The principle states that data stored inside an object should be accessible only to that object.

Followed immediately by their guideline:

DO NOT provide instance fields that are public or protected.

It goes on to say that access to simple properties is optimized by the JIT compiler, so there's no performance penalty in using the better alternative.  Here's an example of a protected field:

public class Address
{
    protected string zip;

    public string Zip
    {
        get { return zip; }
    }
}

public class FullAddress : Address
{
    private string zip4;

    public string Zip4 
    {
        get
        {
            if (Zip.Contains("-"))
            {
                zip4 = zip.Substring(zip.IndexOf("-") + 1);
                zip = zip.Substring(0, zip.IndexOf("-"));
            }
            return zip4;
        }
    }
}

There was originally a good reason to give the derived FullAddress write access to the data in Address, but there are better ways to approach it.  Here's a better approach:

public class Address
{
    private string zip;

    public string Zip
    {
        get { return zip; }
        protected set { zip = value; }
    }
}

I've done two things here:

  • Added a protected setter to the Zip property
  • Changed the access level of the field to private

Functionally it's exactly the same for derived classes, but the design has greatly improved.  We should only declare private instance fields because:

  • Public/protected/internal violates encapsulation
  • When encapsulation is violated, refactoring becomes difficult as we're exposing the inner details
  • Adding a property later with the same name breaks backwards binary compatibility (i.e. clients are forced to recompile)
  • Interfaces don't allow you to declare fields, only properties and methods
  • C# 2.0 added the ability to declare separate visibility for individual getters and setters

There's no reason to have public/protected instance fields, so make all instance fields private.
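The interface point deserves a quick illustration.  A sketch (hypothetical IAddress interface, not from the original post): an interface can declare the Zip property, but it can never declare the zip field behind it, so a private field plus a property keeps the type ready to implement interfaces later.

```csharp
using System;

// An interface can only declare properties and methods, never fields
public interface IAddress
{
    string Zip { get; }
}

public class Address : IAddress
{
    // The field stays private; only the property is part of the contract
    private string zip;

    public Address(string zip)
    {
        this.zip = zip;
    }

    public string Zip
    {
        get { return zip; }
        protected set { zip = value; }
    }
}
```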

Inconsistent naming conventions

Names of classes, interfaces, and members can convey a great deal of information to clients if used properly.  Here's a good example of inconsistent naming conventions:

public class Order
{
    public Address address { get; set; }
}

public class Quote : Order
{
    public void Process()
    {
        if (address == null)
            throw new InvalidOperationException("Address is null");
    }
}

When I'm down in the "Process" method, what is "address"?  Is it a local variable?  Is it a private field?  Nope, it's a property.  Since it's declared camelCase instead of PascalCase, it causes confusion about what we're dealing with.  If it were a local variable, which the name suggests, I might treat the value much differently than if it were a public property.

Deviations from the FDG naming conventions cause confusion.  When I'm using a .NET API that follows Java's camelCase conventions, it's just one more hoop to jump through.  Where my team published public APIs, it wasn't even up for discussion whether we would follow the naming conventions Microsoft used in the .NET Framework.  It just happened, as any deviation from accepted convention leads to an inconsistent and negative user experience.

It's not worth the time to argue whether interfaces should be prefixed with an "I".  That was the accepted convention, so we followed it.  Consistent user experience is far more important than petty arguments on naming conventions.  If I developed in Java, I'd happily use camelCase, as it's the accepted convention.

Another item you may notice is that there are no naming guidelines for instance fields.  This reinforces the notion that fields should be declared private: the only people who should care about their names are the developers of the class and the class itself.  Just pick a convention, stick to it, and keep it consistent across your codebase so it becomes one less decision for developers.
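The earlier example reads much better with consistent FDG-style naming.  A sketch (the Address type here is a stand-in, not from the original post):

```csharp
using System;

public class Address { }

public class Order
{
    // PascalCase marks this as a property at a glance
    public Address Address { get; set; }
}

public class Quote : Order
{
    public void Process()
    {
        // No ambiguity: Address is clearly a property, not a local variable
        if (Address == null)
            throw new InvalidOperationException("Address is null");
    }
}
```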

Try Catch Publish Swallow

Exception handling can really wreck a system if not done properly.  I think developers might be scared of exceptions, given the number of useless try...catch blocks I've seen around.  Anders Hejlsberg notes:

It is funny how people think that the important thing about exceptions is handling them. That is not the important thing about exceptions. In a well-written application there's a ratio of ten to one, in my opinion, of try finally to try catch. Or in C#, using statements, which are like try finally.
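That ten-to-one ratio mostly shows up as resource cleanup rather than decision-making.  A minimal sketch (hypothetical file-reading helper, my own example): the using statement compiles down to try...finally, and with no catch block, exceptions propagate to a layer that can actually handle them.

```csharp
using System.IO;

public static class FirstLineReader
{
    public static string ReadFirstLine(string path)
    {
        // using expands to try { ... } finally { reader.Dispose(); }
        // There's no catch block: failures bubble up to the appropriate layer
        using (StreamReader reader = new StreamReader(path))
        {
            return reader.ReadLine();
        }
    }
}
```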

Here's an example of a useless try-catch:

public class OrderProcessor
{
    public void Process(Order order)
    {
        try
        {
            ((Quote)order).Process();
        }
        catch (Exception ex)
        {
            ExceptionManager.Publish(ex);
        }
    }
}

Here we have Try-Catch-Publish-Swallow.  We put a try block around an area of code that might fail and catch exceptions in case it does.  To "handle" the exception, we publish it through some means, and then...nothing.  That's exception swallowing.

Here's a short list of problems with TCPS:

  • Exceptions shouldn't be used to make decisions
  • If there is an alternative to making decisions based on exceptions, use it (such as the "as" operator in the above code)
  • Exceptions are exceptional, and logging exceptions should be done at the highest layer of the application
  • Not re-throwing leads to bad user experience and bad maintainability, as we're now relying on exception logs to tell us our code is wrong

Another approach to the example might be:

public class OrderProcessor
{
    public void Process(Order order)
    {
        Quote quote = order as Quote;
        
        if (quote != null)
            quote.Process();
    }
}

The only problem this code will have is if "quote.Process()" throws an exception, and in that case we'll let the appropriate layer deal with it.  Since I don't have any resources to clean up, there's no need for a "try...finally".

Downcasting

I already wrote about this recently, but it's worth mentioning again.  I spent a great deal of time recently removing a boatload of downcasts from a codebase, made worse by the fact that the downcasts were pointless: nothing in client code used any additional members of the derived class.  It turned out to be a large, pointless hoop I had to jump through to enable testing.
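When the cast exists only to reach behavior, the usual fix is to declare that behavior on the base type so clients never need to know the concrete type.  A sketch (these Order/Quote shapes are hypothetical, differing from the earlier example):

```csharp
public abstract class Order
{
    public bool Processed { get; protected set; }

    // Declaring Process on the base type removes the client's
    // need to downcast to a concrete type
    public abstract void Process();
}

public class Quote : Order
{
    public override void Process()
    {
        // quote-specific processing would go here
        Processed = true;
    }
}

public class OrderProcessor
{
    public void Process(Order order)
    {
        // No cast: the call dispatches polymorphically
        order.Process();
    }
}
```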

Regaining sanity

The problem with shortcuts and knowingly bad design decisions is that this behavior can become habitual, and the many indiscretions can add up to disaster.  I had a high school band director who taught me "Practice doesn't make perfect - perfect practice makes perfect."

By making good decisions every day, it becomes a habit.  Good habits practiced over time eventually become etched into your muscle memory so that it doesn't require any thought.  Until you run into a legacy codebase that is, and you realize how bad your old habits were.

Wednesday, November 14, 2007

The sinking ship

An interesting quote from a colleague today (paraphrased):

Moving developers to projects on the legacy system is like rearranging chairs on the Titanic.

We can tidy it up, but it's still going to the bottom of the Atlantic.  Probably better just to catch the plane.

It came up in the context of investing strategically versus buying tactically.  It's not an either-or question, but it's important to be thinking about long-term goals and assigning resources appropriately to match your strategic view.  The Technical Debt metaphor can offer guidance here too, as buying tactically tends to max out your credit cards (so to speak), while investing strategically actually nets returns.

Monday, September 24, 2007

SharePoint 2007 Wiki - not a fan

Now that I've written a couple large-ish wiki entries on our team's SharePoint 2007 wiki, I can reasonably say I'm not too impressed with the wiki offerings from MOSS 2007.  A few complaints so far:

  • No apparent wiki markup language
  • No documentation, other than one stock page that comes with the wiki
  • RSS feed for wiki only covers new items, not modifications to existing items
  • Only two editing options, WYSIWYG or straight-up HTML
  • WYSIWYG editor not very efficient and produces ugly, non-compliant, deprecated HTML
  • No auto-linking, back-linking, free-linking, etc.

Basically, most of the features I had grown to love in FlexWiki are not present.  My biggest beef is probably the lack of a wiki markup language.  The HTML output by the WYSIWYG is pretty terrible, as it's mostly deprecated HTML tags like FONT.  The whole point of a wiki markup language is to make it easy for non-technical folks to add entries.  When using WYSIWYG, styles become corrupted quite fast, as fonts and such are managed at the HTML level.

For example, let's say you want to have the following entry in a wiki:

Current Build Architecture

Local Builds

  • Solution-driven builds
  • IIS vdirs and web site created manually
  • Packaging steps done manually through a C++ post-build events project
  • Environment configuration done manually

Server Builds

  • Project-driven builds
  • MSI deployment
  • Custom scheduler service for daily and deployment builds
  • Uses a NAnt and an MSBuild build script file
  • Build scripts manually deployed to build server
  • Build scripts create workspaces, get sources, compile, create MSI's, and deploy

In the MOSS 2007 wiki, the above structure is possible, but it took a lot of cajoling with the WYSIWYG editor to get it right.  I expected the header text to use "Hx" HTML tags, and the HTML produced to look reasonable, so I could fine-tune it.  Instead, this is what I got:

<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2></FONT>&nbsp;</DIV>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=3><STRONG>Current Architecture</STRONG></FONT></DIV>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2></FONT>&nbsp;</DIV>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><STRONG><FONT size=2>Local Builds</FONT></STRONG></DIV>
<UL>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Solution-driven builds</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>IIS vdirs and web site created manually</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Packaging steps done manually through a C++ post-build events project</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Environment configuration done manually (i.e., SiteInfo guids)</FONT></DIV></LI></UL>
<P class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2><STRONG>Server Builds</STRONG></FONT></P>
<UL>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Project-driven builds</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>MSI deployment</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Dell Scheduler for daily and deployment builds</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Uses a NAnt and an MSBuild build script file</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Build scripts manually deployed to build server</FONT></DIV></LI>
<LI>
<DIV class=ExternalClassF7A8AEC3D2A943AE8A574B6CA3D14B2F><FONT size=2>Build scripts create workspaces, get sources, compile, create MSI's, and deploy</FONT></DIV></LI></UL>

This is not a joke.  Non-XHTML-compliant markup in a product released in 2006 is unacceptable at this point.  Using deprecated HTML tags like "FONT" is even less acceptable, almost laughable.  I can't even read this markup; it's giving me a headache.

Here's the same content in FlexWiki markup:

!Current Architecture

!!Local Builds
	* Solution-driven builds
	* IIS vdirs and web site created manually
	* Packaging steps done manually through a C++ post-build events project
	* Environment configuration done manually (i.e., SiteInfo guids)

!!Server Builds
	* Project-driven builds
	* MSI deployment
	* Dell Scheduler for daily and deployment builds
	* Uses a NAnt and an MSBuild build script file
	* Build scripts manually deployed to build server
	* Build scripts create workspaces, get sources, compile, create MSI's, and deploy

Now which markup is more maintainable?  Which one is easier to read?  Which one is easier to understand, edit, and change?

FlexWiki parses the markup to output HTML, and FlexWiki users don't have to worry about the HTML, only simple formatting rules.  MOSS 2007 wiki is a good first step in a wiki engine for SharePoint, but it's only a first step.  Be aware that its features pale in comparison to the more mature wiki engines, which have been around for many years and many versions.

Wizards and designers are useless to me if the code/markup they generate is not maintainable.  Also, why is it that tool consolidation means I have to give up a host of features?  Seems that instead of doing a few things well, MOSS 2007 does two dozen things not so well.  I'd rather shoot for integration over consolidation and let individual tools shine.  Although our CMS/blog/wiki tools are now consolidated on our team/org, I'm not entirely sure what exactly it bought us to lose our superior wiki and blog engines we used previously.

Wednesday, September 5, 2007

Short path to failure

In three easy steps:

  1. Separate those making decisions from those affected by the decisions
  2. Remove accountability from the decision makers for the decisions made
  3. Rinse, repeat

After going to the inaugural Agile Austin group meeting last night, I'm convinced more than ever that a siloed organizational structure forces the steps listed above.

Tuesday, June 19, 2007

The problem with code comments

Let me first state that I'm not proposing we eliminate code comments.  Code comments can be very helpful in pointing a developer in the right direction when changing a complex or non-intuitive block of code.

Additionally, I'm not referring to API documentation (like C# XML doc comments).  I'm talking about the little snippets of comments that some helpful coder left as a gift to explain why the code is the way it is.  Code comments are easy to introduce and can be very helpful, so what's the big deal?

Code comments lie

Code comments cannot be tested to determine their accuracy.  I can't ask a code comment, "Are you still correct?  Are you lying to me?"  The comment may be correct, or it may not be, I don't really know unless I visually (and tediously) inspect the code for accuracy.

I can trust the comment and just assume that whatever it tells me is still correct.  But everyone knows the colloquialism about assuming, so chances are I'll be wrong to assume.  Who's to blame then, me or the author of the original comment?  It can be dangerous to assume that a piece of text that is neither executable nor testable is inherently correct.

Code comments are another form of duplication

This duplication is difficult to see unless I need to change the code the comment pertains to.  Now I have to make the change in two places, one of which is in comments.  If every complexity in code required comments, how much time would I need to spend keeping the original comments up to date?  I would assert that it takes as much time to update a code comment as it does to change the code being commented.

Since the cost to maintain comments is high, they're simply not maintained.  They then fall into another pernicious category of duplication where the duplicate is stale and invalid.  When code comments are invalid, they actually hurt the next developer looking at the code because the comment may lie to the developer and cause them to introduce bugs, make the wrong changes, form invalid assumptions, etc.

Code comments are an opportunity cost

I think the real reason code comments aren't maintained is that most developers instinctively view them as an opportunity cost.  That is, spending time (and therefore money) to maintain code comments costs me in terms of not taking the opportunity to improve the code such that I wouldn't need the comments in the first place.  The benefits of making the code more soluble, testable, and consequently more maintainable are much more valuable than having up-to-date comments.

A worse side-effect is when developers use the time to update the comments to do nothing instead.  Call it apathy, ignorance, or just plain laziness, but more often than not the developer would rather leave the incorrect comments as-is and not worry about eliminating the need for comments.

Code comments are not testable

If I can't test a code comment, I can't verify it.  Untested or untestable code is by definition legacy code and not maintainable.  But code can be refactored and modified to be made testable and verifiable.  Code comments can't, so they will always remain unmaintainable.  Putting processes in place to enforce that code comments are up to date is not the answer, since the fundamental problem is that I can't verify their correctness in any automated or repeatable fashion.

Alternatives

So if code comments are bad (there are exceptions of course), what should I do instead?

  • Refactor code so that it is soluble
  • Refactor code so that it is testable
  • Use intention-revealing names for classes and members
  • Use intention-revealing names for tests

There are always exceptions to the rule, and some scenarios where code comments are appropriate could be:

  • Explaining third-party libraries, like MSMQ, etc. (that should be hidden behind interfaces anyway)
  • Explaining test results (rare)
  • Explaining usage of a third-party framework like ASP.NET where your code is intimate with their framework

I'd say 99 times out of 100, when I encounter a code comment, I just use Extract Method on the block being commented, with a name that might include some of the comment.  Tools like ReSharper will actually examine your code comments and suggest a good name.  Once the commented block is extracted into a method, it's testable, and I can enforce the behavior through a test, eliminating the need for the comment.
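As a sketch of that refactoring (hypothetical discount logic, my own example), the comment becomes the method name, and the extracted method becomes testable:

```csharp
public static class PriceCalculator
{
    // Before: a comment explained an inline block:
    //     // apply 10% discount for preferred customers
    //     if (isPreferred) total = subtotal * 0.9m;
    //
    // After: Extract Method turns the comment into a name,
    // and the behavior can now be enforced by a test
    public static decimal ApplyPreferredDiscount(decimal subtotal, bool isPreferred)
    {
        return isPreferred ? subtotal * 0.9m : subtotal;
    }
}
```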

Monday, May 7, 2007

Consistency in user interface behavior

I know I can't be the only person who gets annoyed by this, but the developers of Windows Messenger and Office Communicator must have been on crack when they decided on the behavior of the "Close" and "Minimize" buttons.  Every application I have ever used closes when I hit "X" and minimizes when I hit "_" in the title bar.  Tray icon applications are even smart enough to minimize to the tray when I hit "_".  But they still close when I hit "X".

For some reason MS wants to be above this.  When you hit "X", it doesn't close Communicator.  No, you didn't REALLY mean to close (exit) it, you just wanted to minimize it to the tray icon.  For some funny reason, 99.9% of all tray icon applications actually CLOSE and EXIT when I hit "X".  Even other MS tray icon applications follow this rule.  I use Virtual PC 2007, and when I hit "X", it exits the whole application.  When I hit "_", it minimizes to the tray.  But now I have to think twice when I have VPC 2007 open.  I have to wonder, is this one of those MS applications where close means minimize?  Oops, I clicked "X", and that was really "Close" for this one.  Time to start over.  So now with MS tray icon applications I always click "_" first to try and minimize to tray, and if that doesn't work, I'll click "X" next.  Forever a two-step process, thanks a bunch.  Real intuitive.

Lack of consistency in the behavior of common tasks, such as clicking the "X" button, just kills me.  You may want a New and Improved Way of doing things, but if you violate the consistency and expected behavior of an operation, you'll likely infuriate your end users no matter how great the new behavior may be.