Monday, April 30, 2007

Swallowing exceptions is hazardous to your health

In my last entry, I set some guidelines for re-throwing exceptions.  The flip side of re-throwing exceptions is swallowing them.  In nearly all cases I can think of, swallowing exceptions is a Very Bad Idea (tm).  So what am I talking about when I say "swallowing exceptions"?  Check out the following code snippet:

public void ProcessRequest()
{
    Message msg = null;

    try
    {
        msg = ReadMessage();
    }
    catch (Exception ex)
    {
        LogException(ex);
    }

    ProcessMessage(msg);
}

This code is fairly straightforward.  I declare a Message, try and read the message, and if something messes up while reading the message, I log the exception.  Finally, I'll process the message I read.

There's a monster lurking about...

Astute readers will notice that even if there is a problem in ReadMessage, this function will always go on to process the message.  The catch block caught an exception but did not re-throw it, so the rest of the execution context in this method is basically unaware that anything went wrong.

Swallowing exceptions is when a Catch block does not re-throw the exception.

Since msg is still null after a failed read, ProcessMessage might throw a NullReferenceException, or even an ArgumentNullException.  Now we've left it up to the code following the "try" block to handle error cases, invalid states, etc.  We've just made our lives that much more difficult.

Effect on maintainability

So Bob coded the above snippet, shipped it, then moved on to bigger and better things.  Joe comes along six months later, needing to change the message processing, and notices he gets very strange behavior when ReadMessage doesn't work.  When Joe looks at the code, he's a little baffled.  Hoping to find clear, intention-revealing code, what he finds instead is exception swallowing choking the system.  Joe doesn't know if the original author intended to process bad messages or not.  Joe doesn't know if ProcessMessage is able to handle bad messages.  If Joe has to add a lot of pre- and post-processing to the message, his life just got harder, since he now has to worry about bad messages.  Why should the code process a message it wasn't able to read in the first place?

Since Joe knows that 99 times out of 100, you shouldn't swallow exceptions, he has to take time to figure out if this was one of those times, or if this is just a bug.  Either way, Joe is wasting time trying to figure out what the system does rather than delivering business value.  Whenever you spend most of your time in the former rather than the latter, that's a clear sign that you don't have a maintainable codebase.

A better solution

So we have a requirement to log exceptions.  That's a good thing.  Ideally, our codebase should be largely unaware of having to catch and log exceptions, since this extra code tends to clutter up what we're working on.  I have two options: either re-throw the exception, or remove the try-catch block here and apply it at a higher level.  Here's the former:

public void ProcessRequest()
{
    Message msg = null;

    try
    {
        msg = ReadMessage();
    }
    catch (Exception ex)
    {
        LogException(ex);
        throw;
    }

    ProcessMessage(msg);
}

Now it's fairly clear to Joe that there is a requirement to log exceptions, but this function should never continue its message processing if it encountered problems reading the message.
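The latter option simply moves the try-catch up to whoever calls ProcessRequest.  Here's a minimal sketch of that, assuming a hypothetical caller named HandleRequest (that name is mine, not from the original code):

public void HandleRequest()
{
    try
    {
        ProcessRequest();
    }
    catch (Exception ex)
    {
        // The logging requirement now lives in one place, at a higher level.
        LogException(ex);
        throw;
    }
}

public void ProcessRequest()
{
    // No try-catch here at all; if ReadMessage fails, processing stops naturally.
    Message msg = ReadMessage();
    ProcessMessage(msg);
}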

Some guidelines

So when should we swallow exceptions?

  • When execution needs to continue despite an error
  • When the expected behavior of the system is to continue operating rather than crash
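For example, the top-level message loop of a long-running service usually meets both criteria.  Here's a sketch, assuming a hypothetical polling service (the names are mine, not from the snippet above):

public void Run()
{
    while (_keepRunning)   // hypothetical flag controlling the service lifetime
    {
        try
        {
            ProcessRequest();
        }
        catch (Exception ex)
        {
            // Swallowed on purpose: one bad request shouldn't take the whole
            // service down, so we log it and keep polling.
            LogException(ex);
        }
    }
}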

By following these guidelines, you'll probably notice that 99 times out of 100, you end up re-throwing the exception.  And when you do swallow exceptions, it's usually in the top-level modules of a system, like the loop sketched above.  So be kind, don't swallow exceptions, and some day Joe will thank you for it.

Wednesday, April 25, 2007

Re-throwing exceptions

If this hasn't happened to you already, it will in the future.  And when it does, you'll tear your hair out.  You're reading a defect report, an error log, or a message in an event log that tells you an exception was thrown.  The log tells you what type of exception was thrown, the message, and a stack trace.  Here's my exception message:

System.Exception: Something went wrong.
at WindowsApp.Form1.DoWork() in C:\WindowsApp\Form1.cs:line 47
at WindowsApp.Form1..ctor() in C:\WindowsApp\Form1.cs:line 12
at WindowsApp.App.Main() in C:\WindowsApp\App.cs:line 30

I'm feeling good, since I have some good information about where things went wrong.  Let's look at the code:

34: public void DoWork()
35: {
36:     DoSomething();
37:  
38:     try
39:     {
40:         DoSomethingElse();
41:  
42:         DoAnotherThing();
43:     }
44:     catch (Exception ex)
45:     {
46:         LogException(ex);
47:         throw ex;
48:     }
49:  
50:     FinishUp();
51: }
52:  

Looking back at the exception message, it says that the error occurred at line 47.  But line 47 is just the line that re-throws the exception!  Now I'm in big trouble, because I have no idea whether it was DoSomethingElse or DoAnotherThing that threw the exception.  I could dive into each method, but who knows how deep that rabbit hole goes.  I probably don't have a choice, so down the rabbit hole we go.

The real issue is that I completely lost the original stack trace.  Re-throwing with "throw ex;" resets the stack trace, so it now starts at line 47 instead of down inside the try block where the exception actually originated.  If I'm debugging or fixing code from an exception message, I really need that stack trace.  So how do we prevent this problem?

Re-throwing exceptions

It's actually a very minor change.  See if you can spot the difference:

34: public void DoWork()
35: {
36:     DoSomething();
37:  
38:     try
39:     {
40:         DoSomethingElse();
41:  
42:         DoAnotherThing();
43:     }
44:     catch (Exception ex)
45:     {
46:         LogException(ex);
47:         throw;
48:     }
49:  
50:     FinishUp();
51: }
52:  

Now let's look at the exception message:

System.Exception: Something went wrong.
at WindowsApp.Form1.DoAnotherThing() in C:\WindowsApp\Form1.cs:line 65
at WindowsApp.Form1.DoWork() in C:\WindowsApp\Form1.cs:line 47
at WindowsApp.Form1..ctor() in C:\WindowsApp\Form1.cs:line 12
at WindowsApp.App.Main() in C:\WindowsApp\App.cs:line 30

Now I get the full stack trace.  The exception actually occurred down in the DoAnotherThing method.  I can debug this error with ease, all because I used this:

47: throw;

instead of this:

47: throw ex;

Wrapping exceptions

Oftentimes we don't want specific exceptions showing up in the user interface.  Other times we detect an exception and want to throw a more general one.  This is called wrapping exceptions: catching an exception and throwing a different one in its place:

public void DoWork()
{
    DoSomething();

    try
    {
        DoSomethingElse();

        DoAnotherThing();
    }
    catch (Exception ex)
    {
        LogException(ex);
        throw new ApplicationSpecificException("Something bad happened", ex);
    }

    FinishUp();
}

Note that when I threw a different exception, I passed the original exception into the constructor.  This preserves the stack trace, and exception messages will display the full stack of the inner exception.  If I hadn't passed in the exception, I'd be in the exact same boat as before, with no idea where to look to debug the problem because the original stack trace would be lost.  Since I used the constructor that takes an inner exception, I get to keep my hair.  Of course, the developer who wrote ApplicationSpecificException knows how to correctly implement custom exceptions because they've read Framework Design Guidelines, so we don't have to worry about whether or not ApplicationSpecificException has the constructor we need.  I hope.
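For reference, a custom exception written to those guidelines looks roughly like the sketch below.  This is only my sketch; the real ApplicationSpecificException in your codebase may well differ:

using System;
using System.Runtime.Serialization;

[Serializable]
public class ApplicationSpecificException : Exception
{
    public ApplicationSpecificException() { }

    public ApplicationSpecificException(string message)
        : base(message) { }

    // The constructor the wrapping code above relies on: the original exception
    // is stored as InnerException, stack trace and all.
    public ApplicationSpecificException(string message, Exception innerException)
        : base(message, innerException) { }

    protected ApplicationSpecificException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}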

And the moral of the story is...

Without a proper stack trace, you can be absolutely lost in trying to debug an exception.  To keep your sanity and potentially your health (from getting thrashed by other developers), just follow a couple easy guidelines:

  • When re-throwing exceptions, use "throw;" instead of "throw ex;"
  • When wrapping exceptions, use a constructor that takes the inner exception

I didn't get into swallowing exceptions, that's an even worse nightmare.  I'll save that one for a future post.

For IL buffs

The difference between "throw;" and "throw ex;" shows up in the IL.  Two different IL instructions are used, "rethrow" and "throw" respectively.  This functionality isn't magic; it's supported all the way down at the IL level.

Tuesday, April 24, 2007

Notes on defects

Right now I'm working on defects for one version of a project, and soon to be starting development on the next version. Whenever I start work on defects, I always take a quick read from the Software Defect Reduction Top 10 List. It was published a few years back by the Center for Empirically Based Software Engineering (CeBASE), under a grant from the National Science Foundation. The list is a summary of the top 10 techniques to help reduce flaws in code, based on empirical data gathered from a variety of sources. The paper is a quick read, only three pages, and there are some other interesting research papers on agile development and pair programming on the CeBASE website. Here's the list:

  1. Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase.
  2. About 40-50% of the effort on current software projects is spent on avoidable rework.
  3. About 80% of the avoidable rework comes from 20% of the defects.
  4. About 80% of the defects come from 20% of the modules and about half the modules are defect free.
  5. About 90% of the downtime comes from at most 10% of the defects.
  6. Peer reviews catch 60% of the defects.
  7. Perspective-based reviews catch 35% more defects than non-directed reviews.
  8. Disciplined personal practices can reduce defect introduction rates by up to 75%.
  9. All other things being equal, it costs 50% more per source instruction to develop high-dependability software products than to develop low-dependability software products. However, the investment is more than worth it if significant operations and maintenance costs are involved.
    • It also notes that a typical life cycle cost is about 30% development and 70% maintenance.
  10. About 40-50% of user programs enter use with nontrivial defects.

These numbers didn't make a lot of sense to me until I worked on my first large-scale, many month project. When I noticed how much time I was spending fixing defects rather than delivering business value, I started taking this paper more seriously. The main points I was able to take out of the paper were:

  • Defects are expensive to fix
  • Peer review and sound engineering practices can greatly reduce the number of defects
  • Maintainability should be a top, if not the top concern during development

Has anyone else run into these same types of numbers on projects?

Monday, April 23, 2007

Feedback

I found a great list of items about feedback here. For a TFS version, substitute "Automaton" or "Orcas Beta 1" for "Cruise Control".

  • Because our customer doesn't know what he wants, he finds out from the people that want the system. He sometimes gets this wrong.
  • Because I don't know what to code, I find out from our customer. I sometimes get this wrong.
  • Because I make mistakes while coding, I work with an IDE. My IDE corrects me when I'm wrong.
  • Because I make mistakes while thinking, I work with a pair. My pair corrects me when I'm wrong.
  • Because my pair is human and also makes mistakes, we write unit tests. Our unit tests correct us when we're wrong.
  • Because we have a team who are also coding, we integrate with their code. Our code won't compile if we're wrong.
  • Because our team makes mistakes, we write acceptance tests that exercise the whole system. Our acceptance tests will fail if we're wrong.
  • Because we make mistakes writing acceptance tests, we get QA to help us. QA will tell us if we're wrong.
  • Because we forget to run the acceptance tests, we get Cruise Control to run them for us. Cruise Control will tell us if we're wrong.
  • Because we forget to maintain the acceptance tests, we get QA to check that the system still works. QA will tell us if it's wrong.
  • Because we only made it work on Henry's laptop, we deploy the system to a realistic environment. It won't work if the deployment is wrong.
  • Because we sometimes misunderstand our customer, we showcase the system. Our customer will tell us if we're wrong.
  • Because our customer sometimes misunderstands the people that want the system, we put the system in production. The people who want it tell us if we're wrong.
  • Because it costs money to get it wrong, we do all these things as often as we can. That way we are only ever a little bit wrong.

    I've lost count of the number of times better feedback has enabled me to make quicker and better decisions in development. I've also lost count of the number of times a lack of feedback, or a lack of timely feedback, came back and bit me later. That's why we see so many efforts to automate builds and tests. It's no secret that better feedback and communication from business owners increases the likelihood of success on a project. It follows then that better feedback and communication from our code should also give a similar effect.

Classifying tests

    When defining a testing strategy, it's important to take a step back and look at what kinds of tests we would like to develop. Each type of test has a different scope and purpose, and can be developed by different roles in an organization. Most kinds of testing are concerned with verification and validation, also known as "V&V". Verification deals with lower level, "am I building the software right" kinds of tests. This usually entails some form of unit testing. Validation answers the question "am I building the right software", which focuses on higher level or business related tests. I've usually seen tests broken down into the following categories:

    • Unit tests
    • Integration tests
    • Functional tests

    I've ignored regression and performance tests, because both of these tests take the form of any of the above test types. These test definitions come more from the agile school of testing.

    Unit tests

    So you just wrote a piece of code to implement some feature. Does it work? Can you prove it? If I come back tomorrow, can you prove it again? What about next week, or next month, or next year? If another developer modifies your code, can they prove it still works, and didn't break your functionality?

    Unit testing answers the first two questions. Namely, does my code do what I think it's supposed to do? Continuous Integration will answer the rest.

    Scope

    The scope of a unit test is a single unit, usually a single public member. Unit tests should only test that unit, with no interaction with external dependencies. Unit tests can be defined as:

    • Isolated from external dependencies
    • Repeatable
    • Testing only one thing
    • Easily readable
    • Order independent
    • Side effect free
    • Having only one reason to fail
    • Written by the developer
    • Fast to run

    If a unit test hits the database, then it's no longer a unit test because it depends on an external module. If the database goes down, I'm not testing the unit anymore, I'm testing to see if the database is up. Other external dependencies include:

    • Databases
    • HttpContext items (Querystring, session, etc.)
    • Web services
    • File I/O
    • Anything else that accesses the network or the disk

    If I can remove external dependencies of a class during unit testing, I can greatly reduce the number of reasons for a unit test to fail. Ideally, my tests are also automated so they can be easily run. On my last project, we started writing unit tests on all new development, and regression tests on existing behavior. We wound up with over 1200 unit tests in about six months for about 30K lines of code, covering nearly 100% of new code (~98%) and about 50-75% of the existing codebase. It's also typical to see a 1:1 or 1.5:1 ratio of unit test code to production code in a team practicing TDD.
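    To make that concrete, here's a minimal sketch of a unit test that isolates its one dependency behind a hand-rolled fake.  The types here (OrderCalculator, IDiscountProvider) are hypothetical, invented just for this example, and the test uses NUnit:

    using System;
    using NUnit.Framework;

    public interface IDiscountProvider
    {
        decimal GetDiscountRate(Guid customerId);
    }

    public class OrderCalculator
    {
        private readonly IDiscountProvider _discounts;

        public OrderCalculator(IDiscountProvider discounts)
        {
            _discounts = discounts;
        }

        public decimal CalculateTotal(decimal subtotal, Guid customerId)
        {
            return subtotal * (1 - _discounts.GetDiscountRate(customerId));
        }
    }

    [TestFixture]
    public class OrderCalculatorTests
    {
        // Hand-rolled fake: no database, no web service, only one reason to fail.
        private class FakeDiscountProvider : IDiscountProvider
        {
            public decimal GetDiscountRate(Guid customerId) { return 0.10m; }
        }

        [Test]
        public void CalculateTotal_AppliesCustomerDiscount()
        {
            OrderCalculator calculator = new OrderCalculator(new FakeDiscountProvider());

            decimal total = calculator.CalculateTotal(100m, Guid.NewGuid());

            Assert.AreEqual(90m, total);
        }
    }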

    Purpose

    Unit tests should test only one thing and have only one reason to fail. In a future post, I'll dive more into unit testing with examples. Since programmers are human and make mistakes on a daily basis, unit tests are a developer's back up to verify small, incremental and individual pieces of functionality are implemented the way the developer intended. Unit tests strive to answer the question "does my code do what I think it's supposed to do?"

    Integration tests

    I've written a suite of unit tests that verify the functionality I was trying to create. I've abstracted external dependencies away so I can make sure that my tests have only one reason to fail. But in production, the web service does exist. The database is there. Emails are sent out, queue messages sent, files created and read. Integration tests put the different pieces together to test on a macro scale. Integration tests can be defined as:

    • Tests for dependencies between objects and external resources
    • Repeatable
    • Order independent
    • Written by an independent source (QA)

    In other words, integration tests are more end-to-end, verifying that the pieces of the system function correctly together.

    Scope

    As you can see, integration tests are more on the "macro" scope of testing. These tests can be written in a unit testing framework, but are also written in other frameworks such as FIT or WATIR. These are the tests to make sure that the database is written to correctly, that an email was sent, and a MSMQ message was written correctly.

    Purpose

    In the real world, we have external dependencies and we should test them. Integration tests verify that the code functions correctly in the presence of these external dependencies. We can test if a single module works with the external dependencies present, or if a group of modules works together correctly. Either way, integration tests are about putting the pieces together and determining if they fit.
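    Here's a sketch of what such a test might look like, again in NUnit.  OrderRepository and Order are hypothetical types that really do talk to the database, and the connection string comes from real configuration:

    using System;
    using System.Configuration;
    using NUnit.Framework;

    [TestFixture]
    public class OrderRepositoryIntegrationTests
    {
        [Test]
        public void SavedOrderCanBeReadBack()
        {
            // Real connection string, real database: if the database is down,
            // this test fails, and at this level that's exactly what we want to know.
            string connString = ConfigurationManager.AppSettings["DBConnString"];
            OrderRepository repository = new OrderRepository(connString);

            Order order = new Order();
            order.CustomerName = "Integration Test";

            Guid orderId = repository.Save(order);
            Order loaded = repository.GetById(orderId);

            Assert.AreEqual("Integration Test", loaded.CustomerName);
        }
    }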

    Functional tests

    We've developed some software and have tests that isolate dependencies and embrace them. But we're still not guaranteed that we've written the software that business has demanded. Having unit and integration tests is fine and dandy, but if our software doesn't meet business requirements, it's just been a huge waste of time and money. Functional tests are:

    • Developed by, or with the direct assistance of, the business owners
    • Focused on verifying business requirements
    • Written in a very high-level language such as a DSL

    Scope

    Functional tests are as end-to-end as tests come. These tests are performed on the topmost layers of an n-tier architecture, as these are the layers seen by the business or the customers. These tests are often the slowest, and can be a mix of automated and manual testing. These tests can also be used to sign off on project completion.

    Purpose

    Business has a set of requirements the software must meet, and functional tests are the way of verifying our software meets those requirements. Since requirements come in many forms, functional tests can be any kind of test that verifies those requirements, from "Does the website look good?" to "When I check out, are the terms and conditions displayed?"

    Looking back, looking forward

    I'm sure there are more types of testing we could look at, including user acceptance or user interface testing. These testing types go outside the scope of verification and validation, so I left them out of this post.

    Automating your tests will give you insight into information such as code coverage and other code metrics, especially if they are part of a nightly or continuous integration build. As the scope of tests grows from unit to functional, the number of bugs discovered should decrease as well. It's much easier to fix a bug in a unit test than in a functional test because of the scope of each type. If I have fewer places to look, I've cut the time to find and fix the bug dramatically. Since it isn't enough just to deliver functionality to the business, but also to create a clean and maintainable codebase, a solid suite of tests is absolutely critical for enabling changes for future business requirements. And if they were automated, that would be really cool too :).

Monday, April 16, 2007

Evils of duplication

    Everyone is guilty of using "Ctrl-C, Ctrl-V" in code.  During development, we may see opportunities to duplicate code a dozen times a day.  We're working on some class Foo that needs some behavior that is already implemented in class Bar.  The quickest way to implement this is Copy, Paste, and call it a day.  But what are the long-term implications of this?  If this is a codebase that must be maintained for months, even years, what will these seemingly innocuous keyboard coding shortcuts do to us?  The answer is not pretty.

    A complex world

    The truth is, coding isn't that hard.  Give me a feature, and on a clean slate system, I could have it done in two weeks.  I code it as fast as I can, and with some solid testing, I can be reasonably sure that you'll get what you asked for.

    Fast forward six months.  Everything is great with the feature I implemented, but I'm asked to make a small change.  Now I'm in trouble.  The small change isn't that difficult, but now I need to figure out how it affects the existing system.  I'm in luck though: it looks like there's already something similar in the codebase, and a quick Copy-Paste later (with some minor tweaks, of course), the changes are done.  I get tired of making these changes for these ungrateful customers, so I decide to leave for greener pastures and cleaner slates.

    Now the customer wants another change, but this time it affects both features.  I'm not around, and I'm not answering emails, so it's up to Joe to fix this problem.  Not being familiar with the codebase, Joe resorts to searching for changes he must make.  He finds the two features (a couple of days later) and makes the change in both places.  I think you can see where this is going.  Any change that affects the two features will have to be made in both places, and it will be up to the developer to remember or be lucky enough to find the two places where the changes are made.  Instead of taking a couple of days to make a change, it takes a month, because Joe has to make darn sure that this code wasn't copied and pasted a dozen other times.

    Conditions deteriorate

    If Copy Paste is rampant in a codebase, maintainability vanishes.  We'd spend 95% of our time just figuring out what the code is doing, where the changes should be made, and how many other places need the same fix because the logic is duplicated, rather than actually coding the changes.  All because I decided to take a shortcut and Copy Paste.  Our estimates need to go up even though the changes aren't hard, because we have no idea how many places need the same fix.

    Real-world example

    I had a defect recently where I needed to know whether the current web customer was a Business (rather than Individual) customer or not.  I found some logic in a base page class that looked like this:

    protected bool IsBusiness
    {
        get
        {
            if ((StoreContext.Current != null) &&
                (StoreContext.Current.StoreSettings != null) &&
                (StoreContext.Current.StoreSettings.CustomerType != null) &&
                (StoreContext.Current.StoreSettings.CustomerType.Trim().Length > 0))
            {
                if (StoreContext.Current.StoreSettings.CustomerType.ToUpper() == "BUSINESS")
                {
                    return true;
                }
            }
            return false;
        }
    }

    This looks pretty good, but it's on a Page class and I'm working with a UserControl.  I could just copy and paste this code block and be finished.  But what if this logic ever changed?  What if, instead of "BUSINESS", the compare string needed to change to "CORPORATE"?  As a quick sanity check, I looked for other instances of this logic to see if there was a different class I could use.  Bad news.  This same block of code is used over a dozen times in my solution.  That means this string can never change, because the cost of change is too great and it affects too many modules.

    A solution

    What I'd really like is for this logic to be in only one place.  That is, logic should appear once and only once.  Here's a static helper class I would use to solve this problem (with some minor refactorings*):

    public static class SettingsHelper
    {
        public static bool IsBusiness()
        {
            StoreContext context = StoreContext.Current;

            if ((context == null) ||
                (context.StoreSettings == null) ||
                (context.StoreSettings.CustomerType == null))
                return false;

            string customerType = context.StoreSettings.CustomerType.Trim();

            return (string.Compare(customerType, "BUSINESS", StringComparison.OrdinalIgnoreCase) == 0);
        }
    }

    When I eliminate duplication, I increase maintainability and lower the total cost of ownership.  Duplication occurs in many forms, but Copy Paste is the easiest for the developer to eliminate.  Just don't do it.  If you feel the urge to use that "Ctrl-C, Ctrl-V" combination, stop and think of Joe, who has to clean up your mess.  Think about where this behavior should live, and put it there instead.  I'd try to put the behavior as close to the data as possible, ideally on the class that contains the data.  If it requires a helper class, create one.  Just do whatever it takes to prevent Copy Paste duplication, since we pay many times over in maintenance costs for duplicated logic.
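    For instance, if StoreSettings is a class we own, the IsBusiness check arguably belongs right on it.  A sketch, assuming we're free to modify StoreSettings (CustomerType is shown as a plain field only for brevity):

    public class StoreSettings
    {
        public string CustomerType;   // existing data, shown here only for context

        // The "is this a business customer?" rule sits next to the data it reads,
        // so the compare string can change in exactly one place.
        public bool IsBusiness
        {
            get
            {
                if (CustomerType == null)
                    return false;

                return string.Compare(CustomerType.Trim(), "BUSINESS",
                    StringComparison.OrdinalIgnoreCase) == 0;
            }
        }
    }

    Callers then just read StoreContext.Current.StoreSettings.IsBusiness, and only have to worry about whether StoreContext.Current itself is null.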

    Greener pastures

    We have a codebase that's been successful for years.  I'll try to treat it like my home, and clean as I go.  I won't introduce duplication, and I'll do my best to eliminate it where I find it.  Doing so will ensure that this codebase will continue to be successful in the years to come.  I've been that developer Joe and it's not fun.

     

    * Refactorings:

Thursday, April 12, 2007

Example of creating scope with the using statement

    The using statement is widely used for cleaning up resources, such as a database connection or file handle.  To use the using statement, the variable being scoped just needs to implement IDisposable.  There are many times when I've needed to create a scope for behavior, and I've taken advantage of IDisposable to do this.
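    As a quick illustration of the general idea before we get to the real problem, here's a throwaway sketch of a timing scope.  TimedBlock and DoExpensiveWork are hypothetical names; the point is that the thing being "disposed" isn't a resource at all:

    public class TimedBlock : IDisposable
    {
        private readonly string _name;
        private readonly DateTime _start;

        public TimedBlock(string name)
        {
            _name = name;
            _start = DateTime.Now;
        }

        public void Dispose()
        {
            // Runs when the using block ends, however we leave it.
            Console.WriteLine("{0} took {1}", _name, DateTime.Now - _start);
        }
    }

    public void ProcessBatch()
    {
        // The "scope" is simply the body of the using block.
        using (new TimedBlock("ProcessBatch"))
        {
            DoExpensiveWork();   // hypothetical work being timed
        }
    }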

    The problem

    When I'm writing unit tests for a specific module of code, oftentimes I would like to inject a different implementation of certain services at runtime without having to change the original code. For example, I have a class that calls into the ConfigurationManager to read some configuration values:

    public class DatabaseReader
    {
        public DataSet GetOrdersForCustomer(Guid customerId)
        {
            string connString = ConfigurationManager.AppSettings["DBConnString"];
            OracleConnection conn = new OracleConnection(connString);
            
            …
            
            //return dataset
        }
    }

    Well, if I want to test this in a unit test, I won't be able to because I'm coupled to the ConfigurationManager class, which requires a .config file. This file doesn't exist in my test library, so this block will throw an exception.

    Some refactoring

    What if I try to move the configuration logic outside this class into its own implementation? Slap in an interface and a factory, and this is what we get:

    public interface IConfigurationProvider
    {
        string GetValue(string key);
    }

    public class ConfigurationProviderFactory
    {
        public static IConfigurationProvider GetInstance()
        {
            return new SettingsConfigProvider();
        }
        
        private class SettingsConfigProvider : IConfigurationProvider
        {
            public string GetValue(string key)
            {
                return ConfigurationManager.AppSettings[key];
            }
        }
    }

    public class DatabaseReader
    {
        public DataSet GetOrdersForCustomer(Guid customerId)
        {
            IConfigurationProvider configProvider = ConfigurationProviderFactory.GetInstance();
            
            string connString = configProvider.GetValue("DBConnString");
            OracleConnection conn = new OracleConnection(connString);
            
            …
            
            //return dataset
        }
    }

    The GetOrdersForCustomer method now uses an IConfigurationProvider instance to get its configuration. A specific instance is provided by the ConfigurationProviderFactory.

    My broken unit test

    But I'm not much better off. I created a whole layer of indirection just so I could factor out the configuration code. Here's the test code I'm trying to write:

    [Test]
    public void CustomerContainsCorrectOrders()
    {
        IConfigurationProvider fakeProvider = CreateFakeConfigProvider(); // returns a hard-coded implementation
        
        DatabaseReader reader = new DatabaseReader();
        
        Guid customerId = Guid.Empty;
        
        // Now I want to use my fake configuration provider instead of the built-in one
        DataSet ds = reader.GetOrdersForCustomer(customerId);
        
        Assert.AreEqual(1, ds.Tables[0].Rows.Count);
    }

    My ideal unit test

    I'm going to add some functionality to the ConfigurationProviderFactory to be able to return an IConfigurationProvider instance I give it when GetInstance is called. Also, I'd like to encapsulate this behavior in a using block like so:

    [Test]
    public void CustomerContainsCorrectOrders()
    {
        IConfigurationProvider fakeProvider = CreateFakeConfigProvider(); // returns a hard-coded implementation
        
        DatabaseReader reader = new DatabaseReader();
        
        Guid customerId = Guid.Empty;
        
        DataSet ds;
        
        using (ConfigurationProviderFactory.CreateLocalInstance(fakeProvider)) // use the fakeProvider instance inside this block
        {
            ds = reader.GetOrdersForCustomer(customerId);
        }
        
        Assert.AreEqual(1, ds.Tables[0].Rows.Count);
    }

    The solution

    So this code implies that CreateLocalInstance needs to return an IDisposable instance. I don't have one yet, so let's create one. What I'd like to happen is for CreateLocalInstance to save the IConfigurationProvider passed in as a local variable, and have the GetInstance return that instead. Once I'm outside the using block, GetInstance should revert back to normal. I need something that will allow custom code to be run at the end of the using block in the Dispose method to accomplish this. Here's what I came up with, borrowed from ideas from Ayende:

    public delegate void DisposableActionCallback();

    public class DisposableAction : IDisposable
    {
        private readonly DisposableActionCallback _callback;

        public DisposableAction(DisposableActionCallback callback)
        {
            _callback = callback;
        }

        public void Dispose()
        {
            _callback();
        }
    }

    What DisposableAction allows me to do is to pass custom code via a callback method to be called whenever the DisposableAction goes out of scope. That will be what we return from our CreateLocalInstance method:

    public class ConfigurationProviderFactory
    {
        private static IConfigurationProvider _localProvider = null;
        
        public static IConfigurationProvider GetInstance()
        {
            if (_localProvider != null)
                return _localProvider;
                
            return new SettingsConfigProvider();
        }
        
        public static IDisposable CreateLocalInstance(IConfigurationProvider localProvider)
        {
            _localProvider = localProvider;
            return new DisposableAction(delegate
            {
                _localProvider = null;
            });
        }
        
        ...
    }

    I actually added two things in this code block. First, I created the CreateLocalInstance method that uses a DisposableAction. It sets the private _localProvider to what was passed in. Next, it returns a DisposableAction with an anonymous method callback to set the _localProvider back to null.

    The other change I made was modifying the GetInstance method to check the _localProvider field and return it when it's set. The effect is that inside a using block created by CreateLocalInstance, GetInstance returns the local IConfigurationProvider instead of the default one.

    Closing comments

    I was able to change the configuration code inside GetOrdersForCustomer to use the IConfigurationProvider abstraction, and I created a factory that lets me inject custom IConfigurationProvider implementations without affecting any production code. And I was able to take advantage of IDisposable and the using statement to do it. Obviously, I'll need additional tests to confirm existing functionality isn't broken.

    But my test passes, and in the meantime I've opened the DatabaseReader class up to other IConfigurationProvider implementations, like one that pulls from the database or from SiteInfo.
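    As a final illustration, a database-backed provider might look something like the sketch below.  The table and column names are made up, but nothing in DatabaseReader would have to change; the factory (or a configuration switch inside it) just needs to hand this implementation back from GetInstance:

    using System.Data.OracleClient;

    public class DatabaseConfigProvider : IConfigurationProvider
    {
        private readonly string _connectionString;

        public DatabaseConfigProvider(string connectionString)
        {
            _connectionString = connectionString;
        }

        public string GetValue(string key)
        {
            // Hypothetical Settings table with SettingKey/SettingValue columns.
            using (OracleConnection conn = new OracleConnection(_connectionString))
            using (OracleCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT SettingValue FROM Settings WHERE SettingKey = :key";
                cmd.Parameters.Add(new OracleParameter("key", key));
                conn.Open();
                object result = cmd.ExecuteScalar();
                return (result == null || result == DBNull.Value) ? null : result.ToString();
            }
        }
    }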