Customer-extensible configuration

While playing around with custom resources I wondered what ways there are to configure which IResourceManager is used for those generated classes.

As I mentioned before I’m not particularly fond of XML for configuration purposes. But the *.config files are still the most commonly used means to configure a .NET application.

My goal was to allow a developer to configure which of a set of predefined resource managers to use (an obvious choice would be the file based approach with the .NET ResourceManager and maybe a database based implementation) while allowing him to add his own implementations later on. That kind of extensibility calls for an abstract factory approach.

The config file should look something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="i18n" type="Playground.I18nSettingsSection, Playground" />
  </configSections>
  <connectionStrings>
    <clear />
    <add name="DEFAULT" connectionString="localhost"/>
  </connectionStrings>
  <i18n>
    <resxManager>
      <db connectionStringName="DEFAULT" />
    </resxManager>
  </i18n>
</configuration>

File 1: What I wanted my App.config to look like

The code behind that solution should be quite simple. Along the lines of:

public class I18nSettingsSection : ConfigurationSection
{
  [ConfigurationProperty("resxManager")]
  public ResourceManagerSettings ResourceManager
  {
    get { return (ResourceManagerSettings) base["resxManager"]; }

    set { base["resxManager"] = value; }
  }
}

public abstract class ResourceManagerSettings : ConfigurationElement
{
  public abstract IResourceManager GetResourceManager(Type resourceFileType);
}

File 2: What I thought might be a good idea for the code behind

When was anything ever that easy? The whole thing blew up in my face. The underlying problem: the .NET configuration system cannot create an instance of an abstract class (the ResourceManagerSettings), and you can neither get at the code where the instantiation happens (it’s a private method somewhere inside ConfigurationElement) nor handle it by overriding OnDeserializeUnrecognizedElement in your section class. The element is not truly unrecognized if you decorate the ResourceManager property with the ConfigurationPropertyAttribute, so that method would never be triggered. You can remove the attribute, but then where do you store the created object? You can’t call base["resxManager"] anymore because there is no longer a ConfigurationProperty with that name. And what if you had more than one “unrecognized element” in that section? You couldn’t assume that you were supposed to create a concrete implementation of your abstract ResourceManagerSettings, which would make figuring out which class to instantiate quite a bit harder (remember that you want the developer to be able to extend the solution later).

So I had to figure out another approach. What eventually worked was moving the whole concept up one level in the hierarchy of the config system. Where I formerly used a ConfigurationSection I had to use a ConfigurationSectionGroup as base class. And the ConfigurationElement became a ConfigurationSection.

What I ended up with was an App.config that looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <sectionGroup name="i18n" type="Playground.I18nSettingsSectionGroup, Playground">
      <!--<section name="file" type="Playground.FileResourceManagerSettings, Playground"/>-->
      <!--<section name="null" type="Playground.NullResourceManagerSettings, Playground"/>-->
      <section name="db" type="Playground.DbResourceManagerSettings, Playground"/>
    </sectionGroup>
  </configSections>
  <connectionStrings>
    <clear />
    <add name="DEFAULT" connectionString="localhost"/>
  </connectionStrings>
  <i18n>
    <db connectionStringName="DEFAULT" />
  </i18n>
</configuration>

File 3: Actual App.config

And the code behind to match the .config file:

public class I18nSettingsSectionGroup : ConfigurationSectionGroup
{
  public const string NAME = "i18n";

  private ResourceManagerSettings _ResourceManagerSettings;

  public ResourceManagerSettings ResourceManager
  {
    get
    {
      if (this._ResourceManagerSettings != null)
      {
        return this._ResourceManagerSettings;
      }

      if ((this._ResourceManagerSettings = this.Sections.OfType<ResourceManagerSettings>().SingleOrDefault()) == null)
      {
        this._ResourceManagerSettings = new FileResourceManagerSettings();
        this.Sections.Add(FileResourceManagerSettings.NAME, this._ResourceManagerSettings);
      }

      return this._ResourceManagerSettings;
    }

    set
    {
      ResourceManagerSettings resourceManagerSettings = this.Sections.OfType<ResourceManagerSettings>().SingleOrDefault();

      if (resourceManagerSettings != null)
      {
        if (Equals(resourceManagerSettings, value))
        {
          // already registered; adding it again would throw
          return;
        }

        this.Sections.Remove(resourceManagerSettings.SectionInformation.SectionName);
      }

      this.Sections.Add(value.SectionInformation.Name, value);
    }
  }
}

public abstract class ResourceManagerSettings : ConfigurationSection
{
  public abstract IResourceManager GetResourceManager(Type resourceFileType);
}

public class FileResourceManagerSettings : ResourceManagerSettings
{
  public const string NAME = "file";

  public override IResourceManager GetResourceManager(Type resourceFileType)
  {
    return new ResourceManagerWrapper(new ResourceManager(resourceFileType.FullName, resourceFileType.Assembly));
  }
}

File 4: Actual implementation
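
The db variant referenced in the App.config above is not shown. A sketch of what it could look like (DbResourceManager is a hypothetical IResourceManager implementation that reads localized strings from a database):

public class DbResourceManagerSettings : ResourceManagerSettings
{
  public const string NAME = "db";

  [ConfigurationProperty("connectionStringName", IsRequired = true)]
  public string ConnectionStringName
  {
    get { return (string)base["connectionStringName"]; }

    set { base["connectionStringName"] = value; }
  }

  public override IResourceManager GetResourceManager(Type resourceFileType)
  {
    // Hypothetical: loads the localized strings from the database behind
    // the configured connection string.
    return new DbResourceManager(this.ConnectionStringName, resourceFileType);
  }
}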

In the <configSections> part of the .config file I configure the I18nSettingsSectionGroup with at most one (!) implementation of the ResourceManagerSettings. If you add multiple implementations, the .NET configuration system will instantiate all of them when you read the file, and then you would have to figure out which one you actually wanted. So (by design!) the code above will fail if you configure more than one ResourceManagerSettings section.

If you don’t configure anything, the I18nSettingsSectionGroup will fall back to the FileResourceManagerSettings. I believe that’s an acceptable default.

When you set the I18nSettingsSectionGroup.ResourceManager property at run-time and save the configuration back to disk, the correct section type will be persisted in the <configSections> part of your .config file.
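
For completeness, this is roughly how the section group is consumed (a sketch; Labels stands in for one of the generated resource classes):

Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
var i18n = (I18nSettingsSectionGroup)config.GetSectionGroup(I18nSettingsSectionGroup.NAME);

// Resolves to the configured section or falls back to the file based default.
IResourceManager resourceManager = i18n.ResourceManager.GetResourceManager(typeof(Labels));

// Swapping the implementation at run-time and saving persists the matching
// <section> entry in <configSections>.
i18n.ResourceManager = new FileResourceManagerSettings();
config.Save(ConfigurationSaveMode.Modified);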

So what I got is not exactly what I wanted. If the .NET Framework weren’t as uptight about object creation (the configuration system is by no means the only part of the framework that behaves like that!) the whole exercise would have been a lot easier. In MS’ defense: judging from the comments in the ConfigurationElement code, there seem to have been security considerations behind it. Still, they could at least have provided a way to influence what type of object gets created if they couldn’t or didn’t want to let you handle the instantiation itself.

Anyway. You now have a means to configure where your resources are loaded from, and if you want to add another source (like RavenDB for example) you can do so with little effort. If you either mark your resource classes with an interface or a custom attribute, or decide on a naming convention, it is really easy to brew up a little reflection code that sets the static ResourceManager property at application start-up.
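
A minimal sketch of such start-up code, assuming the resource classes are marked with a hypothetical LocalizedResourcesAttribute:

public static class ResourceManagerBootstrapper
{
  // Sets the static ResourceManager property on every type marked with the
  // (hypothetical) LocalizedResourcesAttribute. Call once at start-up.
  public static void Wire(IResourceManager resourceManager)
  {
    var resourceClasses = AppDomain.CurrentDomain.GetAssemblies()
      .SelectMany(assembly => assembly.GetTypes())
      .Where(type => type.IsDefined(typeof(LocalizedResourcesAttribute), false));

    foreach (Type type in resourceClasses)
    {
      PropertyInfo property = type.GetProperty("ResourceManager", BindingFlags.Public | BindingFlags.Static);

      if (property != null && property.CanWrite)
      {
        property.SetValue(null, resourceManager, null);
      }
    }
  }
}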


Item templates and custom resources

In a previous post I wrote about Custom resources with T4 templates. Back then I used the ReSharper multi-file templates to add the .resx and .tt files to my projects. The one downside of that approach: it requires R# 8.x which might not be readily available.

Thus I wanted to find out how to create a Visual Studio item template that achieves the same goal. Microsoft has a step-by-step guide that explains the process. That gives you the basics. But there is some fine-tuning that they don’t explain.

If you add a .resx file to your project the generated .Designer.cs file is hidden in the solution explorer. I wanted to do the same and hide the .tt file underneath the .resx file. Preferably I wanted both the .tt and the generated .Designer.cs file on the level directly below the .resx file (as shown in the first screenshot). But the TextTemplatingFileGenerator puts the generated output file on the level below the .tt file (as in the second screenshot) on every run. I decided to stop battling VS. It’s not worth the effort in this case.

Screenshot 1: Nice to have

Screenshot 2: What you actually get

Actually hiding the .tt file via the item template is quite easy, if not really prominently advertised. There’s a post on Stack Overflow that explains how you need to modify your template.

The second <ProjectItem> is the interesting part. You need to prefix the .tt file with the path to the .resx file. When the item template is invoked that will translate to a <DependentUpon> clause in your project file.

<VSTemplate Version="3.0.0" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" Type="Item">
  <TemplateData>
    <DefaultName>Resource.resx</DefaultName>
    <Name>Customizable resources</Name>
    <Description>Creates a .resx file that uses a T4 template to generate strongly typed resources.</Description>
    <ProjectType>CSharp</ProjectType>
    <SortOrder>10</SortOrder>
    <Icon>__TemplateIcon.ico</Icon>
  </TemplateData>
  <TemplateContent>
    <References>
      <Reference>
        <Assembly>System</Assembly>
      </Reference>
      <Reference>
        <Assembly>mscorlib</Assembly>
      </Reference>
    </References>
    <ProjectItem SubType="" TargetFileName="$fileinputname$.resx" ReplaceParameters="true">Resource.resx</ProjectItem>
    <ProjectItem SubType="" TargetFileName="$fileinputname$.resx\$fileinputname$.tt" ReplaceParameters="true">Resource.tt</ProjectItem>
  </TemplateContent>
</VSTemplate>

File 1: MyTemplate.vstemplate
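
After the template has been invoked, the relevant part of the project file looks roughly like this (file names assumed):

<ItemGroup>
  <None Include="Resource.tt">
    <Generator>TextTemplatingFileGenerator</Generator>
    <DependentUpon>Resource.resx</DependentUpon>
    <LastGenOutput>Resource.Designer.cs</LastGenOutput>
  </None>
</ItemGroup>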

Also make sure that you set ReplaceParameters="true" for the .resx file. You will want to add template parameters for the namespace and the class name.

For that I used two of the predefined parameters: $rootnamespace$ and $safeitemname$. Note that the first one gives you the namespace corresponding to the folder of your .resx file and not the root namespace of the current project! If you place the resources in project Foo in the folder Assets, the value of $rootnamespace$ will thus be Foo.Assets. Maybe someone at MS thought that’s a funny way to lead developers on a wild goose chase …

And this is what the .tt file looks like:

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.34003
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
<#@ template hostspecific="true" language="C#" #>
<#@ output extension=".Designer.cs" #>
<#@ assembly name="System.Xml" #>
<#@ assembly name="System.Xml.Linq" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Xml.Linq" #>

namespace $rootnamespace$
{
    using System;
    using System.ComponentModel;
    using System.Globalization;
    using System.Resources;
    using System.Threading;
    using MyI18n;

    public class $safeitemname$
    {
        private static IResourceManager resourceManager;
        private static CultureInfo resourceCulture;

        [EditorBrowsableAttribute(EditorBrowsableState.Advanced)]
        public static IResourceManager ResourceManager
        {
            get
            {
                if(resourceManager == null)
                {
                    IResourceManager temp = new ResourceManagerWrapper(new ResourceManager("$rootnamespace$.$safeitemname$", typeof($safeitemname$).Assembly));
                    resourceManager = temp;
                }

                return resourceManager;
            }

            set
            {
                resourceManager = value;
            }
        }

        [EditorBrowsableAttribute(EditorBrowsableState.Advanced)]
        public static CultureInfo Culture
        {
            get
            {
                return resourceCulture;
            }

            set
            {
                resourceCulture = value;
            }
        }
<#
    string resxFileName = this.Host.TemplateFile.Replace(".tt", ".resx");
    XDocument doc = XDocument.Load(resxFileName);

    if(doc != null && doc.Root != null)
    {
        foreach(XElement x in doc.Root.Descendants("data"))
        {
            string name = x.Attribute("name").Value;
            WriteLine(string.Empty);
            WriteLine("        public static string " + name);
            WriteLine("        {");
            WriteLine("            get { return $safeitemname$.ResourceManager.GetString(\"" + name + "\", resourceCulture ?? CultureInfo.CurrentUICulture); }");
            WriteLine("        }");
        }
    }
#>
    }
}

File 2: Resource.tt

Don’t forget to adjust your using statements to the location of the IResourceManager interface and the ResourceManagerWrapper class.

Your .zip file should look something like this:

Screenshot 3: Contents of the .zip file

Now copy it to "C:\Users\[YOUR USERNAME]\Documents\Visual Studio [YOUR VISUAL STUDIO VERSION]\Templates\ItemTemplates\Visual C#" and fire up VS. Once you click “Add new item” your new template should appear right on top of the “Visual C# Items”.

Screenshot 4: Add new item

You can download the template from my pet project’s site.

TypedFactory reloaded

Another piece of code that I wrote some time ago came back to my attention recently and I decided to give it the same treatment as the SmartConstructor and do some cleanup on the TypedFactory port I did for Unity.

One thing I really disliked at the time was that the Unity Interception Extension cannot generate a “proxy without target” out of the box. If you want to intercept calls to an object you actually need to have that object. A pipeline with no target isn’t possible with Unity.

As I had experimented with IL code generation before I took the long way round and built a NullObject generator. Not pretty. But it worked.

Another issue was the need to fumble an instance of the container into the interception pipeline by putting it in a dictionary, retrieving it further down the pipeline, casting it back to a container and then working with it. For the life of me I couldn’t find an architecturally clean way to solve that issue back then. But the ugly one worked.

If you can’t see the forest for the trees you need to take a step back. A year later I took a look at a post that I had read back then and all of a sudden the pieces fell into place.

As an answer to a question in the Unity discussion forum Chris Tavares solved both of my problems. I just didn’t get it the first time.

The code snippet to generate a proxy without a target (or more precisely an interception pipeline without a target) is really simple.

Intercept.NewInstanceWithAdditionalInterfaces(
  Type type,
  ITypeInterceptor interceptor,
  IEnumerable<IInterceptionBehavior> interceptionBehaviors,
  IEnumerable<Type> additionalInterfaces,
  params object[] constructorParameters)


This method gives us all that we need. The type parameter lets you specify the type of the target object. As we don’t actually need a target, typeof(object) is good enough. The VirtualMethodInterceptor from Chris’ answer is what we will use as the interceptor. The factory interface we want to implement goes into the additionalInterfaces. As our target is a plain object we don’t need to bother with constructorParameters.

The actual factory logic is implemented as an IInterceptionBehavior called FactoryBehavior (I’m all for self-explanatory names by the way). But that behavior still needs an instance of the container which is just not available at the place this is all wired up.

Again, Chris’ post gave me a solution that I just didn’t recognize.

Anyway, to hook it up to the container, the quickest thing to do would be to use InjectionFactory to call the Intercept.NewInstance API.

InjectionFactory is derived from InjectionMember (as is TypedFactory). InjectionMembers are Unity’s way to make very specific additions to the build pipeline of a type. An InjectionFactory tells the container how to construct an object instead of letting the container figure that out itself.

InjectionFactory has two constructors that each take a delegate that lets you specify how to create your object of interest.

I’ll just show you the signature of the greedier one of the two.

public InjectionFactory(Func<IUnityContainer, Type, string, object> factoryFunc)

The first parameter is a container straight out of the depths of the Unity infrastructure. And it’s totally free. Below you see the new implementation of the AddPolicies(Type, Type, string, IPolicyList) method of the TypedFactory.

public override void AddPolicies(Type ignore, Type factoryType, string name, IPolicyList policies)
{
  if (factoryType.IsInterface)
  {
    InjectionFactory injectionFactory = new InjectionFactory(
      (container, t, n) => Intercept.NewInstanceWithAdditionalInterfaces(
        typeof(object),
        new VirtualMethodInterceptor(),
        new IInterceptionBehavior[] { new FactoryBehavior(container, this.selector) },
        new[] { factoryType }));

    injectionFactory.AddPolicies(ignore, factoryType, name, policies);
  }
  else if (factoryType.IsAbstract)
  {
    InjectionFactory injectionFactory = new InjectionFactory(
      (container, t, n) => Intercept.NewInstance(
        factoryType,
        new VirtualMethodInterceptor(),
        new IInterceptionBehavior[] { new FactoryBehavior(container, this.selector) }));

    injectionFactory.AddPolicies(ignore, factoryType, name, policies);
  }
  else
  {
    throw new ArgumentException("'factoryType' must either be an interface or an abstract class.", "factoryType");
  }
}

It supports interfaces and abstract classes as factoryType. Based on that distinction we either use the Intercept.NewInstanceWithAdditionalInterfaces or Intercept.NewInstance method to create our factory implementation. The container provided by the InjectionFactory gets forwarded into our FactoryBehavior and we are done. The rest of the implementation stays the same.
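
Registration then boils down to handing the TypedFactory to the container as an InjectionMember. A sketch (IOrderFactory and its CreateOrder method are hypothetical, and the TypedFactory constructor may take additional arguments such as a selector in the actual code):

var container = new UnityContainer();

// TypedFactory is an InjectionMember; Unity uses the interception pipeline
// configured in AddPolicies to build the factory implementation.
container.RegisterType<IOrderFactory>(new TypedFactory());

IOrderFactory factory = container.Resolve<IOrderFactory>();
Order order = factory.CreateOrder();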

Now this is more like it. We reuse large parts of the Unity infrastructure instead of reinventing the wheel and quite a bit of code becomes obsolete. As usual you can find the code for the updated TypedFactory on my CodePlex site (project TecX.Unity[Factories] and the tests that show how it works under TecX.Unity.Test[Factories]).

Please not another NullReference

I’m just sitting in front of a piece of code. Only about 30 lines long it contains more than 10 checks for null values. It is barely readable even after I combined some of these checks into methods with meaningful names. I don’t dare to think about what any complexity metric would tell me about it 😦

I know that inventing the Null Reference was one of the biggest mistakes in the history of computer science. But just because I know that doesn’t help me much when dealing with the aftermath of that mistake on a daily basis.

There are a few things that help though.

Never return null!

Your methods should never ever in any case return null! Ever! I mean it!

For strings there is string.Empty.

For enumerations there is Enumerable.Empty<T>() or an empty array (new T[0]). If you use lists or collections (preferably represented by their respective interfaces) there is the (newly created) empty list.

For infrastructure components there is always some implementation of the NullObject Pattern (think about a NullLogger that does not write anything anywhere).
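
A minimal sketch of such a NullObject, assuming some ILogger interface:

public interface ILogger
{
  void Log(string message);
}

public class NullLogger : ILogger
{
  // Intentionally does nothing. Callers can log unconditionally without
  // ever having to check for null.
  public void Log(string message)
  {
  }
}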

For data objects you can define an Empty property that contains an object that is valid in the sense of “I’m valid by the rules of your programming language” but that is defined as invalid by your business rules (think in terms of Guid.Empty). Make sure to enforce the use of these properties. In addition to anything else it will make your code more readable.

For numeric IDs you should define a lower threshold for what is valid and what not. Usually IDs that are less than or equal to zero are not valid. They are just the default that .NET assigns to numeric types. Make that decision explicit! That way it is safe to check for invalid IDs anywhere you use them.

Oh and while I’m at it: Don’t throw NullReferenceExceptions when validating the parameters of a method. Use Guard classes that throw ArgumentNullExceptions or Code Contracts that throw their own special type of exception. NullReferenceExceptions are system exceptions and should stay that way.
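
A minimal sketch of such a Guard class (matching the Guard.AssertNotNull and Guard.AssertNotEmpty calls you will see in snippets elsewhere on this blog):

public static class Guard
{
  public static void AssertNotNull(object value, string paramName)
  {
    if (value == null)
    {
      throw new ArgumentNullException(paramName);
    }
  }

  public static void AssertNotEmpty(string value, string paramName)
  {
    if (string.IsNullOrEmpty(value))
    {
      throw new ArgumentException("Value must not be null or empty.", paramName);
    }
  }
}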

Custom resources with T4

Resource files are old hat in .NET. They have been around since v1.1 and are one popular way to localize strings in an application.

Visual Studio provides two built-in tools (namely PublicResXFileCodeGenerator and ResXFileCodeGenerator) that generate code from the *.resx files. They generate classes with different visibility (public vs. internal) but the result is otherwise the same. You get a property to get and set the culture of the resource (to override the usage of the current UI culture), a hardwired instance of the framework’s ResourceManager and a bunch of static properties that forward the call to their getter to the ResourceManager. Nothing truly spectacular so far.

The ResourceManager takes care of the hassle of getting the correct version of your resources out of the files, and the static properties allow for very convenient usage throughout your application. Alas, life could be so easy if customers were satisfied with that scenario. But they want to be able to customize their application (preferably without the involvement of a developer) to fit the corporate identity, use established vocabulary and so on.

Hard-wired static classes just don’t fit those customer requirements. But I believe they would be a very good first step if you could just tweak them a bit.

Unfortunately you cannot do anything to make those classes any more flexible. They are not partial. The creation of and the access to the ResourceManager is out of your reach. You get what the tools give you. Nothing more and nothing less. As I did not want to go through the painful process of writing a custom extension for VS I looked for a different approach. And found the Text Template Transformation Toolkit (T4).

I even found a tutorial on CodeProject that explains how to do what I wanted to achieve. I just wanted my result to look a little different.

What were my requirements?

  • Static properties for convenient access to the localized resources
  • Allow replacing the ResourceManager by editing the template
  • Allow replacing the ResourceManager at runtime via a settable property
  • Avoid the complexity of overriding the virtual methods of ResourceManager

Let’s start with the last requirement. The ResourceManager class has way too much functionality for my liking, even if most of that functionality seems to be accessible to derived types via virtual methods. So I decided to create a very simple interface that contains the only method the generated resource classes actually use.

public interface IResourceManager
{
  string GetString(string name, CultureInfo culture);
}

An equally simple adapter from interface to the framework class allows me to use the default ResourceManager.

public class ResourceManagerWrapper : IResourceManager
{
  private readonly ResourceManager resourceManager;

  public ResourceManagerWrapper(ResourceManager resourceManager)
  {
    this.resourceManager = resourceManager;
  }

  public string GetString(string name, CultureInfo culture)
  {
    return this.resourceManager.GetString(name, culture);
  }
}

I added a setter to the static ResourceManager property and changed its type to IResourceManager. And finally I added a fallback to the default ResourceManager in case someone sets the property to null. The result is functionally identical to the code that VS generates but looks less generated. But that’s a matter of taste. Below is a sample of the generated output.

public class Labels
{
  private static IResourceManager resourceManager;
  private static CultureInfo resourceCulture;

  [EditorBrowsableAttribute(EditorBrowsableState.Advanced)]
  public static IResourceManager ResourceManager
  {
    get
    {
      if(resourceManager == null)
      {
        IResourceManager temp = new ResourceManagerWrapper(new ResourceManager("Main.Assets.Resources.Labels", typeof(Labels).Assembly));
        resourceManager = temp;
      }
      return resourceManager;
    }
    set
    {
      resourceManager = value;
    }
  }
  [EditorBrowsableAttribute(EditorBrowsableState.Advanced)]
  public static CultureInfo Culture
  {
    get
    {
      return resourceCulture;
    }
    set
    {
      resourceCulture = value;
    }
  }
  public static string MyMultiLanguageString
  {
    get { return ResourceManager.GetString("MyMultiLanguageString", resourceCulture ?? CultureInfo.CurrentUICulture); }
  }
}

That was the boring part, I guess. The actual template file is the more interesting one.

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:4.0.30319.34003
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
<#@ template hostspecific="true" language="C#" #>
<#@ output extension=".Designer.cs" #>
<#@ assembly name="EnvDTE" #>
<#@ assembly name="System.IO" #>
<#@ assembly name="System.Xml" #>
<#@ assembly name="System.Xml.Linq" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Xml.Linq" #>

namespace Main.Assets.Resources
{
    using System;
    using System.ComponentModel;
    using System.Globalization;
    using System.Resources;
    using System.Threading;
    using Infrastructure.I18n;

    public class Labels
    {
        private static IResourceManager resourceManager;
        private static CultureInfo resourceCulture;

        [EditorBrowsableAttribute(EditorBrowsableState.Advanced)]
        public static IResourceManager ResourceManager
        {
            get
            {
                if(resourceManager == null)
                {
                    IResourceManager temp = new ResourceManagerWrapper(new ResourceManager("Main.Assets.Resources.Labels", typeof(Labels).Assembly));
                    resourceManager = temp;
                }

                return resourceManager;
            }

            set
            {
                resourceManager = value;
            }
        }

        [EditorBrowsableAttribute(EditorBrowsableState.Advanced)]
        public static CultureInfo Culture
        {
            get
            {
                return resourceCulture;
            }

            set
            {
                resourceCulture = value;
            }
        }
<#
    string resxFileName = this.Host.TemplateFile.Replace(".tt", ".resx");
    XDocument doc = XDocument.Load(resxFileName);

    if(doc != null && doc.Root != null)
    {
        foreach(XElement x in doc.Root.Descendants("data"))
        {
            string name = x.Attribute("name").Value;
            WriteLine(string.Empty);
            WriteLine("        public static string " + name);
            WriteLine("        {");
            WriteLine("            get { return ResourceManager.GetString(\"" + name + "\", resourceCulture ?? CultureInfo.CurrentUICulture); }");
            WriteLine("        }");
        }
    }
#>
    }
}

One thing to notice is that the namespace for the generated file is hardcoded in the template. This has several reasons.

While T4 is an awesome idea, MS managed to mess it up (as usual, you might add). There is a VS implementation of ITextTemplatingEngineHost (which allows you to debug your template from the solution explorer, +1 for that feature!) and one for MSBuild (which does not support debugging as far as I found out). Those implementations are your entry point into the VS API from inside a template. As you might expect, they have just enough subtle differences to screw you up… Getting the correct namespace for a file is just one of them.

Another thing is that I decided to go with one template per *.resx file. You could enhance the template to locate all input files for code generation in your solution, compute the output destinations and so on… I wanted a simple solution, so I did not go that route.

And last but not least: I use ReSharper’s multi-file templates to create new *.resx files. That gives me the resources and the templates in a single step. ReSharper has a macro to insert the default namespace of a project into the template file. I have a convention for the folder structure in which I place resource files, so I can just append that to the namespace and be done with it.

And the really, really last thing I want to mention before I finish this post: you can hook the generation of your templates up to MSBuild (which is how I noticed the differences between the VS and MSBuild template engines). That way you can be sure that your build always uses the latest generated resource code and avoid the error-prone step of manually calling “Run custom tool” after you change a *.resx file. Oleg Sych collected a lot of information related to T4 on his blog.
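
Hooking the transformation into the build boils down to importing the text templating targets in your project file. A sketch (the path is version specific, here for VS 2013):

<PropertyGroup>
  <!-- Transform all .tt files as part of every build. -->
  <TransformOnBuild>true</TransformOnBuild>
  <TransformOutOfDateOnly>false</TransformOutOfDateOnly>
</PropertyGroup>
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v12.0\TextTemplating\Microsoft.TextTemplating.targets" />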

Behavioral Testing

Jimmy Bogard recently gave a presentation at NDC Oslo about a testing strategy he calls Holistic Testing. Have a look at the video, it’s worth your time!

Among a lot of other things he talked about why he doesn’t use mocking frameworks (or hand-crafted mock objects) very often in his tests. According to Jimmy, testing with mocks tends to couple your tests to the implementation details of your code. Instead he prefers to test the behavior of his code and ignore said implementation as far as possible.

While I don’t agree with his sample code I totally agree with his statement. But let’s have a look.

[Theory, AutoData]
public void DummyTest(Customer customer)
{
  var calculator = A.Fake<ITaxCalculator>();
  A.CallTo(() => calculator.Calculate(customer)).Returns(10);
  var factory = new OrderFactory(calculator);
  var order = factory.Build(customer);
  order.Tax.ShouldEqual(10);
}

This code snippet uses xUnit’s Theories for data-driven tests along with AutoFixture’s extension to that feature, which “deterministically creates random test data”. AutoFixture creates the Customer object. Then Jimmy uses FakeItEasy to create a mock for the ITaxCalculator and sets up the return value for calls to its Calculate() method. He passes the calculator to the constructor of the OrderFactory, lets the factory build an Order and asserts that the correct tax is set on the order. Nothing spectacular so far (if you are already familiar with data Theories and AutoFixture, that is…).

But what happens if the factory no longer uses a calculator? Or if we add another parameter to the constructor? After all the constructor is just an implementation detail that we don’t want to care about. But these changes will break our test code. And this is where Jimmy gets fancy.

Assuming that we already build our code according to the Dependency Injection Pattern and use a DI container in our project (that’s quite some assumption…), we already have the information about how the factory is assembled and which calculator (if any) is to be used in the configuration of said container. So why not use the container to provide us with the factory instead of new’ing it up ourselves?

I took the liberty of streamlining Jimmy’s test a bit more by letting AutoFixture create the customer as well. The ContainerDataAttribute is derived from AutoFixture’s AutoDataAttribute. My sample uses Unity instead of StructureMap, but it works all the same. In contradiction to Jimmy’s statement (around 45:46 in the video), Unity does support the creation of child containers. Although I have to admit that this is the first scenario where that feature makes sense to me. But that is a different story I’ll save for another day.

[Theory, ContainerData]
public void DummyTest(OrderFactory factory, Customer customer)
{
  Order order = factory.Build(customer);
  order.Tax.ShouldEqual(10);
}

Wow. That’s what I call straight to the point. You don’t care about the creation process of your system-under-test (the factory) or your test data (the customer). You just care about the behavior which is to set the correct tax rate on your order object.
As the tax rate differs from one state to another it might not make sense to test for that specific value without controlling its calculation to some extent (e.g. via a mock object…) but that’s a common problem with HelloWorld!-samples and does not diminish the brilliancy of the general concept.

So let’s see how the ContainerDataAttribute is implemented.

public class ContainerDataAttribute : AutoDataAttribute
{
  public ContainerDataAttribute()
    : base(new Fixture().Customize(
      new ContainerCustomization(
        new UnityContainer().AddExtension(
          new ContainerConfiguration()))))
  {
  }
}

We derive from AutoFixture’s AutoDataAttribute which does the heavy lifting for us (like integrating with xUnit, creating random test data, …). We call the base class’ constructor handing it a customized Fixture object (that is AutoFixture’s linchpin for creating test data). The customization receives a preconfigured UnityContainer instance as a parameter. As you might have guessed, UnityContainer is the Unity DI container. Unity’s configuration system is not as advanced as that of most other containers. In particular it does not offer a dedicated way to package configuration data (like StructureMap’s Registry for example). But you can (ab)use the UnityContainerExtension class to achieve the same result. Just place your configuration code inside your implementation of the abstract Initialize() method and add the extension to the container.

public class ContainerConfiguration : UnityContainerExtension
{
  protected override void Initialize()
  {
    this.Container.RegisterType<IFoo, Foo>();
    this.Container.RegisterType<ITaxCalculator, DefaultTaxCalculator>();
  }
}

It makes sense to re-use as much of your production configuration as possible. But you should consider to modify it in places where the tests might interfere with actual production systems (like sending emails, modifying production databases etc.).

The customization to AutoFixture hooks up two last-chance handlers for creating objects by adding them to IFixture.ResidueCollectors.

public class ContainerCustomization : ICustomization
{
  private readonly IUnityContainer container;
  public ContainerCustomization(IUnityContainer container)
  {
    this.container = container;
  }
  public void Customize(IFixture fixture)
  {
    fixture.ResidueCollectors.Add(new ChildContainerSpecimenBuilder(this.container));
    fixture.ResidueCollectors.Add(new ContainerSpecimenBuilder(this.container));
  }
}

AutoFixture calls these object creators SpecimenBuilders. The first one we hook up is responsible for creating a child container if a test requires an IUnityContainer as a method parameter. The second actually uses the container to create objects.

public class ChildContainerSpecimenBuilder : ISpecimenBuilder
{
  private readonly IUnityContainer container;
  public ChildContainerSpecimenBuilder(IUnityContainer container)
  {
    this.container = container;
  }
  public object Create(object request, ISpecimenContext context)
  {
    Type type = request as Type;
    if (type == null || type != typeof(IUnityContainer))
    {
      return new NoSpecimen();
    }
    return this.container.CreateChildContainer();
  }
}
public class ContainerSpecimenBuilder : ISpecimenBuilder
{
  private readonly IUnityContainer container;
  public ContainerSpecimenBuilder(IUnityContainer container)
  {
    this.container = container;
  }
  public object Create(object request, ISpecimenContext context)
  {
    Type type = request as Type;
    if (type == null)
    {
      return new NoSpecimen();
    }
    return this.container.Resolve(type);
  }
}

The NoSpecimen class is AutoFixture’s way of telling its kernel that this builder can’t construct an object for the current request. Something like a NullObject.

Well and that’s it. A couple of dozen lines of code. A superior testing framework (xUnit). A convenient way of creating test data (AutoFixture). A DI container (which should be mandatory for any project if you ask me…). And writing maintainable tests for the correct behavior of your code becomes a piece of cake. I love it! 🙂

You can find the sample code on my playground on CodePlex (solution TecX.Playground, project TecX.BehavioralTesting).

Book Discount Kata

Long time no see. About two months without anything interesting (related to dev topics at least) happening.

Recently I had a look at some of the katas at Coding Dojo. Quite interesting stuff. Today I want to present my shot at the Harry Potter book discount kata.

First things first: I used xUnit and especially xUnit’s data theories for TDD’ing my solution. I took a leaf out of the kata’s book and used simple integer arrays to represent the shopping basket. I started with the simplest possible implementation: one class (DiscountCalculator) with a single method (Calculate(int[])). But well… that didn’t get me too far. Basically it solved the problem up to “we have two different books, how much is that?” before I decided that I hated the resulting code.

So I leaned back and thought about the problem a bit more. What I needed was a way to find subsets of the books inside the shopping basket that would maximize the discount. Some kind of partitioning algorithm. After a little back and forth I chose to implement that algorithm as a combination of three simple steps:

  1. If the basket is empty, you are done and no more partitioning is needed.
  2. If you have 8 books left in the basket and you can form two partitions with 4 distinct books each, you should prefer that to a 5/3 partition.
  3. Take as many distinct books as possible from the basket.

This is the code for those three steps:

public class EmptyBasket : IBasketPartitioner
{
  public void Partition(PartitioningContext context)
  {
    if (context.Basket.Length == 0)
    {
      context.Finished = true;
    }
  }
}
public class Prefer44To53 : IBasketPartitioner
{
  public void Partition(PartitioningContext context)
  {
    if (context.Basket.Length == 8)
    {
      List<int> basketCopy = new List<int>(context.Basket);
      int[] part1 = context.Basket.Distinct().Take(4).ToArray();
      if (part1.Length == 4)
      {
        foreach (int book in part1)
        {
          basketCopy.Remove(book);
        }
        int[] part2 = basketCopy.Distinct().ToArray();
        if (part2.Length == 4)
        {
          context.MakePartition(part1);
          context.MakePartition(part2);
        }
      }
    }
  }
}
public class GreedyGrabDistinctBooks : IBasketPartitioner
{
  public void Partition(PartitioningContext context)
  {
    int[] differentBooks = context.Basket.Distinct().ToArray();
    if (differentBooks.Length > 0)
    {
      context.MakePartition(differentBooks);
    }
  }
}

Admittedly the implementation of the 4/4 rule could use some polishing. But it works for now.

To host those steps I used a variation of the chain-of-responsibility pattern. This chain would loop through the different steps. Each step would take some of the books from the basket and put them in a list of partitions until there are no books left. The order of the steps is important! To achieve the desired outcome of the “prefer 4/4 partition to 5/3 partition” rule you need to take those books from the basket before the greedy “take as many distinct books as possible” rule applies. I chose to remove both 4/4 chunks from the basket in step 2 to reduce the overhead of the calls to Distinct().

public class PartitionerChain
{
  private readonly List<IBasketPartitioner> partitioners;
  public PartitionerChain(params IBasketPartitioner[] partitioners)
  {
    this.partitioners = new List<IBasketPartitioner>(partitioners);
  }
  public IEnumerable<int[]> GetPartitions(int[] originalBasket)
  {
    var context = new PartitioningContext(originalBasket);
    int index = 0;
    do
    {
      this.partitioners[index].Partition(context);
      index = (index + 1) % this.partitioners.Count;
    }
    while (!context.Finished && context.Basket.Length > 0);
    return context.Partitions;
  }
}

The chain hands a context from step to step which contains the current content of the shopping basket, a list of partitions and a flag that indicates when the partitioning process is finished.

public class PartitioningContext
{
  private readonly List<int[]> basketPartitions;
  public PartitioningContext(int[] originalBasket)
  {
    this.Basket = originalBasket;
    this.basketPartitions = new List<int[]>();
  }
  public int[] Basket { get; private set; }
  public bool Finished { get; set; }
  public IEnumerable<int[]> Partitions { get { return this.basketPartitions; } }
  public void MakePartition(int[] partition)
  {
    this.basketPartitions.Add(partition);
    List<int> newBasket = new List<int>(this.Basket);
    foreach (int book in partition)
    {
      newBasket.Remove(book);
    }
    this.Basket = newBasket.ToArray();
  }
}

To calculate the actual price of the books I switched from the switch-case solution to (yet again) a chain-of-responsibility based one.

public abstract class DiscountStrategy
{
  public DiscountStrategy Next { get; protected set; }
  public abstract double GetPrice(int[] basket);
}
public class NoDiscount : DiscountStrategy
{
  public override double GetPrice(int[] basket)
  {
    return basket.Sum(book => 8.0);
  }
}
public class TwoBooks : DiscountStrategy
{
  public TwoBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 2)
    {
      return 2 * 8 * 0.95;
    }
    return this.Next.GetPrice(basket);
  }
}
public class ThreeBooks : DiscountStrategy
{
  public ThreeBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 3)
    {
      return 3 * 8 * 0.9;
    }
    return this.Next.GetPrice(basket);
  }
}
public class FourBooks : DiscountStrategy
{
  public FourBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 4)
    {
      return 4 * 8 * 0.8;
    }
    return this.Next.GetPrice(basket);
  }
}    
public class FiveBooks : DiscountStrategy
{
  public FiveBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 5)
    {
      return 5 * 8 * 0.75;
    }
    return this.Next.GetPrice(basket);
  }
}

[OT] Did I mention that I LOVE the chain-of-responsibility pattern? It is super flexible. It allows for clear separation of concerns. Favors small, easy to understand (and test) classes. Changing the behavior of your solution becomes a simple matter of reordering steps that you have already implemented. [/OT]

This way you can easily swap out different discount rules.

After that the calculator was a rather dumb shell. It assembles the two chains in its constructor. This can be seen as a violation of the D(ependency Inversion) of SOLID software development. I chose to encapsulate the knowledge of how to order the different pieces in the chains there nonetheless. If I ever need to make that step configurable, it would be a no-brainer as the assignment already happens in the calculator’s constructor.

All the calculator has to do now is let the partitioners divide the shopping basket into handy pieces and then let the discount strategies calculate the price for the individual chunks. Sweet!

public class DiscountCalculator
{
  private readonly PartitionerChain partitioners;
  private readonly DiscountStrategy discounts;
  public DiscountCalculator()
  {
    this.partitioners = new PartitionerChain(new EmptyBasket(), new Prefer44To53(), new GreedyGrabDistinctBooks());
    this.discounts = new FiveBooks(new FourBooks(new ThreeBooks(new TwoBooks(new NoDiscount()))));
  }
  public double Calculate(int[] basket)
  {
    var partitions = this.partitioners.GetPartitions(basket);
    double total = partitions.Sum(partition => this.discounts.GetPrice(partition));
    return total;
  }
}

One more word about the testing aspect: I mentioned that I used xUnit data theories. With these it was almost effortless to use the test cases described at the bottom of the kata.

[Theory]
[InlineData(0d, new int[0])]
[InlineData(8d, new[] { 0 })]

// ...

[InlineData(2 * 8 * 4 * 0.8, new[] { 0, 0, 1, 1, 2, 2, 3, 4 })]
[InlineData(3 * 8 * 5 * 0.75 + 2 * 8 * 4 * 0.8, new[]
                                                {
                                                  0, 0, 0, 0, 0,
                                                  1, 1, 1, 1, 1,
                                                  2, 2, 2, 2,
                                                  3, 3, 3, 3, 3,
                                                  4, 4, 4, 4
})]
public void CalculatesCorrectPrice(double expected, int[] basket)
{
  DiscountCalculator sut = new DiscountCalculator();
  Assert.Equal(expected, sut.Calculate(basket));
}

It was never ever that easy to setup tests that use different data but are equivalent otherwise. If you are interested in data theories and how they can make your life as a tester so much easier I strongly recommend that you have a look at Mark Seemann’s awesome series of posts about his implementation of the String Calculator kata. Mind blowing!

So that’s it for now. I think I will have a look at the other katas. Hope they are as much fun 🙂

Turn back time

Have you ever wished that you were able to go back in time? At least in software development that is relatively easy.

If you ask someone how you could simulate the passing of time for a unit test you might get an answer that involves TypeMock Isolator or something even more wicked. But you can solve that problem with less than a hundred lines of code, once and for all.

In his book Dependency Injection in .NET Mark Seemann introduced a small helper class he calls TimeProvider. It is used as a sample of an Ambient Context and like the DateTime structure it offers some static properties to access the current local time or the current UTC time.

In a post on his blog Seemann shows how descriptive testing with the TimeProvider API can become.

Since I first read about the TimeProvider I used it countless times and it made my life a lot (!!!) easier. I made a few optimizations to reduce the friction of the original code.

public abstract class TimeProvider
{
  private static TimeProvider current;
  static TimeProvider()
  {
    current = new DefaultTimeProvider();
  }
  public static TimeProvider Current
  {
    get { return current; }
    set
    {
      Guard.AssertNotNull(value, "Current");
      current = value;
    }
  }
  public static DateTime Now
  {
    get { return Current.GetNow(); }
  }
  public static DateTime UtcNow
  {
    get { return Current.GetUtcNow(); }
  }
  protected abstract DateTime GetNow();
  protected abstract DateTime GetUtcNow();
}

This is the default implementation:

public class DefaultTimeProvider : TimeProvider
{
  protected override DateTime GetNow()
  {
    return DateTime.Now;
  }
  protected override DateTime GetUtcNow()
  {
    return DateTime.UtcNow;
  }
}

And because Moq can’t mock protected methods I wrote an almost as simple implementation for unit testing:

public class MockTimeProvider : TimeProvider
{
  private readonly DateTime now;
  private readonly DateTime utcNow;
  public MockTimeProvider(DateTime now)
    : this(now, now)
  {
  }
  public MockTimeProvider(DateTime now, DateTime utcNow)
  {
    this.now = now;
    this.utcNow = utcNow;
  }
  protected override DateTime GetNow()
  {
    return this.now;
  }
  protected override DateTime GetUtcNow()
  {
    return this.utcNow;
  }
}

I had to change the implementation of the Freeze method from the sample a bit, but nothing to worry about:

internal static void Freeze(this DateTime dt)
{
    var timeProviderStub = new MockTimeProvider(dt);
    TimeProvider.Current = timeProviderStub;
}
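
A typical test then reads like this (Order is a hypothetical class that stamps TimeProvider.Now in its constructor):

[Fact]
public void StampsCreationTime()
{
  var frozen = new DateTime(2014, 1, 1, 12, 0, 0);
  frozen.Freeze();

  var order = new Order();

  Assert.Equal(frozen, order.CreatedAt);
}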

It won’t help you in legacy systems* or when dealing with 3rd party components. But it really helps when testing your own code!

* Unless you replace all calls to DateTime.Now or UtcNow, which we did on a project I worked on last year.

Pipes and Filters

Pipes and filters (or just pipeline) is another common pattern. Oren Eini and Jeremy Likness both have very interesting posts about it on their respective blogs.

While Jeremy’s post aims at the slick creation of pipelines, Oren talks more about the pattern itself and how to implement it in a specific manner (using input and output values of type IEnumerable<T>).

An interesting argument came up in the comments to Oren’s post: “Why not use LINQ instead of your custom pipeline (framework)?”

I won’t repeat all the pros and cons here. Better read the posts yourself, they are worth your time!

My opinion on the topic: LINQ is a great tool. But I believe it’s neither the only solution for chains of queries and transformations, nor is it the best in all cases.

I will pick up one point from the “pro LINQ” point of view. “With LINQ you can have different input and output types.”

Sure you can. But who says you can’t do that with pipes and filters just as easily?

We define an abstract base class Filter<TIn, TOut>

public abstract class Filter<TIn, TOut>
{
  public abstract IEnumerable<TOut> Process(IEnumerable<TIn> input);
  public Filter<TIn, TNext> Pipe<TNext>(Filter<TOut, TNext> next)
  {
    return new Pipe<TIn, TOut, TNext>(this, next);
  }
}

and a derived class Pipe<TIn, T, TOut>

public class Pipe<TIn, T, TOut> : Filter<TIn, TOut>
{
  private readonly Filter<TIn, T> source;
  private readonly Filter<T, TOut> destination;
  public Pipe(Filter<TIn, T> source, Filter<T, TOut> destination)
  {
    this.source = source;
    this.destination = destination;
  }
  public override sealed IEnumerable<TOut> Process(IEnumerable<TIn> input)
  {
    var x = this.source.Process(input);
    var result = this.destination.Process(x);
    return result;
  }
}

With these two as a base we can easily chain filters with different input and output types.

We can also fine-tune the ends of the pipeline a bit so that the code is a nicer read. With a small extension method and a just as small dummy class we can use any enumerable as the starting point for defining a pipeline.

public static Filter<TIn, TOut> Pipe<TIn, TOut>(this IEnumerable<TIn> enumerable, Filter<TIn, TOut> filter)
{
  return new PipelineStartDummy<TIn, TOut>(enumerable, filter);
}
private class PipelineStartDummy<TIn, TOut> : Filter<TIn, TOut>
{
  private readonly IEnumerable<TIn> enumerable;
  private readonly Filter<TIn, TOut> filter;
  public PipelineStartDummy(IEnumerable<TIn> enumerable, Filter<TIn, TOut> filter)
  {
    this.enumerable = enumerable;
    this.filter = filter;
  }
  public override IEnumerable<TOut> Process(IEnumerable<TIn> input)
  {
    return this.filter.Process(this.enumerable);
  }
}

If we don’t care what comes out of the pipeline and just want to start processing values from the source we can use another extension method that encapsulates Oren’s enumerator magic.

public static void Start<TIn, TOut>(this Filter<TIn, TOut> filter)
{
  var enumerable = filter.Process(null);
  var enumerator = enumerable.GetEnumerator();
  while (enumerator.MoveNext()) { }
}

And now we put it all together and get the following:

new Numbers(3).Pipe(new Square()).Pipe(new Printer()).Start();

Numbers just returns integer values between 1 and the constructor parameter. Square squares all input values and the Printer writes the input to the console. The call to Start() starts the processing.
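
The post only needs trivial implementations for those three filters. A sketch of what they could look like:

public class Numbers : Filter<object, int>
{
  private readonly int max;

  public Numbers(int max)
  {
    this.max = max;
  }

  public override IEnumerable<int> Process(IEnumerable<object> input)
  {
    // Source of the pipeline: ignores its input.
    return Enumerable.Range(1, this.max);
  }
}

public class Square : Filter<int, int>
{
  public override IEnumerable<int> Process(IEnumerable<int> input)
  {
    return input.Select(i => i * i);
  }
}

public class Printer : Filter<int, int>
{
  public override IEnumerable<int> Process(IEnumerable<int> input)
  {
    // Lazy by design: values are printed as the enumerator is pulled.
    foreach (int i in input)
    {
      Console.WriteLine(i);
      yield return i;
    }
  }
}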

While it is not the most impressive example implementation of the pipes and filters pattern I believe that it demonstrates how powerful and flexible the pattern can be. And the code is still readable and very explicit about what you are doing. With just a few lines of code you have a composable, easy to understand solution where you can recombine filters in different orders to change the behavior of the pipeline. And you can do just about anything inside of those filters (think validation or enriching the values traversing through the pipeline with data from services or persistent storage). You can even change the type of the values you are processing between steps. This is yet another case of “Like it a lot!”

Testing and databases

I prefer unit tests over integration tests any time. But at some point you just can’t avoid the latter. You need your components to hit a database to verify that your queries are correct and that you update the right records.

Some databases (like RavenDB for example, see Ayende’s answer) support running completely in-memory specifically for testing scenarios. If you are in the convenient situation to use one of them you can spin up a db instance, fill it with data, run your tests and throw the instance away. Otherwise you have to find another way to prepare your materialized database to offer a well defined set of records.

There are at least two ways to do that. You can have a designated test database, run each test inside a transaction and roll the transaction back after the test; the database is returned to the state it was in before. Or you can drop the database after each test and recreate it from scratch with the data needed for the next one.
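
The first approach is quick to sketch with System.Transactions. A base class like the following wraps each test in a TransactionScope that is rolled back because Complete() is never called (xUnit creates a new instance per test and calls Dispose() afterwards):

public abstract class TransactionalTest : IDisposable
{
  private readonly TransactionScope transaction;

  protected TransactionalTest()
  {
    // Everything the test does against the database enlists in this transaction.
    this.transaction = new TransactionScope();
  }

  public void Dispose()
  {
    // Complete() was never called, so disposing rolls the transaction back.
    this.transaction.Dispose();
  }
}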

I want to talk about the second approach. I wrote a small helper class for a task I’m currently working on. It allows you to define some setup steps (like picking a connection string or selecting the name of the database to create for the test) and a build-up sequence (drop existing db, create empty db, create empty tables etc.). And to make using it nice and easy it automatically drops the created database at the end of the test run.

[Fact]
public void Should_CreateAndDropDatabase()
{
  using (var result = Database.Build(
    x =>
      {
        x.WithConnectionStringNamed("MyConnectionString");
        x.WithDatabaseName("FooDatabase");
        x.BuildSequence(
          db =>
            {
              db.DropExistingDatabase();
              db.CreateEmptyDatabase();
              db.CreateTables();
              db.CreateTestData();
            });
      }))
  {
    Assert.Null(result.Error);

    // ... run your actual test code
  }
}

Running this test writes some basic information about what’s going on to the console. But you can easily use some other means for tracing.

Retrieving ConnectionString named ‘MyConnectionString’.
Using database name ‘FooDatabase’.
Dropping database ‘FooDatabase’.
Creating empty database ‘FooDatabase’.
Creating tables.
Creating test data.
Dropping database ‘FooDatabase’.

How does it work? We start from a static entry point (similar to TopShelf’s HostFactory):

public static class Database
{
  public static DatabaseBuildResult Build(Action<DatabaseBuildConfiguration> action)
  {
    Guard.AssertNotNull(action, "action");
    var config = new DatabaseBuildConfiguration();
    action(config);
    return config.Execute();
  }
}

From there we make calls to DatabaseBuildConfiguration

public class DatabaseBuildConfiguration
{
  private readonly AppendOnlyCollection<Action> actions;
  private bool dontDropDatabaseOnDispose;
  public DatabaseBuildConfiguration()
  {
    this.actions = new AppendOnlyCollection<Action>();
  }
  public IAppendOnlyCollection<Action> Actions { get { return this.actions; } }
  public string ConnectionString { get; private set; }
  public string Database { get; private set; }
  
  public DatabaseBuildResult Execute()
  {
    Action onDispose = this.dontDropDatabaseOnDispose ? () => { } : this.GetDropDatabaseAction();
    var result = new DatabaseBuildResult(onDispose);
    try
    {
      foreach (Action action in this.Actions)
      {
        action();
      }
    }
    catch (Exception ex)
    {
      result.Error = ex;
    }
    return result;
  }
  
  // ... more methods
  
  public void WithConnectionStringNamed(string connectionStringName)
  {
    Guard.AssertNotEmpty(connectionStringName, "connectionStringName");
    Action action = () =>
      {
        Console.WriteLine("Retrieving ConnectionString named '{0}'.", connectionStringName);
        this.ConnectionString =
          ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
      };
    this.Actions.Add(action);
  }
  public void BuildSequence(Action<DatabaseBuildSequenceConfiguration> action)
  {
    var config = new DatabaseBuildSequenceConfiguration(this);
    action(config);
  }
}

This class is basically a container for a list of actions. The methods on the class place actions in the list. Execute() adds some error handling and runs these actions.

DatabaseBuildSequenceConfiguration defines the actions used to manipulate the database. I won’t show its code here as it works in the same way as DatabaseBuildConfiguration.

In the sample at the beginning of this post you can see a using-block surrounding the actual test code. The DatabaseBuildResult implements IDisposable. By default disposing the result will also drop the entire database. But you can turn off that behavior by calling DontDropDatabaseOnDispose().

You can add more methods, e.g. for running SQL scripts that generate or manipulate data, or hook up ready-to-load database files. The model is quite easy to extend.
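
Such an extension is just another method on DatabaseBuildConfiguration that appends an action, for example a hypothetical WithSqlScript:

public void WithSqlScript(string pathToScript)
{
  Guard.AssertNotEmpty(pathToScript, "pathToScript");

  Action action = () =>
    {
      Console.WriteLine("Running script '{0}'.", pathToScript);

      // Simple scripts only: ExecuteNonQuery does not understand GO separators.
      string sql = File.ReadAllText(pathToScript);

      using (var connection = new SqlConnection(this.ConnectionString))
      using (var command = new SqlCommand(sql, connection))
      {
        connection.Open();
        command.ExecuteNonQuery();
      }
    };

  this.Actions.Add(action);
}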

It’s surely not as good as having a fully functional (and functionally equivalent!) in-memory database, but for a reasonably complex database it makes my life a lot easier!