Testing custom StyleCop rules

StyleCop is the preferred way to enforce consistency in your project’s coding style. Code guidelines in printed or electronic form are always inferior to automatic validation of said guidelines as part of your build process. You get compiler warnings or errors with (hopefully) helpful and descriptive explanations of why what you did is a bad idea. StyleCop even contains a ReSharper plugin that adds quick-fixes for many rule violations.

But StyleCop can do more than just notify you that someone forgot an empty line after a closing curly bracket. As a proof of concept I wrote rules that flag faulty re-throws (catch-blocks in the form of catch(Exception ex) { throw ex; }, which resets the exception's StackTrace) and constructors with more than 4 parameters (a code smell that indicates a component has too many responsibilities).
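To make the distinction concrete, here is a minimal sketch of the two re-throw variants the rule has to tell apart (DoWork and Log are placeholders for your own code):

public static class RethrowSamples
{
  public static void Faulty()
  {
    try
    {
      DoWork();
    }
    catch (Exception ex)
    {
      Log(ex);
      throw ex; // violation: resets the StackTrace to this line
    }
  }

  public static void Correct()
  {
    try
    {
      DoWork();
    }
    catch (Exception ex)
    {
      Log(ex);
      throw; // fine: preserves the original StackTrace
    }
  }

  private static void DoWork() { throw new InvalidOperationException(); }

  private static void Log(Exception ex) { Console.WriteLine(ex.Message); }
}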

The point of this post, however, is not to show you how to write such rules but how to make sure that they really work by testing them as part of your regular unit testing process. If you are looking for samples on how to write custom rules check out this post or use the search engine of your choice. There are lots of samples.

Posts like this one explain how to set up tests for custom StyleCop rules using the Visual Studio test-runner and VS' Test View. In my opinion that tool offers the worst possible user experience. It makes running your tests about as convenient as reading a paper about quantum physics. ReSharper or TestDriven.Net are far easier to work with. But the given examples don't work properly with those tools or with alternative test frameworks (e.g. NUnit or xUnit). I love my tools and I hate being forced to use something inferior by circumstances I can influence. So I set out to remove that particular bump in the road.

I did not want to fumble with files and figure out how and where to deploy them for test-runs with different test frameworks or tools. Instead I wanted to make these files embedded resources that are just there when I run my tests. I started with the sample mentioned above and made several changes to it.

  1. Use the StyleCopObjectConsole instead of the StyleCopConsole.
  2. Create a class derived from SourceCode that handles loading code from embedded resources (I called mine ResourceBasedSourceCode. Samples for that can be found in the StyleCop source code. Search for StringBasedSourceCode).
  3. The StyleCopObjectConsole requires an instance of ObjectBasedEnvironment which in turn requires a delegate that creates SourceCode instances for given file names. Make that delegate create instances of your ResourceBasedSourceCode.
  4. Hook up the handlers for output and violations to the events of StyleCopObjectConsole.Core instead of the events of the console object.

Let's have a look at the source code of the tests, for which I use xUnit.

public class CodeQualityAnalyzerFixture
{
  private readonly StyleCopObjectConsole console;
  private readonly ICollection<string> output;
  private readonly ICollection<Violation> violations;
  private readonly CodeProject codeProject;

  public CodeQualityAnalyzerFixture()
  {
    this.violations = new List<Violation>();
    this.output = new List<string>();
    ObjectBasedEnvironment environment = new ObjectBasedEnvironment(ResourceBasedSourceCode.Create, GetProjectSettings);
    ICollection<string> addinPaths = new[] { "." };
    this.console = new StyleCopObjectConsole(environment, null, addinPaths, true);
    CsParser parser = new CsParser();
    parser.FileTypes.Add("CS");
    this.console.Core.Environment.AddParser(parser);
    this.console.Core.ViolationEncountered += (s, e) => this.violations.Add(e.Violation);
    this.console.Core.OutputGenerated += (s, e) => this.output.Add(e.Output);
    this.codeProject = new CodeProject(0, null, new Configuration(null));
  }

  [Fact]
  public void Should_Flag_TooManyCtorParameters()
  {
    this.AnalyzeCodeWithAssertion("TooManyConstructorArguments.cs", 1);
  }

  [Fact]
  public void Should_Flag_IncorrectRethrow()
  {
    this.AnalyzeCodeWithAssertion("IncorrectRethrow.cs", 1);
  }

  private Settings GetProjectSettings(string path, bool readOnly)
  {
    return null;
  }

  private void AnalyzeCodeWithAssertion(string codeFileName, int expectedViolations)
  {
    this.AddSourceCode(codeFileName);
    this.StartAnalysis();
    this.WriteViolationsToConsole();
    this.WriteOutputToConsole();
    Assert.Equal(expectedViolations, this.violations.Count);
  }

  private void WriteOutputToConsole()
  {
    foreach (var o in this.output)
    {
      Console.WriteLine(o);
    }
  }

  private void WriteViolationsToConsole()
  {
    foreach (var violation in this.violations)
    {
      Console.WriteLine(violation.Message);
    }
  }

  private void AddSourceCode(string resourceFileName)
  {
    bool sourceCodeSuccessfullyLoaded = this.console.Core.Environment.AddSourceCode(this.codeProject, resourceFileName, null);
    if (sourceCodeSuccessfullyLoaded == false)
    {
      throw new InvalidOperationException(string.Format("Source file '{0}' could not be added.", resourceFileName));
    }
  }

  private void StartAnalysis()
  {
    IList<CodeProject> projects = new[] { this.codeProject };
    this.console.Start(projects);
  }
}

What does it do? Let's start with the test setup in the constructor.

  1. Initialize the collections for the output generated by StyleCop.
  2. Instantiate the ObjectBasedEnvironment, providing it with the required delegates to create SourceCode and Settings objects. The SourceCodeFactory delegate is a static method of my ResourceBasedSourceCode class, so we will come to that in a moment. The ProjectSettingsFactory delegate always returns null.
  3. Define a path to the add-in under test (which is the assembly that contains our custom rules). The dot stands for the "current working folder", which is where said add-in is copied by the build by default.
  4. Create the console object and add a CsParser to it. As the name indicates, it is responsible for parsing C# code files. Don't forget to register the file extension for C# code files when you create the parser or it just won't work. The extension name is case-sensitive. Use upper-case.
  5. Register the event handlers that write output and rule violations to the respective collections, which are evaluated after StyleCop is done processing your code files.
  6. Create a new CodeProject. The code files you want analyzed will be added to that project during your tests.

The test methods are simple. They call the AnalyzeCodeWithAssertion method with the name of the file you want to analyze and the number of violations you expect. This is the most basic validation possible. You can make it arbitrarily complex if you want to (e.g. by asserting the type of the expected violation or message texts or …).
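For example, a stricter test could pin down which rule actually fired (a sketch; "CQ0001" is a made-up CheckId, use whatever your custom rule declares):

[Fact]
public void Should_Flag_IncorrectRethrow_With_Expected_Rule()
{
  this.AnalyzeCodeWithAssertion("IncorrectRethrow.cs", 1);

  // Violation.Rule.CheckId identifies the rule that was violated.
  Assert.True(this.violations.Any(v => v.Rule.CheckId == "CQ0001"));
}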

Note that I made my ResourceBasedSourceCode class handle incomplete file names. I didn’t want to have to deal with all the namespaces that are added as prefixes to embedded resources so I decided to go with “EndsWith” semantics. As long as the name of the resource ends with the file name provided as a parameter it works just fine.

The AnalyzeCodeWithAssertion method is responsible for

  1. Adding the source code to the project that will be analyzed.
  2. Running the actual analysis using StyleCop.
  3. Writing the output generated during the analysis to the console so you can explore it at leisure after the test is run.
  4. Asserting your expectations.

Nothing spectacular there. Now let's have a look at the code for the ResourceBasedSourceCode.

public class ResourceBasedSourceCode : SourceCode
{
  private readonly string _ResourceName;
  public ResourceBasedSourceCode(CodeProject project, SourceParser parser, string resourceName)
    : this(project, parser, resourceName, Assembly.GetExecutingAssembly())
  {
  }
  public ResourceBasedSourceCode(CodeProject project, SourceParser parser, string resourceName, Assembly assembly)
    : base(project, parser)
  {
    Guard.AssertNotEmpty(resourceName, "resourceName");
    Guard.AssertNotNull(assembly, "assembly");
    string fullResourceName = assembly.GetManifestResourceNames()
      .FirstOrDefault(rn => rn.EndsWith(resourceName, StringComparison.OrdinalIgnoreCase));

    if (string.IsNullOrEmpty(fullResourceName))
    {
      throw new ArgumentException(string.Format("No resource has a name that ends with '{0}'", resourceName), "resourceName");
    }
    _ResourceName = fullResourceName;
  }

  public static SourceCode Create(string path, CodeProject project, SourceParser parser, object context)
  {
    return new ResourceBasedSourceCode(project, parser, path);
  }

  public override TextReader Read()
  {
    Stream stream = this.GetType().Assembly.GetManifestResourceStream(_ResourceName);
    return new StreamReader(stream);
  }

  public override bool Exists
  {
    get { return true; }
  }

  public override string Name
  {
    get { return _ResourceName; }
  }

  public override string Path
  {
    get { return _ResourceName; }
  }

  public override DateTime TimeStamp
  {
    get { return DateTime.MinValue; }
  }

  public override string Type
  {
    get { return "cs"; }
  }
}

There is very little magic involved in this class. In the constructor we try to find an embedded resource matching the name that is provided as a parameter and store the full name of the resource in a field. If that step doesn’t fail we can safely assume that we can open a Stream to read that resource later on. The file extension is hard-coded to C# files (property Type). Lower-case for the extension name is OK here. Don’t ask, I have no clue why the developers decided to make that different from the behavior of the CsParser. The Create method is a factory method with the signature of the SourceCodeFactory delegate as required by the ObjectBasedEnvironment. Everything else is implemented in the most basic way to make the analysis pass.

Now just add a folder that contains the code files to your test project (I called it “Resources”). Make sure to set the files’ Build Action to “Embedded Resource”. And you are ready to roll.

That’s all the infrastructure you need. Doesn’t look that complicated, does it? If you are interested in the complete source code you can find that here. The project names are TecX.Build and TecX.Build.Test. Have fun!

2013 in review

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

A New York City subway train holds 1,200 people. This blog was viewed about 6,200 times in 2013. If it were a NYC subway train, it would take about 5 trips to carry that many people.

Click here to see the complete report.

Behavioral Testing

Jimmy Bogard recently gave a presentation at NDC Oslo about a testing strategy he calls Holistic Testing. Have a look at the video, it's worth your time!

Among a lot of other things he talked about why he doesn’t use mocking frameworks (or hand-crafted mock objects) very often in his tests. According to Jimmy, testing with mocks tends to couple your tests to the implementation details of your code. Instead he prefers to test the behavior of his code and ignore said implementation as far as possible.

While I don’t agree with his sample code I totally agree with his statement. But let’s have a look.

[Theory, AutoData]
public void DummyTest(Customer customer)
{
  var calculator = A.Fake<ITaxCalculator>();
  A.CallTo(() => calculator.Calculate(customer)).Returns(10);
  var factory = new OrderFactory(calculator);
  var order = factory.Build(customer);
  order.Tax.ShouldEqual(10);
}

This code snippet uses xUnit's Theories for data-driven tests along with AutoFixture's extension to that feature, which "deterministically creates random test data". AutoFixture creates the Customer object. Then Jimmy uses FakeItEasy to create a mock for the ITaxCalculator and sets up the return value for the call to its Calculate() method. He passes the calculator to the constructor of the OrderFactory, lets the factory create an Order and asserts that the correct tax is set on the order. Nothing spectacular so far (if you are already familiar with data Theories and AutoFixture, that is…).

But what happens if the factory no longer uses a calculator? Or if we add another parameter to the constructor? After all the constructor is just an implementation detail that we don’t want to care about. But these changes will break our test code. And this is where Jimmy gets fancy.

Assuming that we already build our code according to the Dependency Injection Pattern and use a DI container in our project (that’s quite some assumption…), we already have the information about how the factory is assembled and which calculator (if any) is to be used in the configuration of said container. So why not use the container to provide us with the factory instead of new’ing it up ourselves?

I took the liberty of streamlining Jimmy's test a bit more by letting AutoFixture create the customer as well. The ContainerDataAttribute is derived from AutoFixture's AutoDataAttribute. My sample uses Unity instead of StructureMap but it works all the same. Contrary to Jimmy's statement (around 45:46 in the video), Unity does support the creation of child containers. Although I have to admit that this is the first scenario where this feature makes sense to me. But that is a different story I'll save for another day.

[Theory, ContainerData]
public void DummyTest(OrderFactory factory, Customer customer)
{
  Order order = factory.Build(customer);
  order.Tax.ShouldEqual(10);
}

Wow. That’s what I call straight to the point. You don’t care about the creation process of your system-under-test (the factory) or your test data (the customer). You just care about the behavior which is to set the correct tax rate on your order object.
As the tax rate differs from one state to another it might not make sense to test for that specific value without controlling its calculation to some extent (e.g. via a mock object…) but that's a common problem with HelloWorld! samples and does not diminish the brilliance of the general concept.

So let’s see how the ContainerDataAttribute is implemented.

public class ContainerDataAttribute : AutoDataAttribute
{
  public ContainerDataAttribute()
    : base(new Fixture().Customize(
      new ContainerCustomization(
        new UnityContainer().AddExtension(
          new ContainerConfiguration()))))
  {
  }
}

We derive from AutoFixture's AutoDataAttribute, which does the heavy lifting for us (like integrating with xUnit, creating random test data, …). We call the base class' constructor, handing it a customized Fixture object (that is AutoFixture's lynchpin for creating test data). The customization receives a preconfigured UnityContainer instance as a parameter. As you might have guessed, UnityContainer is the Unity DI container. Unity's configuration system is not as advanced as that of most other containers. In particular it does not offer a dedicated way to package configuration data (like StructureMap's Registry, for example). But you can (ab)use the UnityContainerExtension class to achieve the same result. Just place your configuration code inside your implementation of the abstract Initialize() method and add the extension to the container.

public class ContainerConfiguration : UnityContainerExtension
{
  protected override void Initialize()
  {
    this.Container.RegisterType<IFoo, Foo>();
    this.Container.RegisterType<ITaxCalculator, DefaultTaxCalculator>();
  }
}

It makes sense to re-use as much of your production configuration as possible. But you should consider modifying it in places where the tests might interfere with actual production systems (like sending emails or modifying production databases).
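A sketch of what such a test-specific override could look like, assuming a hypothetical IEmailSender/FakeEmailSender pair (in Unity the last registration for an interface wins, so re-registering after the production configuration replaces the real implementation):

public class TestConfiguration : UnityContainerExtension
{
  protected override void Initialize()
  {
    // Re-use the production configuration...
    this.Container.AddExtension(new ContainerConfiguration());

    // ...but swap out anything that touches real systems.
    // IEmailSender and FakeEmailSender are made up for illustration.
    this.Container.RegisterType<IEmailSender, FakeEmailSender>();
  }
}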

The customization to AutoFixture hooks up two last-chance handlers for creating objects by adding them to IFixture.ResidueCollectors.

public class ContainerCustomization : ICustomization
{
  private readonly IUnityContainer container;
  public ContainerCustomization(IUnityContainer container)
  {
    this.container = container;
  }
  public void Customize(IFixture fixture)
  {
    fixture.ResidueCollectors.Add(new ChildContainerSpecimenBuilder(this.container));
    fixture.ResidueCollectors.Add(new ContainerSpecimenBuilder(this.container));
  }
}

AutoFixture calls these object creators SpecimenBuilders. The first one we hook up is responsible for creating a child container if a test requires an IUnityContainer as a method parameter. The second actually uses the container to create objects.

public class ChildContainerSpecimenBuilder : ISpecimenBuilder
{
  private readonly IUnityContainer container;
  public ChildContainerSpecimenBuilder(IUnityContainer container)
  {
    this.container = container;
  }
  public object Create(object request, ISpecimenContext context)
  {
    Type type = request as Type;
    if (type == null || type != typeof(IUnityContainer))
    {
      return new NoSpecimen();
    }
    return this.container.CreateChildContainer();
  }
}
public class ContainerSpecimenBuilder : ISpecimenBuilder
{
  private readonly IUnityContainer container;
  public ContainerSpecimenBuilder(IUnityContainer container)
  {
    this.container = container;
  }
  public object Create(object request, ISpecimenContext context)
  {
    Type type = request as Type;
    if (type == null)
    {
      return new NoSpecimen();
    }
    return this.container.Resolve(type);
  }
}

The NoSpecimen class is AutoFixture’s way of telling its kernel that this builder can’t construct an object for the current request. Something like a NullObject.

Well and that’s it. A couple of dozen lines of code. A superior testing framework (xUnit). A convenient way of creating test data (AutoFixture). A DI container (which should be mandatory for any project if you ask me…). And writing maintainable tests for the correct behavior of your code becomes a piece of cake. I love it! 🙂

You can find the sample code on my playground on CodePlex (solution TecX.Playground, project TecX.BehavioralTesting).

Using Expressions in WCF

Most business applications deal with showing, analyzing and modifying some kind of business-relevant data. Although it is not rare for customers to ask you to "load everything" into a grid and let them work the data like they "always did in Excel", this is rarely a good idea. As soon as you have more than "Hello World!" numbers of records in your persistent storage, loading that data will take time, network bandwidth, CPU cycles for (de)serialization, disk I/O and so on.

A much better approach is to limit the number of records to be returned as early as possible by filtering out those records that don’t match your criteria. In a client/server application that means doing the filtering on the server. If you use WCF for the communication between client and server there is a slight glitch you need to work around: How do you get the filtering criteria to the server?

Out of the box you can't transmit the Expression trees that represent your queries over the wire. They are not serializable. Using strings is tedious and prone to typos. In addition you will have to parse your query string unless you use some vendor-specific format like Hibernate's HQL, which adds vulnerabilities like "xQL injection" to the mix. Creating server-side methods for each filter you can think of is inflexible (i.e. users can't create dynamic queries) and results in a lot of code for something that a one-line expression could handle. Using some implementation of the Specification pattern would also require you to write quite some amount of code for the different search criteria, which would only duplicate a subset of the features that Expressions offer. In addition you would add yet another syntax that your co-developers need to master. So unless you don't mind writing, testing, fixing, documenting, updating and maintaining a lot of code, you are back to square one.

Serialize.Linq to the rescue

The problem of Expressions not being serializable is as old as LINQ itself. This article on MSDN dates as far back as 2008. The author uses XML to represent the original query. Another article in a recent edition of a German .NET magazine uses a custom IQueryProvider that performs the translation between Expression tree and XML. Both implementations work to a certain degree but I have to admit I like neither one. My personal opinion on XML: It’s one huge PITA to work with. Sorry for the language. So I try to avoid having to deal with it whenever possible.

Serialize.Linq works a bit differently: It maps each node of the original expression tree to a node that the DataContractSerializer can easily digest (I know that means XML somewhere under the covers, but the chances that I will ever see it or have to deal with it are close to zero). In a WCF scenario, this structure is serialized, transmitted over the wire, deserialized into Serialize.Linq's custom node format and from there back to a real Expression tree. You can find some articles that describe features of the framework on Sascha Kiefer's blog.

This article talks about using Serialize.Linq together with WCF. It’s easy and requires almost zero work on your side. You just have to do the conversion between Expressions and SL’s ExpressionNodes whenever you query for data. Over and over again. Hm… Wouldn’t it be a lot better if I could save that effort as well?
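To make the repetition concrete, this is roughly what the manual approach looks like (a sketch; IMyService and QueryCustomers are stand-ins for your own contract, ToExpressionNode() is Serialize.Linq's conversion extension method):

[ServiceContract]
public interface IMyService
{
  // The contract has to speak ExpressionNode instead of Expression...
  [OperationContract]
  Customer[] QueryCustomers(ExpressionNode filter);
}

// ...and every single call site converts manually.
Expression<Func<Customer, bool>> filter = c => c.Id > 3;
Customer[] customers = proxy.QueryCustomers(filter.ToExpressionNode());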

I think I got the idea from various posts on Oren Eini’s blog: If you have repetitive tasks that are always the same put them into your infrastructure to relieve the burden on the developers a little. Well, considering that my infrastructure in this case is WCF which has about a bazillion extension points and that I have some experience with messing with WCF’s serialization behavior from my experiments with Enumeration classes I thought I would give it a shot.

Yet another IDataContractSurrogate

I found out that I can reuse most of the code I wrote for the custom serialization of Enumeration classes. By replacing Enumerations with Expressions I got it working on the server side in almost no time. For brevity, the code snippets omit methods that are not implemented.

public class SerializeExpressionsBehavior : Attribute, IServiceBehavior, IContractBehavior, IWsdlExportExtension
{
  private readonly ExpressionDataContractSurrogate surrogate;
  public SerializeExpressionsBehavior()
  {
    this.surrogate = new ExpressionDataContractSurrogate();
  }
  void IServiceBehavior.ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase host)
  {
    foreach (ServiceEndpoint endpoint in serviceDescription.Endpoints)
    {
      foreach (OperationDescription operation in endpoint.Contract.Operations)
      {
        DataContractSerializerOperationBehavior behavior = operation.Behaviors.Find<DataContractSerializerOperationBehavior>();
        if (behavior == null)
        {
          behavior = new DataContractSerializerOperationBehavior(operation) { DataContractSurrogate = this.surrogate };
          operation.Behaviors.Add(behavior);
        }
        else
        {
          behavior.DataContractSurrogate = this.surrogate;
        }
      }
    }
    this.HideEnumerationClassesFromMetadata(host);
  }
  private void HideEnumerationClassesFromMetadata(ServiceHostBase host)
  {
    ServiceMetadataBehavior smb = host.Description.Behaviors.Find<ServiceMetadataBehavior>();
    if (smb == null)
    {
      return;
    }
    WsdlExporter exporter = smb.MetadataExporter as WsdlExporter;
    if (exporter == null)
    {
      return;
    }
    object dataContractExporter;
    XsdDataContractExporter xsdInventoryExporter;
    if (!exporter.State.TryGetValue(typeof(XsdDataContractExporter), out dataContractExporter))
    {
      xsdInventoryExporter = new XsdDataContractExporter(exporter.GeneratedXmlSchemas);
      exporter.State.Add(typeof(XsdDataContractExporter), xsdInventoryExporter);
    }
    else
    {
      xsdInventoryExporter = (XsdDataContractExporter)dataContractExporter;
    }
    if (xsdInventoryExporter.Options == null)
    {
      xsdInventoryExporter.Options = new ExportOptions();
    }
    xsdInventoryExporter.Options.DataContractSurrogate = this.surrogate;
  }
}

And the DataContractSurrogate that performs the actual translation between Expressions and ExpressionNodes.

public class ExpressionDataContractSurrogate : IDataContractSurrogate
{
  public Type GetDataContractType(Type type)
  {
    if (typeof(Expression).IsAssignableFrom(type))
    {
      return typeof(ExpressionNode);
    }
    return type;
  }
  public object GetObjectToSerialize(object obj, Type targetType)
  {
    Expression expression = obj as Expression;
    if (expression != null)
    {
      return expression.ToExpressionNode();
    }
    return obj;
  }
  public object GetDeserializedObject(object obj, Type targetType)
  {
    if (typeof(Expression).IsAssignableFrom(targetType))
    {
      return ((ExpressionNode)obj).ToExpression();
    }
    return obj;
  }
}

As I wanted to use Expressions on the client side as well I needed a new EndpointBehavior to hook up the surrogate. After that … it just worked 🙂

public class ClientSideSerializeExpressionsBehavior : IEndpointBehavior
{
  private IDataContractSurrogate surrogate;
  public ClientSideSerializeExpressionsBehavior()
  {
    this.surrogate = new ExpressionDataContractSurrogate();
  }
  public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
  {
    foreach (OperationDescription operation in endpoint.Contract.Operations)
    {
      DataContractSerializerOperationBehavior behavior = operation.Behaviors.Find<DataContractSerializerOperationBehavior>();
      if (behavior == null)
      {
        behavior = new DataContractSerializerOperationBehavior(operation) { DataContractSurrogate = this.surrogate };
        operation.Behaviors.Add(behavior);
      }
      else
      {
        behavior.DataContractSurrogate = this.surrogate;
      }
    }
  }
}
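With the surrogate wired up on both sides, the service contract can use real Expression parameters (again a sketch; the names are stand-ins for your own contract):

[ServiceContract]
public interface IMyService
{
  [OperationContract]
  Customer[] QueryCustomers(Expression<Func<Customer, bool>> filter);
}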

So far I tested my solution with proxies derived from ClientBase<T>

using (var proxy = new MyServiceClient(new BasicHttpBinding(), new EndpointAddress(ServiceAddress)))
{
  Customer[] customers = proxy.QueryCustomers(c => c.Id > 3);
  // ...
}

(the client-side behavior is added in the proxy's constructor)

public MyServiceClient(Binding binding, EndpointAddress address)
  : base(binding, address)
{
  this.Endpoint.Behaviors.Add(new ClientSideSerializeExpressionsBehavior());
}

and those generated on-the-fly by a ChannelFactory<T>. Works like a charm 🙂

var factory = new ChannelFactory<IMyService>(new BasicHttpBinding(), ServiceAddress);
factory.Endpoint.Behaviors.Add(new ClientSideSerializeExpressionsBehavior());
var proxy = factory.CreateChannel();
Customer[] customers = proxy.QueryCustomers(c => c.Id > 3);

What you get and what’s missing

What you get is this beauty:

var customers = proxy.QueryCustomers(c => c.Id > 3);

Neat and simple.

With Serialize.Linq and the small additions to the WCF infrastructure outlined above you can use Expressions when querying a WCF service without breaking your workflow by manually translating queries to another format. Just use the same queries that you would use for any other data source. Done.

I have yet to evaluate whether the above code works with clients generated using "Add service reference". But that will be the topic of another article. What's definitely not working is seamlessly using Expressions in Silverlight clients. DataContractSurrogates are among the features that MS decided not to support in Silverlight. *sigh*

As usual you can grab the code from my page on CodePlex (projects TecX.Expressions and TecX.Expressions.Test). Enjoy! 🙂

New Release: Enterprise Library 6.0 & Unity 3.0

Patterns & practices (p&p) just released a new version of the Enterprise Library and the Unity Dependency Injection Container.

Grigori’s release notes can be found here.

The binaries and source code for EntLib can be downloaded from MSDN. Those for Unity are quite well hidden for some reason… Grab them here.

Book Discount Kata

Long time no see. About two months without anything interesting (related to dev topics at least) happening.

Recently I had a look at some of the katas at Coding Dojo. Quite interesting stuff. Today I want to present my shot at the Harry Potter book discount kata.

First things first: I used xUnit and especially xUnit's data theories for TDD'ing my solution. I took a leaf out of the kata's book and used simple integer arrays to represent the shopping basket. I started with the simplest possible implementation: one class (DiscountCalculator) with a single method (Calculate(int[])). But well … that didn't get me too far. Basically it solved the problem up to "we have two different books, how much is that?" before I decided that I hated the resulting code.
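Roughly what that abandoned first attempt looked like (reconstructed for illustration; it only handled the most basic cases before it fell apart):

public class DiscountCalculator
{
  public double Calculate(int[] basket)
  {
    if (basket.Length == 0)
    {
      return 0.0;
    }

    if (basket.Distinct().Count() == 1)
    {
      return basket.Length * 8.0; // identical books, no discount
    }

    if (basket.Length == 2)
    {
      return 2 * 8 * 0.95; // two different books
    }

    throw new NotImplementedException("...and this is where it got ugly.");
  }
}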

So I leaned back and thought about the problem a bit more. What I needed was a way to find subsets of the books inside the shopping basket that would maximize the discount. Some kind of partitioning algorithm. After a little back and forth I chose to implement that algorithm as a combination of three simple steps:

  1. If the basket is empty, you are done and no more partitioning is needed.
  2. If you have 8 books left in the basket and you can form two partitions with 4 distinct books each, you should prefer that to a 5/3 partition.
  3. Take as many distinct books as possible from the basket.
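All three steps share a minimal interface:

public interface IBasketPartitioner
{
  void Partition(PartitioningContext context);
}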

This is the code for those three steps:

public class EmptyBasket : IBasketPartitioner
{
  public void Partition(PartitioningContext context)
  {
    if (context.Basket.Length == 0)
    {
      context.Finished = true;
    }
  }
}
public class Prefer44To53 : IBasketPartitioner
{
  public void Partition(PartitioningContext context)
  {
    if (context.Basket.Length == 8)
    {
      List<int> basketCopy = new List<int>(context.Basket);
      int[] part1 = context.Basket.Distinct().Take(4).ToArray();
      if (part1.Length == 4)
      {
        foreach (int book in part1)
        {
          basketCopy.Remove(book);
        }
        int[] part2 = basketCopy.Distinct().ToArray();
        if (part2.Length == 4)
        {
          context.MakePartition(part1);
          context.MakePartition(part2);
        }
      }
    }
  }
}
public class GreedyGrabDistinctBooks : IBasketPartitioner
{
  public void Partition(PartitioningContext context)
  {
    int[] differentBooks = context.Basket.Distinct().ToArray();
    if (differentBooks.Length > 0)
    {
      context.MakePartition(differentBooks);
    }
  }
}

Admittedly the implementation of the 4/4 rule could use some polishing. But it works for now.

To host those steps I used a variation of the chain-of-responsibility pattern. This chain would loop through the different steps. Each step would take some of the books from the basket and put them in a list of partitions until there are no books left. The order of the steps is important! To achieve the desired outcome of the “prefer 4/4 partition to 5/3 partition” rule you need to take those books from the basket before the greedy “take as many distinct books as possible” rule applies. I chose to remove both 4/4 chunks from the basket in step 2 to reduce the overhead of the calls to Distinct().

public class PartitionerChain
{
  private readonly List<IBasketPartitioner> partitioners;
  public PartitionerChain(params IBasketPartitioner[] partitioners)
  {
    this.partitioners = new List<IBasketPartitioner>(partitioners);
  }
  public IEnumerable<int[]> GetPartitions(int[] originalBasket)
  {
    var context = new PartitioningContext(originalBasket);
    int index = 0;
    do
    {
      this.partitioners[index].Partition(context);
      index = (index + 1) % this.partitioners.Count;
    }
    while (!context.Finished && context.Basket.Length > 0);
    return context.Partitions;
  }
}

The chain hands a context from step to step which contains the current content of the shopping basket, a list of partitions and a flag that indicates when the partitioning process is finished.

public class PartitioningContext
{
  private readonly List<int[]> basketPartitions;
  public PartitioningContext(int[] originalBasket)
  {
    this.Basket = originalBasket;
    this.basketPartitions = new List<int[]>();
  }
  public int[] Basket { get; private set; }
  public bool Finished { get; set; }
  public IEnumerable<int[]> Partitions { get { return this.basketPartitions; } }
  public void MakePartition(int[] partition)
  {
    this.basketPartitions.Add(partition);
    List<int> newBasket = new List<int>(this.Basket);
    foreach (int book in partition)
    {
      newBasket.Remove(book);
    }
    this.Basket = newBasket.ToArray();
  }
}

To calculate the actual price of the books I switched from the switch-case solution to (yet again) a chain-of-responsibility based one.

public abstract class DiscountStrategy
{
  public DiscountStrategy Next { get; protected set; }
  public abstract double GetPrice(int[] basket);
}
public class NoDiscount : DiscountStrategy
{
  public override double GetPrice(int[] basket)
  {
    return basket.Sum(book => 8.0);
  }
}
public class TwoBooks : DiscountStrategy
{
  public TwoBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 2)
    {
      return 2 * 8 * 0.95;
    }
    return this.Next.GetPrice(basket);
  }
}
public class ThreeBooks : DiscountStrategy
{
  public ThreeBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 3)
    {
      return 3 * 8 * 0.9;
    }
    return this.Next.GetPrice(basket);
  }
}
public class FourBooks : DiscountStrategy
{
  public FourBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 4)
    {
      return 4 * 8 * 0.8;
    }
    return this.Next.GetPrice(basket);
  }
}    
public class FiveBooks : DiscountStrategy
{
  public FiveBooks(DiscountStrategy next)
  {
    this.Next = next;
  }
  public override double GetPrice(int[] basket)
  {
    if (basket.Length == 5)
    {
      return 5 * 8 * 0.75;
    }
    return this.Next.GetPrice(basket);
  }
}

[OT] Did I mention that I LOVE the chain-of-responsibility pattern? It is super flexible. It allows for clear separation of concerns. Favors small, easy to understand (and test) classes. Changing the behavior of your solution becomes a simple matter of reordering steps that you have already implemented. [/OT]

This way you can easily swap out different discount rules.
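For example, introducing a hypothetical SixBooks rule (made up for illustration; the kata doesn't define a six-book discount) is just a matter of re-assembling the chain:

var discounts = new SixBooks(new FiveBooks(new FourBooks(new ThreeBooks(new TwoBooks(new NoDiscount())))));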

After that the calculator was a rather dumb shell. It assembles the two chains in its constructor. This can be seen as a violation of the D(ependency Inversion) of SOLID software development. I chose to encapsulate the knowledge of how to order the different pieces in the chains there nonetheless. If I ever need to make that step configurable, it would be a no-brainer as the assignment already happens in the calculator’s constructor.

All the calculator has to do now is let the partitioners divide the shopping basket into handy pieces and then let the discount strategies calculate the price for the individual chunks. Sweet!

public class DiscountCalculator
{
  private readonly PartitionerChain partitioners;
  private readonly DiscountStrategy discounts;
  public DiscountCalculator()
  {
    this.partitioners = new PartitionerChain(new EmptyBasket(), new Prefer44To53(), new GreedyGrabDistinctBooks());
    this.discounts = new FiveBooks(new FourBooks(new ThreeBooks(new TwoBooks(new NoDiscount()))));
  }
  public double Calculate(int[] basket)
  {
    var partitions = this.partitioners.GetPartitions(basket);
    double total = partitions.Sum(partition => this.discounts.GetPrice(partition));
    return total;
  }
}

One more word about the testing aspect. I mentioned that I used xUnit data theories. With these it was almost effortless to use the test cases described at the bottom of the kata.

[Theory]
[InlineData(0d, new int[0])]
[InlineData(8d, new[] { 0 })]

// ...

[InlineData(2 * 8 * 4 * 0.8, new[] { 0, 0, 1, 1, 2, 2, 3, 4 })]
[InlineData(3 * 8 * 5 * 0.75 + 2 * 8 * 4 * 0.8, new[]
                                                {
                                                  0, 0, 0, 0, 0,
                                                  1, 1, 1, 1, 1,
                                                  2, 2, 2, 2,
                                                  3, 3, 3, 3, 3,
                                                  4, 4, 4, 4
                                                })]
public void CalculatesCorrectPrice(double expected, int[] basket)
{
  DiscountCalculator sut = new DiscountCalculator();
  Assert.Equal(expected, sut.Calculate(basket));
}

It was never this easy to set up tests that use different data but are equivalent otherwise. If you are interested in data theories and how they can make your life as a tester so much easier, I strongly recommend having a look at Mark Seemann's awesome series of posts about his implementation of the String Calculator kata. Mind-blowing!

So that’s it for now. I think I will have a look at the other katas. Hope they are as much fun 🙂

Turn back time

Have you ever wished that you were able to go back in time? At least in software development that is relatively easy.

If you ask someone how you could simulate the passing of time for a unit test you might get an answer that involves TypeMock Isolator or something even more wicked. But you can solve that problem with less than a hundred lines of code, once and for all.

In his book Dependency Injection in .NET Mark Seemann introduced a small helper class he calls TimeProvider. It is used as a sample of an Ambient Context and like the DateTime structure it offers some static properties to access the current local time or the current UTC time.

In a post on his blog Seemann shows how descriptive testing with the TimeProvider API can become.

Since I first read about the TimeProvider I have used it countless times and it made my life a lot (!!!) easier. I made a few optimizations to reduce the friction of the original code.

public abstract class TimeProvider
{
  private static TimeProvider current;
  static TimeProvider()
  {
    current = new DefaultTimeProvider();
  }
  public static TimeProvider Current
  {
    get { return current; }
    set
    {
      Guard.AssertNotNull(value, "Current");
      current = value;
    }
  }
  public static DateTime Now
  {
    get { return Current.GetNow(); }
  }
  public static DateTime UtcNow
  {
    get { return Current.GetUtcNow(); }
  }
  protected abstract DateTime GetNow();
  protected abstract DateTime GetUtcNow();
}

This is the default implementation:

public class DefaultTimeProvider : TimeProvider
{
  protected override DateTime GetNow()
  {
    return DateTime.Now;
  }
  protected override DateTime GetUtcNow()
  {
    return DateTime.UtcNow;
  }
}

And because Moq can't mock protected methods, I wrote an almost as simple implementation for unit testing:

public class MockTimeProvider : TimeProvider
{
  private readonly DateTime now;
  private readonly DateTime utcNow;
  public MockTimeProvider(DateTime now)
    : this(now, now)
  {
  }
  public MockTimeProvider(DateTime now, DateTime utcNow)
  {
    this.now = now;
    this.utcNow = utcNow;
  }
  protected override DateTime GetNow()
  {
    return this.now;
  }
  protected override DateTime GetUtcNow()
  {
    return this.utcNow;
  }
}

I had to change the implementation of the Freeze method from the sample a bit, but nothing to worry about:

internal static void Freeze(this DateTime dt)
{
    var timeProviderStub = new MockTimeProvider(dt);
    TimeProvider.Current = timeProviderStub;
}
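A minimal usage sketch in an xUnit test (assuming the code under test reads TimeProvider.Now instead of DateTime.Now):

[Fact]
public void Frozen_Time_Is_Returned_By_TimeProvider()
{
  new DateTime(2013, 5, 1).Freeze();

  // Everything that asks TimeProvider.Now from here on sees 2013-05-01.
  Assert.Equal(new DateTime(2013, 5, 1), TimeProvider.Now);
}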

It won’t help you in legacy systems* or when dealing with 3rd party components. But it really helps when testing your own code!

* Unless you replace all calls to DateTime.Now or UtcNow, which we did on a project I worked on last year.

Enterprise Library 6.0 – Semantic Logging Application Block CTP

The first bits of EntLib6 are out! Welcome the Semantic Logging Application Block!

It’s based on Event Tracing for Windows (ETW, which is baked into the Windows OS) but additionally offers support for classic logger targets like flat files, databases etc. It provides a structured, extensible log format for all of your log messages which makes processing and searching your log much more convenient.
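The core idea: instead of assembling log strings, you raise strongly typed events. A rough sketch of what such an event source looks like (names made up; based on .NET 4.5's EventSource, which the block builds on):

[EventSource(Name = "MyCompany-MyApp")]
public class MyAppEventSource : EventSource
{
  public static readonly MyAppEventSource Log = new MyAppEventSource();

  [Event(1, Level = EventLevel.Informational, Message = "Order {0} received")]
  public void OrderReceived(int orderId)
  {
    this.WriteEvent(1, orderId);
  }
}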

Read more on the background of semantic logging or directly get some code and start experimenting with the CTP.

Microsoft p&p Advisory Board Member

Last week Grigori Melnik, Principal Program Manager at Microsoft patterns & practices and responsible for the development of the Microsoft Enterprise Library and Unity Dependency Injection Container, asked whether I would like to join the Advisory Board for EntLib6/Unity3.

I happily accepted the invitation 🙂

I’m looking forward to participating in the development of a great tool set and learning how p&p works! Thanks for the chance to give something back to you guys!

2012 in review

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

The new Boeing 787 Dreamliner can carry about 250 passengers. This blog was viewed about 1,200 times in 2012. If it were a Dreamliner, it would take about 5 trips to carry that many people.

Click here to see the complete report.