Turn back time

Have you ever wished that you were able to go back in time? At least in software development that is relatively easy.

If you ask someone how you could simulate the passing of time for a unit test you might get an answer that involves TypeMock Isolator or something even more wicked. But you can solve that problem with less than a hundred lines of code, once and for all.

In his book Dependency Injection in .NET, Mark Seemann introduces a small helper class he calls TimeProvider. It serves as a sample of an Ambient Context and, like the DateTime structure, it offers static properties to access the current local time or the current UTC time.

In a post on his blog, Seemann shows how descriptive testing can become with the TimeProvider API.

Since I first read about the TimeProvider I have used it countless times, and it has made my life a lot (!!!) easier. I made a few optimizations to reduce the friction of the original code.

public abstract class TimeProvider
{
  private static TimeProvider current;
  static TimeProvider()
  {
    current = new DefaultTimeProvider();
  }
  public static TimeProvider Current
  {
    get { return current; }
    set
    {
      Guard.AssertNotNull(value, "Current");
      current = value;
    }
  }
  public static DateTime Now
  {
    get { return Current.GetNow(); }
  }
  public static DateTime UtcNow
  {
    get { return Current.GetUtcNow(); }
  }
  protected abstract DateTime GetNow();
  protected abstract DateTime GetUtcNow();
}

This is the default implementation:

public class DefaultTimeProvider : TimeProvider
{
  protected override DateTime GetNow()
  {
    return DateTime.Now;
  }
  protected override DateTime GetUtcNow()
  {
    return DateTime.UtcNow;
  }
}
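
The Guard class used in the Current setter (and in later snippets) is a small argument-checking helper that is not shown in the post. A minimal sketch of what it might look like:

public static class Guard
{
  public static void AssertNotNull(object value, string paramName)
  {
    if (value == null)
    {
      throw new ArgumentNullException(paramName);
    }
  }

  public static void AssertNotEmpty(string value, string paramName)
  {
    if (string.IsNullOrEmpty(value))
    {
      throw new ArgumentException("Value must not be null or empty.", paramName);
    }
  }
}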

And because Moq can’t mock protected methods, I wrote an almost as simple implementation for unit testing:

public class MockTimeProvider : TimeProvider
{
  private readonly DateTime now;
  private readonly DateTime utcNow;
  public MockTimeProvider(DateTime now)
    : this(now, now)
  {
  }
  public MockTimeProvider(DateTime now, DateTime utcNow)
  {
    this.now = now;
    this.utcNow = utcNow;
  }
  protected override DateTime GetNow()
  {
    return this.now;
  }
  protected override DateTime GetUtcNow()
  {
    return this.utcNow;
  }
}

I had to change the implementation of the Freeze extension method from Seemann’s sample a bit, but nothing to worry about:

internal static void Freeze(this DateTime dt)
{
    var timeProviderStub = new MockTimeProvider(dt);
    TimeProvider.Current = timeProviderStub;
}
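
Using it in a test looks something like this (xUnit syntax, as in the database example later in this post; the assertion and the reset are my own sketch, not Seemann’s original sample):

[Fact]
public void Should_UseFrozenTime()
{
  var someTime = new DateTime(2012, 3, 4, 5, 6, 7);

  someTime.Freeze();

  Assert.Equal(someTime, TimeProvider.Now);

  // reset so that other tests see the real clock again
  TimeProvider.Current = new DefaultTimeProvider();
}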

It won’t help you in legacy systems* or when dealing with 3rd party components. But it really helps when testing your own code!

* Unless you replace all calls to DateTime.Now or UtcNow which we did on a project I worked with last year.

Pipes and Filters

Pipes and filters (or just pipeline) is another common pattern. Oren Eini and Jeremy Likness both have very interesting posts about it on their respective blogs.

While Jeremy’s post aims at the slick creation of pipelines, Oren talks more about the pattern itself and how to implement it in a specific manner (using input and output values of type IEnumerable<T>).

An interesting argument came up in the comments on Oren’s post: “Why not use LINQ instead of your custom pipeline (framework)?”

I won’t repeat all the pros and cons here. Better to read the posts yourself; they are worth your time!

My opinion on the topic: LINQ is a great tool. But I believe it’s neither the only solution for chains of queries and transformations, nor is it the best in all cases.

I will pick up one point from the “pro LINQ” point of view. “With LINQ you can have different input and output types.”

Sure you can. But who says you can’t do that with pipes and filters just as easily?

We define an abstract base class Filter<TIn, TOut>

public abstract class Filter<TIn, TOut>
{
  public abstract IEnumerable<TOut> Process(IEnumerable<TIn> input);
  public Filter<TIn, TNext> Pipe<TNext>(Filter<TOut, TNext> next)
  {
    return new Pipe<TIn, TOut, TNext>(this, next);
  }
}

and a derived class Pipe<TIn, T, TOut>

public class Pipe<TIn, T, TOut> : Filter<TIn, TOut>
{
  private readonly Filter<TIn, T> source;
  private readonly Filter<T, TOut> destination;
  public Pipe(Filter<TIn, T> source, Filter<T, TOut> destination)
  {
    this.source = source;
    this.destination = destination;
  }
  public override sealed IEnumerable<TOut> Process(IEnumerable<TIn> input)
  {
    var x = this.source.Process(input);
    var result = this.destination.Process(x);
    return result;
  }
}

With these two as a base we can easily chain filters with different input and output types.
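
To illustrate, here is a quick sketch of two such filters; ParseInts and FormatAsHex are made up for this example and not part of the original posts:

public class ParseInts : Filter<string, int>
{
  public override IEnumerable<int> Process(IEnumerable<string> input)
  {
    foreach (string s in input)
    {
      yield return int.Parse(s);
    }
  }
}

public class FormatAsHex : Filter<int, string>
{
  public override IEnumerable<string> Process(IEnumerable<int> input)
  {
    foreach (int i in input)
    {
      yield return i.ToString("X");
    }
  }
}

// new ParseInts().Pipe(new FormatAsHex()) yields a Filter<string, string>
// that changes the element type twice while the values traverse the pipeline.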

We can also fine-tune the ends of the pipeline a bit so that the code is a nicer read. With a small extension method and a just as small dummy class (both of which have to live in a static helper class) we can use any enumerable as the starting point for defining a pipeline.

public static Filter<TIn, TOut> Pipe<TIn, TOut>(this IEnumerable<TIn> enumerable, Filter<TIn, TOut> filter)
{
  return new PipelineStartDummy<TIn, TOut>(enumerable, filter);
}
private class PipelineStartDummy<TIn, TOut> : Filter<TIn, TOut>
{
  private readonly IEnumerable<TIn> enumerable;
  private readonly Filter<TIn, TOut> filter;
  public PipelineStartDummy(IEnumerable<TIn> enumerable, Filter<TIn, TOut> filter)
  {
    this.enumerable = enumerable;
    this.filter = filter;
  }
  public override IEnumerable<TOut> Process(IEnumerable<TIn> input)
  {
    return this.filter.Process(this.enumerable);
  }
}

If we don’t care what comes out of the pipeline and just want to start processing values from the source, we can use another extension method that encapsulates Oren’s enumerator magic.

public static void Start<TIn, TOut>(this Filter<TIn, TOut> filter)
{
  var enumerable = filter.Process(null);
  var enumerator = enumerable.GetEnumerator();
  while (enumerator.MoveNext()) { }
}

And now we put it all together and get the following:

new Numbers(3).Pipe(new Square()).Pipe(new Printer()).Start();

Numbers just returns the integer values between 1 and the constructor parameter. Square squares all input values, and Printer writes its input to the console. The call to Start() sets the processing in motion.
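
These three classes are not shown in the original post. A minimal sketch, assuming Numbers implements IEnumerable<int> so that the Pipe() extension method applies, might look like this:

public class Numbers : IEnumerable<int>
{
  private readonly int max;
  public Numbers(int max)
  {
    this.max = max;
  }
  public IEnumerator<int> GetEnumerator()
  {
    return Enumerable.Range(1, this.max).GetEnumerator();
  }
  IEnumerator IEnumerable.GetEnumerator()
  {
    return this.GetEnumerator();
  }
}

public class Square : Filter<int, int>
{
  public override IEnumerable<int> Process(IEnumerable<int> input)
  {
    foreach (int i in input)
    {
      yield return i * i;
    }
  }
}

public class Printer : Filter<int, int>
{
  public override IEnumerable<int> Process(IEnumerable<int> input)
  {
    foreach (int i in input)
    {
      Console.WriteLine(i);
      yield return i;
    }
  }
}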

While it is not the most impressive example implementation of the pipes and filters pattern I believe that it demonstrates how powerful and flexible the pattern can be. And the code is still readable and very explicit about what you are doing. With just a few lines of code you have a composable, easy to understand solution where you can recombine filters in different orders to change the behavior of the pipeline. And you can do just about anything inside of those filters (think validation or enriching the values traversing through the pipeline with data from services or persistent storage). You can even change the type of the values you are processing between steps. This is yet another case of “Like it a lot!”

Testing and databases

I prefer unit tests over integration tests any time. But at some point you just can’t avoid the latter. You need your components to hit a database to verify that your queries are correct and that you update the right records.

Some databases (like RavenDB for example, see Ayende’s answer) support running completely in-memory, specifically for testing scenarios. If you are in the convenient situation of using one of them, you can spin up a db instance, fill it with data, run your tests and throw the instance away. Otherwise you have to find another way to bring your materialized database to a well-defined set of records.

There are at least two ways to do that. You can have a designated test database and run each test inside a transaction that you roll back after the test; that returns the database to the state it was in before the test. Or you can drop the database after each test and recreate it from scratch with the data needed for the next one.

I want to talk about the second approach. I wrote a small helper class for a task I’m currently working on. It allows you to define some setup steps (like picking a connection string or selecting the name of the database to create for the test) and a build-up sequence (drop the existing db, create an empty db, create empty tables etc.). And to make it nice and easy to use, it automatically drops the created database at the end of the test run.

[Fact]
public void Should_CreateAndDropDatabase()
{
  using (var result = Database.Build(
    x =>
      {
        x.WithConnectionStringNamed("MyConnectionString");
        x.WithDatabaseName("FooDatabase");
        x.BuildSequence(
          db =>
            {
              db.DropExistingDatabase();
              db.CreateEmptyDatabase();
              db.CreateTables();
              db.CreateTestData();
            });
      }))
  {
    Assert.Null(result.Error);

    // ... run your actual test code
  }
}

Running this test writes some basic information about what’s going on to the console. But you can easily use some other means for tracing.

Retrieving ConnectionString named ‘MyConnectionString’.
Using database name ‘FooDatabase’.
Dropping database ‘FooDatabase’.
Creating empty database ‘FooDatabase’.
Creating tables.
Creating test data.
Dropping database ‘FooDatabase’.

How does it work? We start from a static entry point (similar to TopShelf’s HostFactory):

public static class Database
{
  public static DatabaseBuildResult Build(Action<DatabaseBuildConfiguration> action)
  {
    Guard.AssertNotNull(action, "action");
    var config = new DatabaseBuildConfiguration();
    action(config);
    return config.Execute();
  }
}

From there we make calls to DatabaseBuildConfiguration

public class DatabaseBuildConfiguration
{
  private readonly AppendOnlyCollection<Action> actions;
  private bool dontDropDatabaseOnDispose;
  public DatabaseBuildConfiguration()
  {
    this.actions = new AppendOnlyCollection<Action>();
  }
  public IAppendOnlyCollection<Action> Actions { get { return this.actions; } }
  public string ConnectionString { get; private set; }
  public string Database { get; private set; }
  
  public DatabaseBuildResult Execute()
  {
    Action onDispose = this.dontDropDatabaseOnDispose ? () => { } : this.GetDropDatabaseAction();
    var result = new DatabaseBuildResult(onDispose);
    try
    {
      foreach (Action action in this.Actions)
      {
        action();
      }
    }
    catch (Exception ex)
    {
      result.Error = ex;
    }
    return result;
  }
  
  // ... more methods
  
  public void WithConnectionStringNamed(string connectionStringName)
  {
    Guard.AssertNotEmpty(connectionStringName, "connectionStringName");
    Action action = () =>
      {
        Console.WriteLine("Retrieving ConnectionString named '{0}'.", connectionStringName);
        this.ConnectionString =
          ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
      };
    this.Actions.Add(action);
  }
  public void BuildSequence(Action<DatabaseBuildSequenceConfiguration> action)
  {
    var config = new DatabaseBuildSequenceConfiguration(this);
    action(config);
  }
}

This class is basically a container for a list of actions. The methods on the class place actions in the list. Execute() adds some error handling and runs these actions.

DatabaseBuildSequenceConfiguration defines the actions used to manipulate the database. I won’t show its code here as it works in the same way as DatabaseBuildConfiguration.
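
To give a rough idea of that pattern, one of its methods might look like the following sketch (my assumption, not the original code; the actual SQL is omitted):

public class DatabaseBuildSequenceConfiguration
{
  private readonly DatabaseBuildConfiguration config;

  public DatabaseBuildSequenceConfiguration(DatabaseBuildConfiguration config)
  {
    this.config = config;
  }

  public void DropExistingDatabase()
  {
    this.config.Actions.Add(() =>
      {
        Console.WriteLine("Dropping database '{0}'.", this.config.Database);
        // issue a DROP DATABASE statement using this.config.ConnectionString
      });
  }

  // CreateEmptyDatabase(), CreateTables() and CreateTestData()
  // enqueue their actions in the same way.
}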

In the sample at the beginning of this post you can see a using-block surrounding the actual test code. The DatabaseBuildResult implements IDisposable. By default disposing the result will also drop the entire database. But you can turn off that behavior by calling DontDropDatabaseOnDispose().
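
Based on that description, a sketch of DatabaseBuildResult could look like this (again an assumption, not the original code):

public class DatabaseBuildResult : IDisposable
{
  private readonly Action onDispose;

  public DatabaseBuildResult(Action onDispose)
  {
    this.onDispose = onDispose ?? (() => { });
  }

  public Exception Error { get; set; }

  public void Dispose()
  {
    // by default this action drops the created database
    this.onDispose();
  }
}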

You can add more methods, e.g. for running SQL scripts that generate or manipulate data, or for hooking up ready-to-load database files. The model is quite easy to extend.

It’s surely not as good as having a fully functional (and functionally equivalent!) in-memory database, but for a reasonably complex database it makes my life a lot easier!

Specification Pattern

The Specification Pattern. Yet another one I find useful in daily business. But a few enhancements can make it even more useful and a lot nicer to handle.

1, 2, many!

The CompositeSpecification is most often implemented with two fields for spec1 and spec2 or left and right. The AndSpecification and OrSpecification evaluate these fields with their respective operators.

public class Or<T> : CompositeSpecification<T>
{
  // ...

  public override bool IsSatisfiedBy(T candidate)
  {
    return spec1.IsSatisfiedBy(candidate) || spec2.IsSatisfiedBy(candidate);
  }
}

This can result in a degenerate tree structure when you link many specifications with the same operator (e.g. a.Or(b).Or(c)...Or(n)).

To avoid this I decided to change the behavior of the composite. Instead of two fields it uses a list of children. When two specifications are linked, the code checks whether either of them is a composite of the same type and, if so, adds that composite’s children to the list instead of the composite itself.

Sounds more complicated than it is. Let’s see some code.

public abstract class CompositeSpecification<T> : Specification<T>
{
  private readonly List<Specification<T>> children;
  protected CompositeSpecification()
  {
    this.children = new List<Specification<T>>();
  }
  public IEnumerable<Specification<T>> Children
  {
    get { return children; }
  }
  public int Count
  {
    get { return this.children.Count; }
  }
  protected void Add(Specification<T> specification)
  {
    this.children.Add(specification);
  }
  protected void AddRange(IEnumerable<Specification<T>> specifications)
  {
    this.children.AddRange(specifications);
  }
}

public class Or<T> : CompositeSpecification<T>
{
  public Or(Specification<T> specification, Specification<T> other)
  {
    this.Include(specification);
    this.Include(other);
  }
  public override string Description
  {
    get { return " || "; }
  }
  public override bool IsSatisfiedBy(T candidate)
  {
    foreach (Specification<T> specification in this.Children)
    {
      if (specification.IsSatisfiedBy(candidate))
      {
        return true;
      }
    }
    return false;
  }
  private void Include(Specification<T> specification)
  {
    Or<T> or = specification as Or<T>;
    if (or != null)
    {
      this.AddRange(or.Children);
    }
    else
    {
      this.Add(specification);
    }
  }
}

And this is how two specifications can be linked with an Or operator.

public abstract class Specification<T>
{
  // ...
  
  public abstract bool IsSatisfiedBy(T candidate);
  public Specification<T> Or(Specification<T> other)
  {
    return new Or<T>(this, other);
  }
}

In the end this will put all successive Or’s in a single list, which makes finding the right specification in the tree a lot easier.
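
To illustrate, assuming a, b, c and d are Specification<int> instances:

var or = (Or<int>)a.Or(b).Or(c).Or(d);
// or.Count == 4: one flat list of children
// instead of the nested Or(Or(Or(a, b), c), d)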

What do we have?

Generating a human-readable representation of the specification tree can be tedious but is often beneficial if you need to see “what you have”. The easiest way to traverse a tree structure is a Visitor. The same pattern is used by Microsoft in their expression trees.

We add an Accept method and a property for the description of the specification to the base classes

public abstract class Specification<T>
{
  // ...
  public abstract string Description { get; }
  public virtual void Accept(SpecificationVisitor<T> visitor)
  {
    visitor.Visit(this);
  }
}

public abstract class CompositeSpecification<T> : Specification<T>
{
  // ...
  public override void Accept(SpecificationVisitor<T> visitor)
  {
    visitor.Visit(this);
  }
}

public class Or<T> : CompositeSpecification<T>
{
  // ...
  public override string Description
  {
    get { return " || "; }
  }
}

Define a base class for the SpecificationVisitor

public abstract class SpecificationVisitor<T>
{
  public abstract void Visit(Specification<T> specification);
  public abstract void Visit(CompositeSpecification<T> composite);
}

And the implementation of a PrettyPrinter becomes as simple as this:

public class PrettyPrinter<T> : SpecificationVisitor<T>
{
  private readonly StringBuilder sb;
  public PrettyPrinter()
  {
    this.sb = new StringBuilder(250);
  }
  public override void Visit(Specification<T> specification)
  {
    this.sb.Append(specification.Description);
  }
  public override void Visit(CompositeSpecification<T> composite)
  {
    this.sb.Append("(");
    foreach (Specification<T> child in composite.Children)
    {
      child.Accept(this);
      this.sb.Append(composite.Description);
    }
    int l = composite.Description.Length;
    this.sb.Remove(sb.Length - l, l);
    this.sb.Append(")");
  }
  public override string ToString()
  {
    return this.sb.ToString();
  }
}

And this gives you a nice and friendly printout of your graph

var spec = new AlwaysFalse().Or(new AlwaysTrue().And(new Odd())).Or(new AlwaysTrue());
var printer = new PrettyPrinter<int>();
spec.Accept(printer);
string friendly = printer.ToString(); // (false || (true && Odd) || true)
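
The leaf specifications (and the And<T> composite, which mirrors Or<T> with && semantics) are not shown above. Minimal sketches, assuming the specifications work on int candidates (which is why the printer above is a PrettyPrinter<int>), might look like this:

public class AlwaysTrue : Specification<int>
{
  public override string Description
  {
    get { return "true"; }
  }
  public override bool IsSatisfiedBy(int candidate)
  {
    return true;
  }
}

public class Odd : Specification<int>
{
  public override string Description
  {
    get { return "Odd"; }
  }
  public override bool IsSatisfiedBy(int candidate)
  {
    return candidate % 2 != 0;
  }
}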

The first time I tried to attach some zipped source code I found out that WordPress won’t let me… Get the source code here (project TecX.Common, folder Specifications, and the test suite that puts them to use in TecX.Common.Test).

Composition over Inheritance: Custom Collections

Composition over inheritance is a well-known but unfortunately not very well understood OOD principle.

All too often I see code like this:

public class MyFooList : List<Foo> { /* .. */ }

or this:

public class MySomething
{
  public List<Foo> Foos { get; set; }
}

The derived class adds a method or two. Maybe a property. And it carries all the weight of the super-class List<T>. The property of type List<Foo> is there for anybody to fill with content of unverified quality. Why?!

List<T> is a powerful class but it does not give you any control over the objects that are added to it. What if your custom collection needs to protect invariants like “you can only add Foos with a Bar > 7”? What if consumers may only append to your collection but never remove objects from it? There is no easy way to model this when you inherit from List<T> or provide raw lists as part of your API.

If you don’t need the majority of List<T>’s 30-some methods and ~10 properties, why would you expose them? They just blur your interface. A simple custom collection that offers only what your consumers need to know is easy to write. If you implement IEnumerable<T> you can even unleash all the power of LINQ to select items from your collection.

If you have many or very complex invariants, encapsulate them in validators to protect your collection. Let the collection tell its consumers about the rule a Foo instance violates when they try to add it to the collection.

[TestMethod]
public void Should_NotifyOnViolatedInvariants()
{
  var collection = new MyFooCollection(new IFooValidator[] { new NotNullValidator(), new MinBarValidator(7) });
  
  IFooValidator violatedInvariant;
  Assert.IsFalse(collection.Add(null, out violatedInvariant));
  Assert.IsInstanceOfType(violatedInvariant, typeof(NotNullValidator));

  Assert.IsFalse(collection.Add(new Foo() { Bar = 1 }, out violatedInvariant));
  Assert.IsInstanceOfType(violatedInvariant, typeof(MinBarValidator));
  Assert.IsTrue(collection.Add(new Foo { Bar = 8 }, out violatedInvariant));
}

public class MyFooCollection : IEnumerable<Foo>
{
  private readonly IEnumerable<IFooValidator> validators;
  private readonly List<Foo> foos;
  public MyFooCollection(IEnumerable<IFooValidator> validators)
  {
    this.validators = validators ?? new IFooValidator[0];
    this.foos = new List<Foo>();
  }
  public bool Add(Foo foo, out IFooValidator violatedInvariant)
  {
    violatedInvariant = this.validators.FirstOrDefault(v => v.ViolatesInvariant(foo));
    if (violatedInvariant == null)
    {
      this.foos.Add(foo);
      return true;
    }
    return false;
  }
  public IEnumerator<Foo> GetEnumerator()
  {
    return this.foos.GetEnumerator();
  }
  IEnumerator IEnumerable.GetEnumerator()
  {
    return this.GetEnumerator();
  }
}
public interface IFooValidator
{
  string Description { get; }
  bool ViolatesInvariant(Foo foo);
}
public class MinBarValidator : IFooValidator
{
  private readonly int minBar;
  public MinBarValidator(int minBar)
  {
    this.minBar = minBar;
  }
  public string Description
  {
    get { return string.Format("foo.Bar must be greater than {0}", this.minBar); }
  }
  public bool ViolatesInvariant(Foo foo)
  {
    return foo.Bar <= this.minBar;
  }
}
public class NotNullValidator : IFooValidator
{
  public string Description
  {
    get { return "foo must not be null."; }
  }
  public bool ViolatesInvariant(Foo foo)
  {
    return foo == null;
  }
}
public class Foo
{
  public int Bar { get; set; }
}

You are still using a list under the covers. But your custom collection (just a couple of lines long!) is now streamlined to provide only the functionality that is really needed. If you expose a raw List<T>, anybody can modify the contents of that list and there is no way to enforce the collection’s invariants. With the custom collection your intentions become clearer and using your API becomes safer. Isn’t that worth the bit of extra effort?
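
And because MyFooCollection implements IEnumerable<Foo>, LINQ works on it directly; for example:

// requires using System.Linq;
var foosWithBigBars = collection.Where(f => f.Bar > 10).OrderBy(f => f.Bar).ToList();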

IntelliSense for custom ConfigurationSections

To be frank: I don’t like XML for configuration purposes. I think it’s a plague and nothing less. It is verbose and prone to typos. It does not give you any type safety. Thus I wage a constant war against XML config files in my projects.
But sometimes you can’t avoid XML for one reason or another. And if you can’t avoid it, you have to cope with its shortcomings. Configuration in code gives you (among many other things I value) IntelliSense support: you get help with possible values for variables, properties or parameters, with method names and much more. To get (some of) the same convenience for config files you have to provide XSD schema files. Established frameworks sometimes ship with those schemas. But if you write custom config sections you are completely on your own.

Rob Seder has a great post on writing custom configuration sections. The last paragraph explains how you can create XSDs for your components.

While playing with Enumeration Classes I also wanted to see how I can use them with config files. So I wrote a custom config section for evaluation purposes.

public class MyConfigSection : ConfigurationSection
{
  private readonly ConfigurationProperty myEnum;
  public MyConfigSection()
  {
    this.myEnum = new ConfigurationProperty(
      "myEnum", 
      typeof(Numbers),
      Numbers.None,
      new EnumClassConverter<Numbers>(),
      null,
      ConfigurationPropertyOptions.IsRequired);
    this.Properties.Add(this.myEnum);
  }
  public Numbers MyEnum
  {
    get { return (Numbers)base[this.myEnum]; }
    set { base[this.myEnum] = value; }
  }
}

Numbers is an enumeration class from a prior article.
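
A minimal sketch of such an enumeration class (the names and values here are assumptions derived from the XSD below; the actual implementation is in the prior article) might look like this:

public class Numbers
{
  public static readonly Numbers None = new Numbers(0, "None");
  public static readonly Numbers One = new Numbers(1, "One");
  public static readonly Numbers Two = new Numbers(2, "Two");
  public static readonly Numbers Four = new Numbers(4, "Four");

  private Numbers(int value, string name)
  {
    this.Value = value;
    this.Name = name;
  }

  public int Value { get; private set; }
  public string Name { get; private set; }

  public override string ToString()
  {
    return this.Name;
  }
}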

[side note] I love the way Microsoft’s engineers allow you to control how the configuration system handles your custom classes by letting you specify a TypeConverter [/side note]

I followed the steps outlined in Rob’s post and created a schema using the XML -> Create Schema option in Visual Studio. I stripped away all the fluff that belonged to the config file schema until all that was left was the part for my custom section. I then added documentation comments and the allowed values for Numbers. I ended up with the following XSD.

<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" 
           targetNamespace="http://tecx.codeplex.com/tecx/2012/enum"
           attributeFormDefault="unqualified"
           elementFormDefault="qualified">
  <xs:element name="mySection">
    <xs:annotation>
      <xs:documentation>This demonstrates IntelliSense for config files</xs:documentation>
    </xs:annotation>
    <xs:complexType>
      <xs:attribute name="myEnum" use="required" >
        <xs:annotation>
          <xs:documentation>Predefined set of numbers</xs:documentation>
        </xs:annotation>
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="None">
              <xs:annotation>
                <xs:documentation>NaN</xs:documentation>
              </xs:annotation>
            </xs:enumeration>
            <xs:enumeration value="One">
              <xs:annotation>
                <xs:documentation>Number 1.</xs:documentation>
              </xs:annotation>
            </xs:enumeration>
            <xs:enumeration value="Two">
              <xs:annotation>
                <xs:documentation>Number 2.</xs:documentation>
              </xs:annotation>
            </xs:enumeration>
            <xs:enumeration value="Four">
              <xs:annotation>
                <xs:documentation>Number 4.</xs:documentation>
              </xs:annotation>
            </xs:enumeration>
          </xs:restriction>
        </xs:simpleType>
      </xs:attribute>
    </xs:complexType>
  </xs:element>
</xs:schema>

That’s a lot of XML for one single property, but there you are… One important thing to notice is the targetNamespace attribute in the schema element. It will be used to reference the schema in your config file.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="mySection" type="TecX.EnumClasses.Test.MyConfigSection, TecX.EnumClasses.Test"/>
  </configSections>
  <mySection myEnum="Four" xmlns="http://tecx.codeplex.com/tecx/2012/enum" />
</configuration>

The section contains a reference to the targetNamespace of your custom schema. Now you need to add the xsd to the list of schemas for your config file. VS did that automatically for me when I placed the schema file in the same folder as the config file. If yours doesn’t, you can add the schema manually: right-click in the editor window for your app.config, select Properties from the context menu, and the properties view will show up. It lists a setting called Schemas, where the path to the DotNetConfig.xsd will already be listed. Click the […] button and add your custom schema using the dialog that pops up.

IntelliSense for custom configuration sections

And finally Visual Studio will give you some help with the tedious task of handling configuration through XML.

On top of writing the code for the configuration section you have to put some effort into writing the schema. I would not make that investment until the API is mostly stable for the current release. But then it definitely makes sense, and developers who use your components will love you for it. It’s the kind of documentation that does not force them to leave their working environment to look up something in a .chm help file or on a wiki page; instead they can stay focused on their current task.