Quartz.NET meets Design Patterns

This is the third in a series of posts.

In the last post I showed you how I set up some tests for my implementation of retries with Quartz.NET. I repeatedly hinted at some neat tricks to make things more convenient, so here they are.

Quartz.NET requires that your jobs only throw JobExecutionExceptions (as explained at the very bottom of this page). There are reasons why this restriction makes sense but I don’t want to litter my business logic with repetitions of the exact same exception handling code. I think that’s what DRY is all about. I don’t want to force all of my jobs to inherit from a specific base class either. At least not for the single purpose of following Quartz.NET’s rules for exception handling.

But by applying a decorator to my job classes I can fix that once and for all.

public class EnsureJobExecutionExceptionDecorator : IJob
{
  private readonly IJob inner;
  public EnsureJobExecutionExceptionDecorator(IJob inner)
  {
    this.inner = inner;
  }
  public void Execute(IJobExecutionContext context)
  {
    try
    {
      this.inner.Execute(context);
    }
    catch (JobExecutionException)
    {
      throw;
    }
    catch (Exception cause)
    {
      throw new JobExecutionException(cause);
    }
  }
}

JobExecutionExceptions are simply rethrown, which allows you to throw them in your job if you need to tweak what the scheduler is told. All other exceptions become the InnerException of a new JobExecutionException. Done. Now that was easy.

But how do I ensure that each time Quartz.NET instantiates a job the decorator is in place?

By replacing the scheduler’s default IJobFactory with something more advanced. For my playground I derived from the PropertySettingJobFactory base class and used Unity to create my jobs.

private sealed class UnityJobFactory : PropertySettingJobFactory
{
  private readonly IUnityContainer container;
  public UnityJobFactory(IUnityContainer container)
  {
    this.ThrowIfPropertyNotFound = false;
    this.WarnIfPropertyNotFound = true;
    this.container = container;
  }
  public override IJob NewJob(
    TriggerFiredBundle bundle,
    IScheduler scheduler)
  {
    Type jobType = bundle.JobDetail.JobType;
    IJob job = (IJob)this.container.Resolve(jobType);
    JobDataMap data = new JobDataMap();
    data.PutAll(scheduler.Context);
    data.PutAll(bundle.JobDetail.JobDataMap);
    data.PutAll(bundle.Trigger.JobDataMap);
    this.SetObjectProperties((object)job, data);
    return job;
  }
  public override void ReturnJob(IJob job)
  {
    this.container.Teardown(job);
  }
}

And then it’s a simple matter to configure Unity to wrap every job it creates with the EnsureJobExecutionExceptionDecorator, as sketched below. Not that hard, is it?
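One possible wiring, as a sketch rather than the exact code from my playground. MyJob stands in for a concrete job class. Because UnityJobFactory resolves jobs through the non-generic Resolve(jobType), the container can hand back the decorator in place of the raw job.

// Sketch: hand out the decorator whenever the job factory resolves MyJob.
// (Constructing MyJob directly here avoids resolving back into this same
// registration; its dependencies could be pulled from the container c.)
IUnityContainer container = new UnityContainer();
container.RegisterType(
  typeof(MyJob),
  new InjectionFactory(c =>
    new EnsureJobExecutionExceptionDecorator(new MyJob())));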

And finally there is the code snippet that unfreezes my test thread when I’m done.

public class UnfreezeWhenJobShouldNotRunAgain : IRetryStrategy
{
  private readonly IRetryStrategy inner;
  private readonly ManualResetEvent reset;
  public UnfreezeWhenJobShouldNotRunAgain(
    IRetryStrategy inner,
    ManualResetEvent reset)
  {
    this.inner = inner;
    this.reset = reset;
  }
  public bool ShouldRetry(IJobExecutionContext context)
  {
    bool shouldRetry = this.inner.ShouldRetry(context);
    if (!shouldRetry)
    {
      this.reset.Set();
    }
    return shouldRetry;
  }
  public ITrigger GetTrigger(IJobExecutionContext context)
  {
    return this.inner.GetTrigger(context);
  }
}

Yet another decorator. Whenever the RetryJobListener from the first post of this series asks the IRetryStrategy whether a job should be run again, the decorator checks for “yes” or “no”. In case of a “no” it sets the ManualResetEvent and allows the test thread to continue.

So we have decorators, an abstract factory and dependency injection here. And all of that in less than 200 lines of code. Each piece is short and to the point, but by combining them you can build mighty powerful solutions that are still clean and easy to understand.

I hope you enjoyed the series and come back for another read. See you soon!


TypedFactory reloaded

Another piece of code that I wrote some time ago came back to my attention recently and I decided to give it the same treatment as the SmartConstructor and do some cleanup on the TypedFactory port I did for Unity.

One thing I really disliked at the time was that the Unity Interception Extension cannot generate a “proxy without target” out of the box. If you want to intercept calls to an object you actually need to have that object. A pipeline with no target isn’t possible with Unity.

As I had experimented with IL code generation before I took the long way round and built a NullObject generator. Not pretty. But it worked.

Another issue was the need to fumble an instance of the container into the interception pipeline by putting it into a dictionary, retrieving it further down the pipeline, casting it back to a container and only then working with it. For the life of me I couldn’t find an architecturally clean way to solve that issue at the time. But the ugly one worked.

If you can’t see the forest for the trees you need to take a step back. A year later I took a look at a post that I had read back then and all of a sudden the pieces fell into place.

As an answer to a question in the Unity discussion forum Chris Tavares solved both of my problems. I just didn’t get it the first time.

The code snippet to generate a proxy without a target (or more precisely an interception pipeline without a target) is really simple.

Intercept.NewInstanceWithAdditionalInterfaces(
  Type type,
  ITypeInterceptor interceptor,
  IEnumerable<IInterceptionBehavior> interceptionBehaviors,
  IEnumerable<Type> additionalInterfaces,
  params object[] constructorParameters)


This method gives us all that we need. The type parameter lets you specify the type of the target object; as we don’t actually need a target, typeof(object) is good enough. The VirtualMethodInterceptor from Chris’ answer is what we use as the interceptor. The factory interface we want to implement goes into additionalInterfaces. And as our target is a plain object we don’t need to bother with constructorParameters.

The actual factory logic is implemented as an IInterceptionBehavior called FactoryBehavior (I’m all for self-explanatory names, by the way). But that behavior still needs an instance of the container, which is just not available at the place where all of this is wired up.

Again, Chris’ post gave me a solution that I just didn’t recognize.

Anyway, to hook it up to the container, the quickest thing to do would be to use InjectionFactory to call the Intercept.NewInstance API.

InjectionFactory is derived from InjectionMember (as is TypedFactory). InjectionMembers are Unity’s way of making very specific additions to the build pipeline of a type. An InjectionFactory tells the container how to construct an object instead of letting the container figure that out itself.

InjectionFactory has two constructors that each take a delegate that lets you specify how to create your object of interest.

I’ll just show you the signature of the greedier one of the two.

public InjectionFactory(Func<IUnityContainer, Type, string, object> factoryFunc)

The first parameter is a container straight out of the depths of the Unity infrastructure. And it’s totally for free. Below you see the new implementation of the AddPolicies(Type, Type, string, IPolicyList) method of the TypedFactory.

public override void AddPolicies(Type ignore, Type factoryType, string name, IPolicyList policies)
{
  if (factoryType.IsInterface)
  {
    InjectionFactory injectionFactory = new InjectionFactory(
      (container, t, n) => Intercept.NewInstanceWithAdditionalInterfaces(
        typeof(object),
        new VirtualMethodInterceptor(),
        new IInterceptionBehavior[] { new FactoryBehavior(container, this.selector) },
        new[] { factoryType }));

    injectionFactory.AddPolicies(ignore, factoryType, name, policies);
  }
  else if (factoryType.IsAbstract)
  {
    InjectionFactory injectionFactory = new InjectionFactory(
      (container, t, n) => Intercept.NewInstance(
        factoryType,
        new VirtualMethodInterceptor(),
        new IInterceptionBehavior[] { new FactoryBehavior(container, this.selector) }));

    injectionFactory.AddPolicies(ignore, factoryType, name, policies);
  }
  else
  {
    throw new ArgumentException("'factoryType' must either be an interface or an abstract class.", "factoryType");
  }
}

It supports interfaces and abstract classes as factoryType. Based on that distinction we use either the Intercept.NewInstanceWithAdditionalInterfaces or the Intercept.NewInstance method to create our factory implementation. The container provided by the InjectionFactory gets forwarded into our FactoryBehavior and we are done. The rest of the implementation stays the same. A usage sketch follows below.
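Registering a factory interface could then look something like this sketch. IMyFactory is a hypothetical factory interface, and the parameterless TypedFactory constructor is an assumption:

// Sketch: TypedFactory is an InjectionMember, so it plugs into RegisterType.
IUnityContainer container = new UnityContainer();
container.RegisterType<IMyFactory>(new TypedFactory());
IMyFactory factory = container.Resolve<IMyFactory>();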

Now this is more like it. We reuse large parts of the Unity infrastructure instead of reinventing the wheel and quite a bit of code becomes obsolete. As usual you can find the code for the updated TypedFactory on my CodePlex site (project TecX.Unity[Factories] and the tests that show how it works under TecX.Unity.Test[Factories]).

Behavioral Testing

Jimmy Bogard recently gave a presentation at NDC Oslo about a testing strategy he calls Holistic Testing. Have a look at the video, it’s worth your time!

Among a lot of other things he talked about why he doesn’t use mocking frameworks (or hand-crafted mock objects) very often in his tests. According to Jimmy, testing with mocks tends to couple your tests to the implementation details of your code. Instead he prefers to test the behavior of his code and ignore said implementation as far as possible.

While I don’t agree with his sample code I totally agree with his statement. But let’s have a look.

[Theory, AutoData]
public void DummyTest(Customer customer)
{
  var calculator = A.Fake<ITaxCalculator>();
  A.CallTo(() => calculator.Calculate(customer)).Returns(10);
  var factory = new OrderFactory(calculator);
  var order = factory.Build(customer);
  order.Tax.ShouldEqual(10);
}

This code snippet uses xUnit’s Theories for data-driven tests along with AutoFixture’s extension to that feature, which “deterministically creates random test data”. AutoFixture creates the Customer object. Jimmy then uses FakeItEasy to create a mock for the ITaxCalculator and sets up the return value for calls to its Calculate() method. He passes the calculator to the constructor of the OrderFactory, lets the factory create an Order and asserts that the correct tax is set on the order. Nothing spectacular so far (if you are already familiar with data Theories and AutoFixture, that is…).

But what happens if the factory no longer uses a calculator? Or if we add another parameter to the constructor? After all, the constructor is just an implementation detail that we don’t want to care about. Yet these changes will break our test code. And this is where Jimmy gets fancy.

Assuming that we already build our code according to the Dependency Injection Pattern and use a DI container in our project (that’s quite some assumption…), we already have the information about how the factory is assembled and which calculator (if any) is to be used in the configuration of said container. So why not use the container to provide us with the factory instead of new’ing it up ourselves?

I took the liberty of streamlining Jimmy’s test a bit more by letting AutoFixture create the customer as well. The ContainerDataAttribute is derived from AutoFixture’s AutoDataAttribute. My sample uses Unity instead of StructureMap but it works all the same. In contradiction to Jimmy’s statement (around 45:46 in the video), Unity does support the creation of child containers. Although I have to admit that this is the first scenario where this feature makes sense to me. But that is a different story I’ll save for another day.

[Theory, ContainerData]
public void DummyTest(OrderFactory factory, Customer customer)
{
  Order order = factory.Build(customer);
  order.Tax.ShouldEqual(10);
}

Wow. That’s what I call straight to the point. You don’t care about the creation process of your system-under-test (the factory) or your test data (the customer). You just care about the behavior which is to set the correct tax rate on your order object.
As the tax rate differs from one state to another it might not make sense to test for that specific value without controlling its calculation to some extent (e.g. via a mock object…), but that’s a common problem with HelloWorld!-samples and does not diminish the brilliance of the general concept.

So let’s see how the ContainerDataAttribute is implemented.

public class ContainerDataAttribute : AutoDataAttribute
{
  public ContainerDataAttribute()
    : base(new Fixture().Customize(
      new ContainerCustomization(
        new UnityContainer().AddExtension(
          new ContainerConfiguration()))))
  {
  }
}

We derive from AutoFixture’s AutoDataAttribute which does the heavy lifting for us (like integrating with xUnit, creating random test data, …). We call the base class’ constructor, handing it a customized Fixture object (the Fixture is AutoFixture’s lynchpin for creating test data). The customization receives a preconfigured UnityContainer instance as a parameter. As you might have guessed, UnityContainer is Unity’s DI container. Unity’s configuration system is not as advanced as that of most other containers. In particular, it does not offer a dedicated way to package configuration data (like StructureMap’s Registry, for example). But you can (ab)use the UnityContainerExtension class to achieve the same result: just place your configuration code inside your implementation of the abstract Initialize() method and add the extension to the container.

public class ContainerConfiguration : UnityContainerExtension
{
  protected override void Initialize()
  {
    this.Container.RegisterType<IFoo, Foo>();
    this.Container.RegisterType<ITaxCalculator, DefaultTaxCalculator>();
  }
}

It makes sense to re-use as much of your production configuration as possible. But you should consider modifying it in places where the tests might interfere with actual production systems (like sending emails, modifying production databases etc.).

The customization to AutoFixture hooks up two last-chance handlers for creating objects by adding them to IFixture.ResidueCollectors.

public class ContainerCustomization : ICustomization
{
  private readonly IUnityContainer container;
  public ContainerCustomization(IUnityContainer container)
  {
    this.container = container;
  }
  public void Customize(IFixture fixture)
  {
    fixture.ResidueCollectors.Add(new ChildContainerSpecimenBuilder(this.container));
    fixture.ResidueCollectors.Add(new ContainerSpecimenBuilder(this.container));
  }
}

AutoFixture calls these object creators SpecimenBuilders. The first one we hook up is responsible for creating a child container if a test requires an IUnityContainer as a method parameter. The second actually uses the container to create objects.

public class ChildContainerSpecimenBuilder : ISpecimenBuilder
{
  private readonly IUnityContainer container;
  public ChildContainerSpecimenBuilder(IUnityContainer container)
  {
    this.container = container;
  }
  public object Create(object request, ISpecimenContext context)
  {
    Type type = request as Type;
    if (type == null || type != typeof(IUnityContainer))
    {
      return new NoSpecimen();
    }
    return this.container.CreateChildContainer();
  }
}
public class ContainerSpecimenBuilder : ISpecimenBuilder
{
  private readonly IUnityContainer container;
  public ContainerSpecimenBuilder(IUnityContainer container)
  {
    this.container = container;
  }
  public object Create(object request, ISpecimenContext context)
  {
    Type type = request as Type;
    if (type == null)
    {
      return new NoSpecimen();
    }
    return this.container.Resolve(type);
  }
}

The NoSpecimen class is AutoFixture’s way of telling its kernel that this builder can’t construct an object for the current request. Something like a NullObject.

Well and that’s it. A couple of dozen lines of code. A superior testing framework (xUnit). A convenient way of creating test data (AutoFixture). A DI container (which should be mandatory for any project if you ask me…). And writing maintainable tests for the correct behavior of your code becomes a piece of cake. I love it! 🙂

You can find the sample code on my playground on CodePlex (solution TecX.Playground, project TecX.BehavioralTesting).

Pipes and Filters

Pipes and filters (or just pipeline) is another common pattern. Oren Eini and Jeremy Likness both have very interesting posts about it on their respective blogs.

While Jeremy’s post aims at the slick creation of pipelines, Oren talks more about the pattern itself and how to implement it in a specific manner (using input and output values of type IEnumerable<T>).

An interesting argument came up in the comments to Oren’s post: “Why not use LINQ instead of your custom pipeline (framework)?”

I won’t repeat all the pros and cons here. Better read the posts yourself, they are worth your time!

My opinion on the topic: LINQ is a great tool. But I believe it’s neither the only solution for chains of queries and transformations, nor is it the best in all cases.

I will pick up one point from the “pro LINQ” point of view. “With LINQ you can have different input and output types.”

Sure you can. But who says you can’t do that with pipes and filters just as easily?

We define an abstract base class Filter<TIn, TOut>

public abstract class Filter<TIn, TOut>
{
  public abstract IEnumerable<TOut> Process(IEnumerable<TIn> input);
  public Filter<TIn, TNext> Pipe<TNext>(Filter<TOut, TNext> next)
  {
    return new Pipe<TIn, TOut, TNext>(this, next);
  }
}

and a derived class Pipe<TIn, T, TOut>

public class Pipe<TIn, T, TOut> : Filter<TIn, TOut>
{
  private readonly Filter<TIn, T> source;
  private readonly Filter<T, TOut> destination;
  public Pipe(Filter<TIn, T> source, Filter<T, TOut> destination)
  {
    this.source = source;
    this.destination = destination;
  }
  public override sealed IEnumerable<TOut> Process(IEnumerable<TIn> input)
  {
    var x = this.source.Process(input);
    var result = this.destination.Process(x);
    return result;
  }
}

With these two as a base we can easily chain filters with different input and output types.

We can also fine-tune the ends of the pipeline a bit so that the code is a nicer read. With a small extension method and a just as small dummy class we can use any enumerable as the starting point for defining a pipeline.

public static class FilterExtensions
{
  public static Filter<TIn, TOut> Pipe<TIn, TOut>(this IEnumerable<TIn> enumerable, Filter<TIn, TOut> filter)
  {
    return new PipelineStartDummy<TIn, TOut>(enumerable, filter);
  }
  private class PipelineStartDummy<TIn, TOut> : Filter<TIn, TOut>
  {
    private readonly IEnumerable<TIn> enumerable;
    private readonly Filter<TIn, TOut> filter;
    public PipelineStartDummy(IEnumerable<TIn> enumerable, Filter<TIn, TOut> filter)
    {
      this.enumerable = enumerable;
      this.filter = filter;
    }
    public override IEnumerable<TOut> Process(IEnumerable<TIn> input)
    {
      // The pipeline's own input is ignored; the wrapped enumerable is the source.
      return this.filter.Process(this.enumerable);
    }
  }
}

If we don’t care what comes out of the pipeline and just want to start processing values from the source we can use another extension method that encapsulates Oren’s enumerator magic.

// This extension method lives in the same static class as Pipe() above.
public static void Start<TIn, TOut>(this Filter<TIn, TOut> filter)
{
  // Enumerating the result drains the pipeline and forces every filter to run.
  var enumerable = filter.Process(null);
  var enumerator = enumerable.GetEnumerator();
  while (enumerator.MoveNext()) { }
}

And now we put it all together and get the following:

new Numbers(3).Pipe(new Square()).Pipe(new Printer()).Start();

Numbers just returns the integer values between 1 and its constructor parameter. Square squares all input values and Printer writes its input to the console. The call to Start() kicks off the processing. Sketches of the three filters follow below.
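The three filters are not shown in the original snippets. Minimal sketches could look like this; the exact signatures, especially Numbers ignoring its input and acting as the source, are assumptions:

public class Numbers : Filter<object, int>
{
  private readonly int max;
  public Numbers(int max) { this.max = max; }
  public override IEnumerable<int> Process(IEnumerable<object> input)
  {
    // Acts as the source of the pipeline; the input is ignored.
    return Enumerable.Range(1, this.max);
  }
}
public class Square : Filter<int, int>
{
  public override IEnumerable<int> Process(IEnumerable<int> input)
  {
    foreach (int i in input) { yield return i * i; }
  }
}
public class Printer : Filter<int, int>
{
  public override IEnumerable<int> Process(IEnumerable<int> input)
  {
    foreach (int i in input)
    {
      Console.WriteLine(i);
      yield return i;
    }
  }
}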

While it is not the most impressive example implementation of the pipes and filters pattern I believe that it demonstrates how powerful and flexible the pattern can be. And the code is still readable and very explicit about what you are doing. With just a few lines of code you have a composable, easy to understand solution where you can recombine filters in different orders to change the behavior of the pipeline. And you can do just about anything inside of those filters (think validation or enriching the values traversing through the pipeline with data from services or persistent storage). You can even change the type of the values you are processing between steps. This is yet another case of “Like it a lot!”

Specification Pattern

The Specification Pattern. Yet another one I find useful in daily business. But a few enhancements can make it even more useful and a lot nicer to handle.

1, 2, many!

The CompositeSpecification is most often implemented with two fields for spec1 and spec2 (or left and right). The AndSpecification and OrSpecification evaluate these fields with their respective operator.

public class Or<T> : CompositeSpecification<T>
{
  // ...

  public override bool IsSatisfiedBy(T candidate)
  {
    return spec1.IsSatisfiedBy(candidate) || spec2.IsSatisfiedBy(candidate);
  }
}

This can result in a degenerated tree structure when you link many specifications with the same operator (e.g. a.Or(b).Or(c)...Or(n)).

To avoid this I decided to change the behavior of the composite. Instead of two fields it uses a list of children. When two specifications are linked, the code checks whether either of them is of the same type as the composite; if so, that composite’s children are added to the list instead of the composite itself.

Sounds more complicated than it is. Let’s see some code.

public abstract class CompositeSpecification<T> : Specification<T>
{
  private readonly List<Specification<T>> children;
  public CompositeSpecification()
  {
    this.children = new List<Specification<T>>();
  }
  public IEnumerable<Specification<T>> Children
  {
    get { return children; }
  }
  public int Count
  {
    get { return this.children.Count; }
  }
  protected void Add(Specification<T> specification)
  {
    this.children.Add(specification);
  }
  protected void AddRange(IEnumerable<Specification<T>> specifications)
  {
    this.children.AddRange(specifications);
  }
}

public class Or<T> : CompositeSpecification<T>
{
  public Or(Specification<T> specification, Specification<T> other)
  {
    this.Include(specification);
    this.Include(other);
  }
  public override string Description
  {
    get { return " || "; }
  }
  public override bool IsSatisfiedBy(T candidate)
  {
    foreach (Specification<T> specification in this.Children)
    {
      if (specification.IsSatisfiedBy(candidate))
      {
        return true;
      }
    }
    return false;
  }
  private void Include(Specification<T> specification)
  {
    Or<T> or = specification as Or<T>;
    if (or != null)
    {
      this.AddRange(or.Children);
    }
    else
    {
      this.Add(specification);
    }
  }
}

And this is how two specifications can be linked with an Or operator.

public abstract class Specification<T>
{
  // ...
  
  public abstract bool IsSatisfiedBy(T candidate);
  public Specification<T> Or(Specification<T> other)
  {
    return new Or<T>(this, other);
  }
}

In the end this will put all successive Ors into a single list, which makes finding the right specification in the tree a lot easier.

What do we have?

Generating a human-readable representation of the specification tree can be tedious but is often beneficial if you need to see “what you have”. The easiest way to traverse a tree structure is a Visitor. The same pattern is used by Microsoft in their expression trees.

We add an Accept method and a property for the description of the specification to the base classes

public abstract class Specification<T>
{
  // ...
  public abstract string Description { get; }
  public virtual void Accept(SpecificationVisitor<T> visitor)
  {
    visitor.Visit(this);
  }
}

public abstract class CompositeSpecification<T> : Specification<T>
{
  // ...
  public override void Accept(SpecificationVisitor<T> visitor)
  {
    visitor.Visit(this);
  }
}

public class Or<T> : CompositeSpecification<T>
{
  // ...
  public override string Description
  {
    get { return " || "; }
  }
}

Define a base class for the SpecificationVisitor

public abstract class SpecificationVisitor<T>
{
  public abstract void Visit(Specification<T> specification);
  public abstract void Visit(CompositeSpecification<T> composite);
}

And the implementation of a PrettyPrinter becomes as simple as this:

public class PrettyPrinter<T> : SpecificationVisitor<T>
{
  private readonly StringBuilder sb;
  public PrettyPrinter()
  {
    this.sb = new StringBuilder(250);
  }
  public override void Visit(Specification<T> specification)
  {
    this.sb.Append(specification.Description);
  }
  public override void Visit(CompositeSpecification<T> composite)
  {
    this.sb.Append("(");
    foreach (Specification<T> child in composite.Children)
    {
      child.Accept(this);
      this.sb.Append(composite.Description);
    }
    int l = composite.Description.Length;
    this.sb.Remove(sb.Length - l, l);
    this.sb.Append(")");
  }
  public override string ToString()
  {
    return this.sb.ToString();
  }
}

And this gives you a nice and friendly printout of your graph

var spec = new AlwaysFalse().Or(new AlwaysTrue().And(new Odd())).Or(new AlwaysTrue());
var printer = new PrettyPrinter<int>();
spec.Accept(printer);
string friendly = printer.ToString(); // (false || (true && Odd) || true)
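The leaf specifications are not shown above. Minimal sketches, assuming they work on int (And<T> mirrors Or<T>), could look like this:

public class Odd : Specification<int>
{
  public override string Description
  {
    get { return "Odd"; }
  }
  public override bool IsSatisfiedBy(int candidate)
  {
    return candidate % 2 != 0;
  }
}
public class AlwaysTrue : Specification<int>
{
  public override string Description
  {
    get { return "true"; }
  }
  public override bool IsSatisfiedBy(int candidate)
  {
    return true;
  }
}
public class AlwaysFalse : Specification<int>
{
  public override string Description
  {
    get { return "false"; }
  }
  public override bool IsSatisfiedBy(int candidate)
  {
    return false;
  }
}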

The first time I tried to attach some zipped source code I found out that WordPress won’t let me… Get the source code here (project TecX.Common, folder Specifications, and the test suite that puts them to use in TecX.Common.Test).

Composition over Inheritance: Custom Collections

Composition over inheritance is a known but unfortunately not very well understood OOD principle.

All too often I see code like this:

public class MyFooList : List<Foo> { /* .. */ }

or this:

public class MySomething
{
  public List<Foo> Foos { get; set; }
}

The derived class adds a method or two. Maybe a property. And it carries all the weight of the super-class List<T>. The property of type List<Foo> is there for anybody to fill with content of unverified quality. Why?!

List<T> is a powerful class but it does not give you any control over the objects that are added to it. What if your custom collection needs to protect invariants like “you can only add Foos with a Bar > 7”? Or what if consumers may only append to your collection but never remove objects from it? There is no easy way to model this when you inherit from List<T> or provide raw lists as part of your API.

If you don’t need the majority of the 30-some methods and ~10 properties, why would you expose them? They just blur your interface. A simple custom collection that only offers what your consumers need to know is simple enough to write. And if you implement IEnumerable<T> you can even unleash all the power of LINQ on your collection (see the short example after the code below).

If you have many or very complex invariants encapsulate them in validators to protect your collection. Let the collection tell its consumers about the rules a Foo instance violates when they are trying to add it to the collection.

[TestMethod]
public void Should_NotifyOnViolatedInvariants()
{
  var collection = new MyFooCollection(new IFooValidator[] { new NotNullValidator(), new MinBarValidator(7) });
  
  IFooValidator violatedInvariant;
  Assert.IsFalse(collection.Add(null, out violatedInvariant));
  Assert.IsInstanceOfType(violatedInvariant, typeof(NotNullValidator));

  Assert.IsFalse(collection.Add(new Foo() { Bar = 1 }, out violatedInvariant));
  Assert.IsInstanceOfType(violatedInvariant, typeof(MinBarValidator));
  Assert.IsTrue(collection.Add(new Foo { Bar = 8 }, out violatedInvariant));
}

public class MyFooCollection : IEnumerable<Foo>
{
  private readonly IEnumerable<IFooValidator> validators;
  private readonly List<Foo> foos;
  public MyFooCollection(IEnumerable<IFooValidator> validators)
  {
    this.validators = validators ?? new IFooValidator[0];
    this.foos = new List<Foo>();
  }
  public bool Add(Foo foo, out IFooValidator violatedInvariant)
  {
    violatedInvariant = this.validators.FirstOrDefault(v => v.ViolatesInvariant(foo));
    if (violatedInvariant == null)
    {
      this.foos.Add(foo);
      return true;
    }
    return false;
  }
  public IEnumerator<Foo> GetEnumerator()
  {
    return this.foos.GetEnumerator();
  }
  IEnumerator IEnumerable.GetEnumerator()
  {
    return this.GetEnumerator();
  }
}
public interface IFooValidator
{
  string Description { get; }
  bool ViolatesInvariant(Foo foo);
}
public class MinBarValidator : IFooValidator
{
  private readonly int minBar;
  public MinBarValidator(int minBar)
  {
    this.minBar = minBar;
  }
  public string Description
  {
    get { return string.Format("foo.Bar must be greater than {0}", this.minBar); }
  }
  public bool ViolatesInvariant(Foo foo)
  {
    return foo.Bar <= this.minBar;
  }
}
public class NotNullValidator : IFooValidator
{
  public string Description
  {
    get { return "foo must not be null."; }
  }
  public bool ViolatesInvariant(Foo foo)
  {
    return foo == null;
  }
}
public class Foo
{
  public int Bar { get; set; }
}
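Because MyFooCollection implements IEnumerable<Foo>, LINQ works on it out of the box. A quick, hypothetical usage example:

// Query the validated contents just like any other sequence.
var bigFoos = collection.Where(f => f.Bar > 10).ToList();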

You are still using a list under the covers. But your custom collection (just a couple of lines long!) is now streamlined to provide only the functionality that is really needed. If you expose a raw List<T>, anybody can modify the contents of that list and there is no way to enforce the collection’s invariants. With the custom collection your intentions become clearer and you make using your API safer. Isn’t that worth the bit of extra effort?

Rebuttal: Composite Pattern

Oren Eini (aka Ayende Rahien) has an interesting series on his blog where he reviews the GoF Design Patterns and how they apply (or don’t) to modern software development.

As part of this series he also reviews the Composite Pattern and finishes with the following advice:

Recommendation: If your entire dataset can easily fit in memory, and it makes sense, go ahead. Most of the time, you probably should stay away.

I find his conclusion somewhat limited as he only seems to take data storage/structures into account. But what about code that “does” things?

If you consider the following interface:

public interface IWriter
{
  void Write(Foo foo);
}

there may be a lot of implementations that write somewhere. Like the local file system. A database. A WCF service or any number of other targets. And what if you want to write to several of those targets and not only one? Do you want to change your code to handle a List<IWriter>? Everywhere? No? Me neither. Instead I prefer to leave my existing code that knows how to write to a single target as is and introduce a composite writer that does the job of writing to multiple targets for me.

public class CompositeWriter : IWriter
{
  private readonly List<IWriter> writers = new List<IWriter>();
  public void Write(Foo foo)
  {
    foreach(var writer in this.writers)
    {
      writer.Write(foo);
    }
  }
  public void Add(IWriter writer)
  {
    this.writers.Add(writer);
  }
}

The sample ignores validation and error handling (does the composite ignore errors in its targets and just continue? Do the targets have to handle errors themselves? etc.). It also ignores the possible complexity of the Foo it has to write. This is where I agree with Oren: if this structure is large and you need to duplicate it for every Write(), you are screwed.

I also ignore the delay that might be introduced by writing to an arbitrarily large set of targets. But you can handle that inside the composite as well. Maybe you introduce timeouts for calling a single writer. Or you write a decorator that calls a writer on a background thread if that fits your needs (see the sketch after the next paragraph). You can make your solution as complex or simple as you have to. The fact is: your original code won’t know about that complexity. It just calls an implementation of your IWriter interface. I think this is what SRP is about.

You isolate the callers from the possibly complex task of calling out to multiple targets. Your composite is responsible for dealing with that complexity. And that is its only responsibility.
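Such a background-thread decorator could be as simple as the following sketch (illustrative only, without error handling):

// Pushes the actual write onto the thread pool so the caller does not
// wait for slow targets. Errors and ordering are deliberately ignored here.
public class BackgroundWriter : IWriter
{
  private readonly IWriter inner;
  public BackgroundWriter(IWriter inner)
  {
    this.inner = inner;
  }
  public void Write(Foo foo)
  {
    ThreadPool.QueueUserWorkItem(_ => this.inner.Write(foo));
  }
}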

From Static to Flexible

Back in the “good old days” static classes and singletons were widely used patterns. They could be called from anywhere and you didn’t have to spend a thought on dependency management. Or unit testing. Because most often you couldn’t test code that called methods on those objects. They usually needed config files, databases, web services, message queues, more files, network shares… just about anything. And none of that plays nice with unit tests. Pity that these days can also be called “earlier this year”…

As a “tame” example of such a static object think of the File class. Now imagine you have a component that uses File.Exists(). How do you test that code? Do you really want to create a file whenever you try to run a test of your component?

Let’s look at a sample to show how you can still test that code.

public class MyComponent
{
  public void DoIt(string pathToFile)
  {
    if(File.Exists(pathToFile)) { /* ... */ }
    else { /* ... */ }
  }
}

In order to simulate the situation where the file exists you would have to create said file. That’s not what unit tests are for. They should be almost effortless to write. Wrestling with the file system in a unit test scenario is definitely not effortless.

A common recommendation is to write a wrapper for the File class which is non-static. That would look like this:

public class FileWrapper
{
  public bool Exists(string path)
  {
    return File.Exists(path);
  }
}

But that’s not the whole story. We need a way to simulate the result of FileWrapper.Exists() or we would just add code without gaining anything. To do that we borrow the solution Microsoft used for HttpContext: we introduce a base class with all-virtual properties and methods.

public class FileBase
{
  public virtual bool Exists(string path)
  {
    throw new NotImplementedException();
  }
}
public class FileWrapper : FileBase
{
  public override bool Exists(string path)
  {
    return File.Exists(path);
  }
}

Now we change our component to use some instance of FileBase instead of the static File class.

public class MyComponent
{
  private readonly FileBase file;
  public MyComponent(FileBase file)
  {
    this.file = file;
  }
  public void DoIt(string pathToFile)
  {
    if(this.file.Exists(pathToFile)) { /* ... */ }
    else  { /* ... */ }
  }
}

We can use Moq or some hand-crafted test double that implements FileBase to simulate the existence or non-existence of said file very easily.
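A minimal sketch with Moq (assuming the Moq package is referenced; the path is just an example):

// Simulate an existing file without touching the file system.
var file = new Mock<FileBase>();
file.Setup(f => f.Exists(@"c:\some\file.txt")).Returns(true);

var component = new MyComponent(file.Object);
component.DoIt(@"c:\some\file.txt"); // takes the "file exists" branch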

To make things a bit nastier, imagine that MyComponent is used throughout a large legacy codebase. You just changed the signature of its constructor. Now you would have to change all calls to that constructor immediately or the system will be broken. That’s not an option.

But if we add a constructor overload that delegates to the constructor we just created and uses the default implementation for FileBase we can leave all calls to the default constructor in place but still have the improved testability of the constructor that follows the Dependency Injection Pattern.

public class MyComponent
{
  private readonly FileBase file;
  public MyComponent() : this(new FileWrapper())
  {
  }
  public MyComponent(FileBase file)
  {
    this.file = file;
  }
  // more code
}

I know that this solution is not pure. The DI pattern advocates the use of a single constructor that takes all the dependencies of a class. I prefer that as well. But “legacy enterprise systems” are rarely what I’d call clean and sometimes you have to take some dirty intermediate steps towards clean solutions. With the technique outlined above you can improve such systems one step at a time. You can control how far these improvements are visible by adding or removing constructors that create default implementations for their dependencies and replacing calls to these default constructors one at a time. Rome wasn’t built in a day…

Settings Objects

Configurability is a member of the *-ability gang. Influence how your components work at runtime. Store configuration information somewhere. Either in code, in a config file or in some type of database. Load it at runtime and use it in your application.

You can hard-code the way these values are retrieved directly in your components. But that makes those components hard to test and limits your flexibility (another gang member) later in the process. You can hand the values to your component one by one, either as constructor parameters or method parameters, which tends to get very noisy if you need more than a few. The Parameter Object refactoring can reduce that noise a lot. Yet it does not solve the problem of how to get the values into that parameter object.

You can use some custom ConfigurationManager or extend the .NET configuration engine. But what about defining defaults for your configuration? Defaults for different scenarios maybe? Meaningful defaults can greatly reduce the clutter in your configuration.

I found another approach very useful: Encapsulate the way your configuration values are retrieved along with the defaults into something I call Settings Objects.

They contain a number of public virtual properties that are used to retrieve the desired configuration values.

public class DemoSettings
{
  private static class Defaults
  {
    public static readonly TimeSpan Timeout = TimeSpan.FromSeconds(30);
  }
  public virtual TimeSpan Timeout
  {
    get
    {
      string key = GetKeyFor(() => Timeout);
      string valueFromConfigFile = ConfigurationManager.AppSettings[key];
      TimeSpan ts;
      if (!string.IsNullOrEmpty(valueFromConfigFile) &&
          TimeSpan.TryParse(valueFromConfigFile, CultureInfo.CurrentCulture, out ts))
      {
        return ts;
      }
      return Defaults.Timeout;
    }
  }
  private string GetKeyFor<TProperty>(Expression<Func<TProperty>> memberSelector)
  {
    MemberExpression expression = (MemberExpression) memberSelector.Body;
    string key = this.GetType().Name + "." + expression.Member.Name;
    return key;
  }
}

The code in this demo tries to load configuration values from a config file following an easy convention: values are stored as <NameOfTheSettingsClass>.<PropertyName> in the appSettings section (e.g. “DemoSettings.Timeout”).

If no value is found or the value does not meet the format expectations a hard-coded default is used. If performance is critical you can use a private member or any kind of cache to store the retrieved value. You can implement expiration for your cached values if you want to.

If you derive from DemoSettings you can use different default values in your derived classes. Or you can hard-code values entirely, which is especially useful for testing scenarios (a sketch follows below). Or you can turn that procedure upside down and design a base class that uses hard-coded values (e.g. in the beginning of the development of a new component) and derive settings objects with different methods for information retrieval later as needed.
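For example, a test-only subclass might pin the timeout to a fixed value (a sketch, not code from the original project):

// Hard-coded settings for tests; no config file lookup involved.
public class TestSettings : DemoSettings
{
  public override TimeSpan Timeout
  {
    get { return TimeSpan.FromMilliseconds(1); }
  }
}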

You can read from a database instead of a config file. Or you can implement a “three strikes” scenario: check whether the value is defined in your local config file; if not, check the database; if there is no value in the database either, use the default. This way values in a config file override values in a database, which in turn override the hard-coded defaults.

Btw.: If you are using a database, caching might make sense to avoid too many round-trips. And if you are already on .NET 4.5 you can use the CallerMemberNameAttribute instead of an Expression to figure out the name of the property you want to retrieve.
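On .NET 4.5 the key lookup could shrink to something like this sketch (requires using System.Runtime.CompilerServices):

// The compiler fills in propertyName with the name of the calling property.
private string GetKeyFor([CallerMemberName] string propertyName = null)
{
  return this.GetType().Name + "." + propertyName;
}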

You can argue that Settings Objects violate the Single Responsibility Principle and I agree with you. A settings DTO and a separate loader class of some kind would be more SRP’ish. But in this case I value the improved usability of settings objects more than following SRP. They are small, self-contained objects. If you don’t need the additional configurability, stick with hard-coded values. That minimizes the effort until you really have a need for that flexibility (and someone is willing to pay you for it). You don’t have to write another “framework” to fill your settings objects with values from various sources. And you can decide on a case-by-case basis whether you need support for config files, databases or something else, or if hard-coded values are good enough for now. I like this approach and it has served me well on several projects.

The Builder Pattern

A couple of years ago Jan Van Ryswyck published some articles on the Builder Pattern.

In his series he describes the pattern in the context of test data creation.

While it is absolutely true that this pattern is very useful when you repeatedly have to create test data with small variations, it does not stop there!

Frameworks like Enterprise Library or Autofac use the same pattern to create their configuration settings.

IConfigurationSource configSource = new DictionaryConfigurationSource();
ConfigurationSourceBuilder builder = new ConfigurationSourceBuilder();

builder.ConfigureLogging()
  .LogToCategoryNamed("DataAccess")
  .SendTo.RollingFile("MyListener")
    .FormatWith(
      new FormatterBuilder()
        .TextFormatterNamed("MyFormatter")
        .UsingTemplate("{timestamp} {message}"));

builder.UpdateConfigurationWithReplace(configSource);
IUnityContainer container = new UnityContainer();
IContainerConfigurator configurator = new UnityContainerConfigurator(container);
EnterpriseLibraryContainer.ConfigureContainer(configurator, configSource);

But it still doesn’t stop there! If you are developing a component that others will have to use, builders are a way to use Dependency Injection in your code without forcing a specific DI container, or the DI pattern at all, on those consumers.

Mark Seemann, author of Dependency Injection in .NET, outlines how this can be done in his answer to a question on StackOverflow.

You can encapsulate meaningful default values in your builders without polluting your API with various constructor overloads. This allows novice consumers to use your components easily while experts can swap the defaults for custom implementations if necessary, or even use a DI container to wire them up and bypass the builders. A minimal sketch of such a builder follows below.
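The component and its dependency here are hypothetical names, not taken from Mark’s answer:

// A component that expects its dependency via constructor injection.
public class MessagePump
{
  private readonly IWriter writer;
  public MessagePump(IWriter writer)
  {
    this.writer = writer;
  }
}

// The builder ships a meaningful default but lets experts swap it out.
public class MessagePumpBuilder
{
  private IWriter writer = new ConsoleWriter(); // hypothetical default implementation

  public MessagePumpBuilder WithWriter(IWriter writer)
  {
    this.writer = writer;
    return this;
  }

  public MessagePump Build()
  {
    return new MessagePump(this.writer);
  }
}

A novice consumer just calls new MessagePumpBuilder().Build(); an expert calls WithWriter() first or bypasses the builder entirely.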