The Normalization of Deviance

The Normalization of Deviance is a concept that describes a situation in which an unacceptable practice has gone on for so long without a serious problem or disaster that this deviant practice actually becomes the accepted way of doing things.

Image credit: Challenger explosion

You’re familiar with this concept already. You’ve encountered it every time you’ve seen someone doing something they know to be wrong but justifying it with “we’ve never had a problem before.” This is the person who consistently buys things s/he can’t afford with credit cards. This is the person who goes out drinking and drives home. This is the software engineer who writes code without tests. This is the business that, through continual deferment, ignores the technical debt issues raised by its engineers.

As software engineers, we know we should remove dead code in projects. We know that we should automate software deployment. We know we should provide reliable automated tests for our features. We know we should build and test our software on a machine other than our personal dev box. We know that we should test our software in a prod-like environment that is not prod.

Do you do these things in your daily work? Does your organization support your efforts?

How do you identify the Normalization of Deviance?

One of the challenges you will face identifying Normalization of Deviance is the fact that things you do on a daily basis are… normal.

There are several “smells” that could indicate that your organization is having problems.

  • You have “official” policies that do not describe how you actually do work.
  • You have automation that routinely fails and requires handholding to reach success.
  • You have unit or integration tests that fail constantly or intermittently such that your team
    does not regard their failure as a “real” problem.
  • You have difficult personalities in key positions who turn conversations about their effectiveness
    into conversations about your communication style.

All of these issues are “of a kind,” meaning that they are all examples of routinely accepted failure. This is obviously not an exhaustive list.

Why is it a problem?

You will eventually have a catastrophic failure. Catastrophic failures seldom occur in a vacuum. Usually there are a host of seemingly unrelated smaller problems that are part of daily life. Catastrophic failures usually occur when the stars align and the smaller issues coalesce in such a way that some threat vector is allowed to completely wreck a process. This is known as the Swiss Cheese Model of failure. I first learned about the Swiss Cheese Model in a book called The Digital Doctor, which is a holistic view of the positive and negative effects of software in the medical world. This book is well worth reading. The section on the deadly consequences of “alert fatigue” would be of special interest to software engineers and UX designers.

A Real World Example

A fascinating case of software failure that destroyed a company overnight is the story of Knight Capital. In his writeup, Doug Seven lays the blame on the lack of an automated deployment system. I agree, though I think the problems started much, much earlier. Doug writes:

The update to SMARS was intended to replace old, unused code referred to as “Power Peg” – 
functionality that Knight hadn’t used in 8-years (why code that had been dead for 
8-years was still present in the code base is a mystery, but that’s not the point). 

It’s my point. The Knight Capital failure began 8 years earlier, when dead code was left in the system. I’ve had conversations with The Business in which I’ve tried to justify removing dead code. It’s hard to make them understand the danger. “It’s not hurting anything, is it? It hasn’t been a problem so far, has it?” No, but it will be.

The second nail in Knight Capital’s coffin was that they chose to repurpose an old flag that had been used to activate the old functionality. As Doug Seven writes:

The code was thoroughly tested and proven to work correctly and reliably.
What could possibly go wrong?


The final nail is that Knight Capital used a manual deployment process. They were to deploy the new software to 8 servers. The deployment technician missed one. I don’t actually know this, but I can just imagine the technician staying after-hours to do the upgrade and wanting nothing more than to go home to his/her family or happy-hour or something.

At 9:30 AM Eastern Time on August 1, 2012 the markets opened and Knight began processing 
orders from broker-dealers on behalf of their customers for the new Retail Liquidity 
Program. The seven (7) servers that had the correct SMARS deployment began processing 
these orders correctly. Orders sent to the eighth server triggered the supposedly 
repurposed flag and brought back from the dead the old Power Peg code.

There were more issues during their attempt to fix the problem, but none of it would have happened except that these three seemingly minor problems coalesced into a perfect storm of failure. The end result?

Knight Capital Group realized a $460 million loss in 45-minutes... Knight only 
has $365 million in cash and equivalents. In 45-minutes Knight went from being the 
largest trader in US equities and a major market maker in the NYSE and NASDAQ to bankrupt.

How do you fix it?

I’m still figuring that out. Luckily, Redacted Inc doesn’t have too many of these sorts of problems, so my opportunities for practice are few, but here are my thoughts so far.

The biggest challenge in these scenarios is that people are so used to accepting annoying or unreliable processes as normal that they cease to see them as daily failure. It’s not until after disaster has struck that it’s clear that accepted processes were in fact failures. Nobody at Knight Capital was thinking “jeez, that dead code is really hurting us.”

There’s an old management adage: “You can’t manage what you can’t measure.” You can start addressing Normalization of Deviance issues by identifying risky patterns and practices in your organization’s daily standard operating procedure. If you can find a way, assign a cost to them. Consider ways in which these normal failures could align to cause catastrophe. If you have a sympathetic ear in management, start talking to them about this. Introduce your manager to these concepts. Tell them about Knight Capital. Your goal is to get management and The Business to see the failures for what they are. By measuring the risks and costs of accepted failure to your organization, you will have an easier time getting your voice heard.

Most importantly, come up with a plan to address the issues. It’s not enough to say “this is a problem.” You need to say “this is a problem, and here are some solutions.” Go further still: show how you would get your team from “here” to “there.” Try to design solutions that make the day-to-day work easier, not harder. Jeff Atwood calls this the Pit of Success. His blog scopes this concept to code, but it applies to processes as well. You want your team to “fall into” the right way to do things.

Another potential source of positive feedback is the new members of your team. It may be hard to get them to open up for fear of crossing the wrong person, but they are new to your organization and will see more clearly the things that look like failures waiting to happen. Nothing you are doing is normal to them yet.

Dependency Injection Patterns


I’ve seen several different approaches to Dependency Injection, each of which has its own strengths and weaknesses. I run an internship program in which I teach these patterns to college students. I believe each pattern and anti-pattern has its pros and cons, but my observation is that even experienced devs haven’t fully articulated the cost-benefit of each approach to themselves. This is my attempt to do that.

Property Injection

“Property Injection” or “Setter Injection” refers to the process of assigning dependencies to an object through a Property or Setter method.


public class Widget
{
    public Bar Bar { get; set; }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

There are pros and cons to this approach.


Pros:

  • Enables easy faking for test purposes.
  • Keeps method signatures tidy.
  • Dependencies are clearly specified if you are looking at the class.

Cons:

  • Temporal Dependency

What is a temporal dependency? Perhaps it’s best to illustrate with an example. Here is my first attempt at using the class as defined above.

    var widget = new Widget();
    widget.Foo("Hello World!");

Did you spot the problem? I never set Bar. When I attempt to call Foo I’ll get the oh-so-helpful NullReferenceException. I find out after I try to use the class that I have a missing dependency. I have to open the class to find out which dependency is missing. Then I have to modify my code as follows:

var widget = new Widget();
widget.Bar = new Bar();
widget.Foo("Hello World!");

It’s called a temporal dependency because I have to set it before I can call any methods on the class. What’s worse, the API of the class doesn’t give me any indication that anything is wrong until after I attempt to run it.

Method Injection

“Method Injection” refers to passing dependencies to the method that uses them.


public class Widget
{
    public void Foo(Bar bar, string someValue)
    {
        // snipped
    }
}


Pros:

  • No temporal dependencies
  • Dependencies are clearly communicated via the API
  • Easily replace dependencies with fakes for testing purposes

Cons:

  • Method signature explosion
  • Method signature fragility
  • Clients have to concern themselves with the class’s dependencies

What are method signature explosion and fragility? Method Signature Explosion means that the number of arguments to my methods increases as dependencies change. This leads to Method Signature Fragility, which means that as dependencies change, clients of the method have to change as well. In other words, we lose the benefit of encapsulated logic.
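A quick sketch of the problem (Bar and Baz are illustrative types, not from any real API): suppose Foo later needs a second dependency.

```csharp
public class Widget
{
    // Version 1 took (Bar bar, string someValue).
    // Version 2 grows a Baz dependency, so the signature changes
    // and every existing call site must be updated before it compiles.
    public void Foo(Bar bar, Baz baz, string someValue)
    {
        bar.SomeMethod(someValue);
        baz.SomeOtherMethod(someValue);
    }
}
```

An implementation detail of Widget has leaked into every one of its clients; that is the fragility.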

Constructor Injection

Constructor Injection is the process of making dependencies available to a class through its constructor.


public class Widget
{
    private Bar _bar;

    public Widget(Bar bar)
    {
        _bar = bar;
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}


Pros:

  • Enables easy faking for test purposes.
  • Keeps method signatures tidy.
  • Dependencies are clearly specified through the API
  • No temporal dependencies
  • No Method Signature Explosion
  • No Method Signature Fragility

Cons:

  • None – other than those inherent to the nature of using Dependency Injection in the first place.

Of the three approaches listed so far, I strongly prefer Constructor Injection. I see nothing but benefits in this approach.
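To illustrate the testing benefit, here is a sketch (it assumes you extract an IBar interface, which the examples above do not show): a hand-rolled fake passes straight through the constructor.

```csharp
public interface IBar
{
    void SomeMethod(string someValue);
}

// A hand-rolled fake for tests: it records the value it receives.
public class FakeBar : IBar
{
    public string LastValue { get; private set; }
    public void SomeMethod(string someValue) { LastValue = someValue; }
}

public class Widget
{
    private readonly IBar _bar;

    public Widget(IBar bar)
    {
        _bar = bar;
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}

// In a test:
//   var fake = new FakeBar();
//   new Widget(fake).Foo("Hello World!");
//   // fake.LastValue now holds "Hello World!"
```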

Lazy Injection

If you’re getting started with Dependency Injection, I strongly recommend researching a Dependency Injection Framework such as Ninject to make constructing your class hierarchies easy. If you’re not ready to bite that off you might consider using Lazy Injection. This is a technique by which your constructor arguments are given default values so that you can instantiate your class with all of your default system values at run-time, but pass fakes during test-time.


public class Widget
{
    private Bar _bar;

    // C# default parameter values must be compile-time constants,
    // so the default is applied in the constructor body instead.
    public Widget(Bar bar = null)
    {
        _bar = bar ?? new Bar();
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}

You can do this with Property Injection as well, mitigating some of the cons of that approach. You are still left opening the class to figure out how and what to fake however.


public class Widget
{
    public Bar Bar { get; set; }

    public Widget()
    {
        Bar = new Bar();
    }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

Service Locator

Service Locator is widely considered to be an anti-pattern. To understand why, read “ServiceLocator is an Anti-Pattern”.
Service Locator involves making an open-ended registry of dependencies widely available to any class that wants them.


public class Widget
{
    public Bar Bar { get; private set; }

    public Widget()
    {
        Bar = ServiceLocator.Get<Bar>();
    }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

On the surface this looks awesome.


Pros:

  • Keeps method signatures tidy.
  • No temporal dependencies
  • No Method Signature Explosion
  • No Method Signature Fragility

Cons:

  • Dependencies are not clearly specified through the API
  • Because my API doesn’t communicate my dependencies, I have to understand the class’s implementation details to properly test it.
  • It encourages dependency explosion inside the class. A class with too many constructor arguments is a “smell,” and I lose the benefit of being confronted with that “smell” if I use ServiceLocator.

Despite these flaws, it is sometimes useful as a scaffolding mechanism to introduce a Dependency Injection Framework into an application that was not designed with Dependency Injection in mind. I want to stress that I believe this pattern should only be used as an interim step on the road to full Dependency Injection Framework support in legacy applications. You should make it a point to remove ServiceLocator as quickly as possible after introducing it.

Closing Thoughts

These patterns are by no means exhaustive, but they are the common ones I’ve seen over the years.

If you are using a Dependency Injection Framework (my favorite is Ninject), some of the cons of these approaches may be mitigated by the Framework. This may change the equation with respect to which method is appropriate for your use-case.

Odin-Commands 0.2.1 Released: Before & After Execute Hooks

Today I released Odin-Commands 0.2.1 on nuget.

What’s New?

I was writing a CLI command when I realized it would be nice to be able to set default values for the Common Parameters on the command prior to executing the action.
The difficulty is that some of the default parameter values are composed from other parameter values, but all of them are settable by the user.
To achieve this goal I added overridable OnBeforeExecute and OnAfterExecute methods to the Command class.

How do I use it?

public class MyCommand : Command
{
    protected override void OnBeforeExecute(MethodInvocation invocation)
    {
        ApplyDefaultValues(); // or do other stuff prior to executing the invocation.
    }

    protected override int OnAfterExecute(MethodInvocation invocation, int result)
    {
        // you can return a different exit code if you need to.
        return base.OnAfterExecute(invocation, result);
    }
}

Odin 0.2 Released With a New Feature: Common Parameters

What are Common Parameters?

Common parameters are parameters that are repeated across multiple actions in a CLI context. For example, I might have a CLI that takes a --verbose flag to switch on verbose output for all of my actions. Rather than require that the CLI developer add a bool verbose switch to every action in his/her Command implementation, Odin allows you to declare a property on the Command implementation as a [Parameter].


[Description("Provides search capabilities for books.")]
public class SearchCommand : Command
{
    [Parameter]
    [Description("The order to sort the results of the search.")]
    public SortBooksBy SortBy { get; set; }

    [Description("Searches books by author.")]
    public void Author(
        [Description("The author of the book being searched for.")]
        string author)
    {
        Logger.Info("Find books by author '{0}'; sort results by '{1}'.\n", author, SortBy);
    }

    [Action(IsDefault = true)]
    [Description("Searches books by title.")]
    public void Title(
        [Description("The title of the book being searched for.")]
        string title)
    {
        Logger.Info("Find books by title '{0}'; sort results by '{1}'.\n", title, SortBy);
    }

    [Description("Searches books by release date.")]
    public void ReleaseDate(
        [Description("The release date from which to search.")]
        DateTime? from = null,

        [Description("The release date to which to search.")]
        DateTime? to = null)
    {
        Logger.Info("Find books by release date: '{0}' - '{1}'; sort results by '{2}'.\n", from, to, SortBy);
    }
}

In the above example, SortBy is available to all of the actions defined on the Command. It can be set at the command-line by passing the --sort-by <value> switch. Odin will parse and set the switch before executing the action.
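For example, assuming the compiled executable is named search.exe (an illustrative name, as are the SortBy values), any action can carry the common parameter:

```
search.exe author "Octavia Butler" --sort-by Title
search.exe release-date --from 2001-01-01 --sort-by ReleaseDate
search.exe "Kindred" --sort-by Author    # Title is the default action
```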

This functionality is now available in version 0.2 of the Odin-Commands nuget package.

TeamCity & “Works on My Machine”

I love TeamCity as a build/CI tool. I really do. However, from time to time when I go searching their bug-tracking system, I find that I’m in the midst of reporting a bug they had previously fixed. I’ve seen this more than once, so it led me to wonder aloud on twitter whether the TeamCity developers practiced TDD.

I followed the link to their blog post. In their discussion of how they do Continuous Delivery within their organization I would draw your attention to step 2:

While the build is running, two developers from the team (Duty Devs) review all commits made by the team during the day and if they both agree commits won’t break critical parts of the application, such as areas responsible for running and scheduling builds and build agent auto-upgrade, they put a special tag on the build marking it as “safe for deployment”.

emphasis added

Now let me draw your attention to Jeff Atwood’s wonderfully snarky blog post, Works on My Machine. Step 3:

Cause one code path in the code you’re checking in to be executed. The preferred way to do this is with ad-hoc manual testing of the simplest possible case for the feature in question. Omit this step if the code change was less than five lines, or if, in the developer’s professional opinion, the code change could not possibly result in an error.

emphasis added

Again, I love TeamCity. I think it’s a great product. I think its build-chains feature makes it far and away a better CI tool than any of the alternatives–especially in an enterprise-y SOA context. It is not my intention to shame them. Still, the producer of a tool widely used to perform the Continuous Integration side of TDD does not itself practice TDD.


Announcing Odin-Commands 0.1.0

In the .NET space there are a number of good libraries to handle run-of-the-mill command line argument parsing.
My current favorite is a nuget package called simply CommandLineParser.
So why write a new one?

In short, current .NET command-line libraries focus only on parsing the args.
You are left responsible for interpreting the args to call the right methods with the correct arguments.
What if CLIs could be as easy to structure and execute as ASP .NET MVC applications?

Try it out!

Install-Package Odin-Commands


Feedback Welcome

Inspired By Thor

I’ve done some work with Ruby over the last couple of years, and I was really impressed with the feature set offered by a ruby project called thor.
In addition to a declarative approach to binding options to actions and arguments, thor supports the concept of subcommands.
We’ve seen subcommands used to great effect in command-line programs such as git and nuget, but current command line parser packages
offer little help with this feature.

Inspired by Convention Over Configuration

In ASP .NET MVC, urls are routed to the appropriate controller and action by convention. http://mysite.domain/Home/Index is understood to route to a controller called “Home” and invoke a method called “Index.”
In addition, little manual wiring is required because ASP .NET MVC can discover and instantiate the controller easily at runtime.
I wondered if it would be possible to use a combination of reflection and convention to create a command-line application in C#.

Show Me The Code

Setup Code

Write a class that inherits Command. You can also register SubCommands.
Add methods with [Action] attributes to indicate that they are executable.

public class RootCommand : Command
{
    public RootCommand() : this(new KatasCommand())
    {
    }

    public RootCommand(KatasCommand katas)
    {
        base.RegisterSubCommand(katas);
    }

    [Action]
    [Description("The proverbial 'hello world' application.")]
    public int Hello(
        [Description("Override who to say hello to. Defaults to 'World'.")]
        [Alias("w")]
        string who = "World")
    {
        this.Logger.Info($"Hello {who}!\n");
        return 0;
    }

    [Action]
    [Description("Display the current time")]
    public void Time(
        [Description("The format of the time. (default) hh:mm:ss tt")]
        [Alias("f")]
        string format = "hh:mm:ss tt")
    {
        this.Logger.Info($"The time is {DateTime.Now.ToLocalTime().ToString(format)}\n");
    }
}

[Description("Provides some katas.")]
public class KatasCommand : Command
{
    [Action(IsDefault = true)]
    public int FizzBuzz(
        [Alias("i")]
        int input)
    {
        FizzBuzzGame.Play(this.Logger, input);
        return 0;
    }

    [Action]
    public int PrimeFactors(
        [Alias("i")]
        int input)
    {
        var result = PrimeFactorGenerator.Generate(input);
        var output = string.Join(" ", result.Select(row => row.ToString()));
        this.Logger.Info($"{output}\n");
        return 0;
    }
}

To execute the program, just call Execute(args).

The Program

class Program
{
    static void Main(string[] args)
    {
        var root = new RootCommand(new KatasCommand());
        var result = root.Execute(args);
    }
}

What Do I Get For My Trouble?

You get a command line executable that can be invoked like so:

exe hello --who "world"              # explicit invocation
exe hello -w "world"                 # argument alias
exe hello "world"                    # implicit argument by order
exe katas fizz-buzz --input 11       # explicit subcommand invocation
exe katas --input 11                 # subcommand + default action
exe katas -i 11                      # subcommand + default action + argument alias
exe katas 11                         # subcommand + default action + implicit argument by order
exe katas prime-factors --input 27   # subcommand + non-default action + explicit argument

A powershell module for managing your ruby environments on Windows.

I’ve been working with Ruby in a Windows environment for a little over a year now. I’m sad to say that community support for Windows developers is lackluster. We are second-class citizens.

The most frustrating example of this is the lack of decent ruby version switchers. rvm doesn’t install on Windows at all. Ditto for rbenv. uru is a valiant attempt, but it is cumbersome to install and its API is less than intuitive.

The Need

This wouldn’t be much of an issue if ruby installations were backwards compatible, but that is not the case. Even minor version releases of Ruby can introduce breaking changes that ruin your execution environment.

For development purposes, it’s a good idea to install the new ruby version, switch your ruby environment, then run all your tests on all your projects to verify compatibility. If you need to roll back, just switch your ruby environment and everything is good.

The Strategy

As I started digging into how tools like rvm and rbenv work, I was surprised at the difficulty of reimplementing them on Windows. Aside from the installation features (e.g. rvm install ruby-version), ruby version management is basically just editing the PATH variable. In other words, the great barrier, the monumental technical challenge that prevents anyone from developing an easy-to-install, easy-to-use ruby version switcher is: string manipulation.

The Requirements

A ruby version switcher needs to know the location of installed rubies. It needs to be able to alter the PATH for the current session such that the desired ruby is the one being used.


It is not the norm for Windows developers to think about altering their terminal session. It is the norm that alterations to the PATH are permanent. Reorienting our thinking around editing our Session rather than our Environment has many benefits which I won’t go into here–except to say that it makes the issue of version switching much simpler.
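The heart of session-scoped switching really is just string manipulation. Here is a minimal sketch (not the module’s actual implementation; the function name and install path are illustrative):

```powershell
# Prepend a ruby's bin directory to the PATH for this session only.
# $env:PATH lives and dies with the terminal session; the permanent,
# registry-backed environment variable is left untouched.
function Use-RubyBin([string] $rubyBinDir) {
    $env:PATH = "$rubyBinDir;$env:PATH"
}

Use-RubyBin "C:\tools\ruby-2.2\bin"
ruby --version  # now resolves to the ruby-2.2 binary
```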

My Solution

I wrote a powershell module called psundle-ruby (compatible with psundle) to discover and manage ruby versions. If you have not looked at psundle, I encourage you to do so as it makes installation of this module as simple as:

Install-PsundleModule 'crmckenzie' 'psundle-ruby'

If you have already installed rubies in your Windows environment, you can execute Register-RubiesInPath to make Psundle aware of them. Otherwise you can invoke Register-Ruby for each ruby location not found in your PATH.

Invoke Use-Ruby to switch to the desired version. The argument to Use-Ruby is an expression. The command switches to the first ruby it finds that matches the expression.

For example, if I have the following rubies installed on my machine:
* ruby-1.9.3
* ruby-2
* ruby-2.1
* ruby-2.2

I can invoke Use-Ruby "1" to switch to ruby-1.9.3. Invoking Use-Ruby "2" will switch me to ruby-2.

Invoke Set-DefaultRuby to permanently alter your PATH variable to automatically select the chosen ruby.

Commitment to Maintain

I commit to maintaining this powershell module through the end of 2016. If you find issues, please report them or (even better) submit a pull request. I will reevaluate my commitment at the end of 2016 based on the level of interest in and usage of this module.

Announcing Psundle: A Vundle-like Module Manager for Powershell

Vim & Vundle

I finally bit the bullet and learned to use Vim competently. One of the things I was really impressed by in the Vim space is a plugin called Vundle. Vundle is a package manager for Vim plugins. At first, swimming in a sea of package managers, I was loath to learn another one–but Vundle is extremely simple. It uses github as a package repository. “Installing” a package is basically as simple as running a git clone to the right location. Updating the package is a git pull. Security is managed by github. Genius.


As a developer on Windows I find Powershell to be an extremely useful tool, especially when running in an excellent terminal emulator such as ConEmu. One of the problems that Powershell has is that there is no good way to install modules. The closest thing is PsGet.

What’s wrong with PsGet?

Nothing. PsGet is great. However, not every powershell module can be made public, and not every powershell module developer goes through the process of registering their modules at PsGet.

Introducing Psundle

I thought to myself, “Hell, if Vundle can install modules directly from github, I should be able to implement something similar in Powershell” and Psundle was born.

Psundle is a package manager for Powershell modules, written in Powershell. Its only dependency is that git is available in the PATH.


Psundle is an alpha-quality product. It works, but API details may change. It will improve if you use it and submit your issues and/or Pull Requests through github.


You can install Psundle by running the following script in powershell:

iex ((new-object net.webclient).DownloadString(''))

In your powershell profile, make sure you Import-Module Psundle. Your powershell profile is located at


I advise that you don’t just run some dude’s script you found on the internet. Review the script first (it’s easy to understand). Please, please, please report any installation errors with the self-installer.

Installing Powershell Modules With Psundle

Install-PsundleModule "owner" "repo"

For example, if you want to install the module I wrote for managing ruby versions on Windows, you would run:

Install-PsundleModule "crmckenzie" "psundle-ruby"

What does this accomplish?

As long as you have imported the Psundle module in your profile, Psundle will automatically load any modules it manages into your powershell session.

Other Features


Executing Show-PsundleEnvironment gives output like this:

Module   Path                                                  Updates                                        HasUpdates
-------  ----------------------------------------------------  ---------------------------------------------  ----------
Psundle  C:\Users\Username\Documents\WindowsPowerShell\Modu…   {dbed58a Updating readme to resolve insta…     True
ChefDk   C:\Users\Username\Documents\WindowsPowerShell\Modu…                                                  False
Ruby     C:\Users\Username\Documents\WindowsPowerShell\Modu…                                                  False
VSCX     C:\Users\Username\Documents\WindowsPowerShell\Modu…                                                  False


I can update a module by running:

Update-PsundleModule "owner" "repo"

If I’m feeling brave, I can also Update-PsundleModules to update everything in one step.

Requirements For Installed Modules

Because Psundle ultimately just uses git clone to install powershell modules, Powershell modules in github need to be in the same structure that would be installed on disk.

Primarily, this means that the psm1 and psd1 files for the module should be in the repo root.
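As a sketch, a repo for a hypothetical module named MyModule would be laid out like this (repo and file names are illustrative):

```
psundle-mymodule/        # the github repo root
├── MyModule.psd1        # module manifest, at the repo root
└── MyModule.psm1        # module implementation, at the repo root
```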

Commitment to Develop

I’m making a blind commitment to maintain this module through the end of 2016. “Maintenance” means I will answer issues and respond to pull requests for at least that length of time.

Whether I continue maintaining the module depends on whether or not people use it.

Chef, Windows, & Docker

At Redacted Industries we use Chef to deploy our applications to various environments. An environment for us is a complete set of servers and applications configured to talk to one another. Environments are designed to mirror prod as much as possible.

The majority of our applications are written in C# and target the Windows operating system. Accordingly, developers are assigned a windows VDI and given access to a spate of tools for Windows-based development.

Our DevOps group on the other hand primarily works in Chef & Ruby. Their standard-issue hardware is a Macbook Pro.

Ruby on Windows

Ruby is less than awesome on Windows. There are a host of issues, but the main problem is that gem does not want to install binaries to the host OS. Rather, gems that require C-compilation are built from source when they are installed on the target OS. Gem developers do not always test their C-compilation on both Linux & Windows so Windows compilation is often neglected.

The community attitude toward this problem tends toward “Show me the PR!” This is a typical attitude in open-source, but few modern developers have the stamina to master the vagaries of C-compilers so the reality is these sorts of problems are seldom touched.

Despite these problems, I am able to develop Ruby applications on Windows with relative ease. It takes some time and effort to learn where the dragons are and slay them, but it can be done.

ChefDK & Embedded Rubies

Ruby devs often want to build gems against different versions of Ruby. Controlling which version of ruby you’re using at any one time is a challenge. There are tools such as rvm & rbenv to help, but the tools are not awesome. To further complicate matters, OS X comes with its own embedded Ruby, as does ChefDK.

It is a challenge to keep straight which code is supposed to be installed in and run in the context of which version of Ruby, especially since Chef can be used to install versions of Ruby different from the one it is running under. Further, the ChefDK is not designed to play nice with other Rubies. In discussions with the developers at OpsCode, they say that the ChefDK is designed for people who are not going to be developing Ruby applications in any environment other than Chef. It becomes problematic when the Chef docs tell you to install certain gems and you end up installing them into the wrong Ruby. Gah!


If you haven’t read about Docker yet, stop reading this blog and go read about Docker. Docker lets you create lightweight VMs known as containers. A container isn’t really a VM–it’s a process. I think of it as a process that thinks it’s a VM.

What if we could create a docker container pre-configured with the ChefDK such that the Chef tools are deployed correctly in a way that is isolated from my other Rubies? Ideally, I’d be able to point the ChefDK container to my local source files on Windows. I can still be on the network, have access to email and company chat, use my favorite text editors–but when I need chef commands, I can duck into the container context long enough to do what I need to do there and get out.

Sounds awesome!

The DockerFile

A Dockerfile is a description of an image that you wish to build. Here is a sample:

FROM ubuntu
MAINTAINER Chris McKenzie <>

RUN apt-get update
RUN apt-get install -y curl git build-essential libxml2-dev libxslt-dev wget lsb-release

# RUN curl -L | sudo bash
RUN wget
RUN dpkg -i chefdk_0.6.2-1_amd64.deb && rm chefdk_0.6.2-1_amd64.deb

RUN chef verify

RUN apt-get autoremove
RUN apt-get clean
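One aside on the Dockerfile above: each RUN instruction creates a separate image layer, so running apt-get autoremove and apt-get clean in their own RUN steps doesn’t actually shrink the final image–the files are already baked into the earlier layers. A sketch of the install step with the cleanup chained into the same layer (same packages as above):

```dockerfile
RUN apt-get update && \
    apt-get install -y curl git build-essential libxml2-dev libxslt-dev wget lsb-release && \
    apt-get autoremove -y && \
    apt-get clean
```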

First, you use the Dockerfile to build the image.

docker build -t chef-workstation .

The Powershell Script

To make the Docker image usable, I need to create containers from it. Containers are instances of an image that you can use. Containers are disposable. To that end, let’s write some PowerShell to wrap the complex docker commands into something I can call easily.

  $username = $env:UserName

  function Invoke-Knife() {
    $cmd = "docker run --entrypoint=knife --rm -w='/root/src' -v /c/Users/$username/.chef:/root/.chef -v /c/Users/$username/src:/root/src -it chef-workstation $args"
    write-debug $cmd
    Invoke-Expression $cmd
  }

  function Invoke-Chef() {
    $cmd = "docker run --entrypoint=chef --rm -w='/root/src' -v /c/Users/$username/.chef:/root/.chef -v /c/Users/$username/src:/root/src -it chef-workstation $args"
    write-debug $cmd
    Invoke-Expression $cmd
  }

  set-alias knife Invoke-Knife
  set-alias chef Invoke-Chef

This script defines two functions, Invoke-Knife and Invoke-Chef, which differ only in their entrypoint. Let’s break the docker command down step by step.

  • docker run
This command runs a container.
  • --entrypoint=knife
Tells Docker to execute 'knife' automatically when the container is created.
  • --rm
Tells Docker to remove the container after its process stops.
  • -w='/root/src'
Tells Docker to run 'knife' in the working directory '/root/src'.
  • -v /c/Users/$username/.chef:/root/.chef
Tells Docker to share the .chef directory from the Host OS to '/root/.chef'.
  • -v /c/Users/$username/src:/root/src
Tells Docker to share the src directory from the Host OS to '/root/src'.
  • -it chef-workstation
Tells Docker to allocate a tty for the container process and create the container from the 'chef-workstation' image.
  • $args
This is the PowerShell variable containing the arguments passed to Invoke-Knife. These arguments are simply forwarded to knife when the container is executed.
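For comparison, if your host is Mac or Linux rather than Windows, the same wrapper idea can be sketched as a shell function. This is my own sketch, not from the PowerShell script above; it assumes the same 'chef-workstation' image and that your source and .chef directories live under your home directory:

```shell
# Forward all arguments into a throwaway container, mirroring Invoke-Knife.
# Assumes the 'chef-workstation' image built earlier and ~/.chef and ~/src on the host.
knife() {
  docker run --entrypoint=knife --rm -w='/root/src' \
    -v "$HOME/.chef:/root/.chef" \
    -v "$HOME/src:/root/src" \
    -it chef-workstation "$@"
}
```

"$@" plays the same role here that $args plays in the PowerShell version: whatever you pass to the function is handed straight to knife inside the container.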

What Happens Next?

When I execute ‘knife search “:“‘, ‘knife’ is an alias that executes Invoke-Knife, which starts the container and passes ‘search “:“‘ to the knife executable in the container. As soon as the ‘knife’ process finishes its work and emits its results, the container is shut down and deleted. If I execute ‘chef generate cookbook fredbob’, a similar process happens, except that the ‘chef’ executable creates a cookbook in ‘/root/src’ on the container–which is mapped to my source directory on Windows. Both executables use the chef credentials I’ve defined in my .chef directory on my Host OS.


I’m putting this out there because I’ve found it helpful; however, there may be simpler, better ways of doing things. I’m open to comments and suggestions for other ways to resolve any of these issues, or for more interesting ways to use Docker.

Of Purple Squirrels – How to Work with an Agency Recruiter


We all get them, those emails from recruiters in our inbox breathlessly telling us about some new opportunity somewhere. If you’re like me, you just delete most of them unread. In my case they’re usually for other cities, or from my old hometown 3000 miles away. This tells me that the recruiter in question hasn’t read my updated profile. This is what we all hate about recruiters… until we need them. This love-hate relationship with recruiters gets even more complicated when you are a hiring manager.

I married a recruiter. My wife places Accounting & Finance professionals for a recruiting firm in downtown Seattle. It’s interesting hearing about her work. Listening to her gave me a whole new appreciation for what she’s up against.

First, all you need to enter the profession is a phone and a computer. The barrier to entry for recruiters is very low. Many recruiters begin and end their careers in a matter of months. It’s important to understand this because many of the junk mails you receive are likely from amateur recruiters who have no other tools at their disposal than an email blast and buzzword bingo. If you paint all recruiters with the same brush, you risk missing out on some great partnerships both as a candidate and as a client.

A good recruiter is more than just a resume service. To work with recruiters well, you should understand the problems they face. Pro tip–many of the problems they face are you.


For most of us in tech, hiring is an annoying chore that distracts us from our real work. If this describes you, change your attitude. Hiring is the single most important way you will impact your company’s culture. No other single decision you will make has broader reach. Make hiring the most important thing in your list of things to do.

Ask yourself, what makes hiring such a chore? For me it was the endless stream of annoyingly similar resumes and blah candidates on the phone. I decided to arm my recruiter with tools to slow down the rate at which they send me resumes. “Unless the candidate says something like x in response to question y, I don’t want to talk to them.”


Hiring managers tend to hate talking to recruiters. Recruiters always seem to have another resume in their back pocket. Resumes are boring and don’t really tell you much about a candidate anyway (except that one I got that was a photocopy of a coffee-stained piece of paper–I learned everything I needed to know about that candidate!). Recruiters aren’t any more psychic than you are when it comes to resumes and phone screens. The best they can do is integrate your feedback into their process moving forward. In order to do that, they have to actually get your feedback.

There are some things you can do to make sure they get the information they need.

  • Don’t hide behind HR. HR departments are great for running background checks and whatnot, but they don’t necessarily know you, your department, or your culture. Include HR on the search details, but if you have an HR department that likes to run the hiring process for you, insist on taking control yourself. At the very least you should be involved in every step of the hiring process.
  • Commit to a 24-hour response time on hiring events. This includes (but is not limited to) emails, phone screens, in-person interviews, texts, etc. If you talk to a candidate and fail to deliver feedback to the recruiter, you are telling the recruiter that your hire is not a high priority. If it’s not a high priority to you, why should it be a high priority to them?
  • Arrange a weekly or even semi-weekly checkin call to see how things are going. If you’re doing Scrum you already have daily checkins. Treat hiring just like any other project. When you hire a recruiter, you are partnering with them to find a good fit for your open position. Treat them like a member of your team. Find out what’s going well and what’s not. If they need something from you to be more effective and it’s reasonable, give it to them.

Purple Squirrels

Third party recruiters often work solely on commission. They often receive searches that they call “purple squirrels.” A purple squirrel is a difficult-to-fill requirement. When you’re paid on commission, you want to work searches that are easy to accomplish. Purple Squirrels are the opposite of that. They take more work to understand the client and the culture. They take more work to identify matching candidates. Since the likelihood of finding a matching candidate is lower, purple squirrels get lower priority on your recruiter’s desk.

If you have a Purple Squirrel and you want them to treat you like a priority anyway, you’ll need to sweeten the deal:

  • Tell your recruiter that you have a purple squirrel (use the term–they know it and will appreciate that you know it).
  • Commit to giving your recruiter access to the hiring manager
  • Reduce or eliminate their competition. If you trust your recruiter, give them the exclusive search. It might seem to you like you’re better off having an army of search firms trying to find your perfect candidate, but the reality is you’re simply reducing their motivation to work for you.
  • If you can’t commit to an exclusive, give your favorite recruiter a head start of 2, 4, or 6 weeks.


It bears repeating–giving your recruiter an exclusive on your search is the best way to get them motivated to find your purple squirrel. If you’re not comfortable with an exclusive, at least give them a good head start. I’ve told my primary recruiter that as long as I have high quality candidate flow the search is his exclusively. I’ve also been very clear that I prefer quality to quantity.

Clear Acceptance Criteria

If there are criteria that rule a candidate out for you, be up front with your recruiter about that. If you can’t clearly define who you’re looking for, how do you expect your recruiter to? Give them open-ended questions they can ask candidates before sending them to you. Tell your recruiter the kinds of answers you would like to see. Tell your recruiter what kinds of answers would mean you don’t want to talk to the candidate. I asked my recruiters to tell me what candidates found interesting or exciting about our job description. I told them what was important to me about our job description. Paying attention to clear acceptance criteria will help the recruiter filter candidates for the ones you actually want to talk to.

As a hiring manager you should have some idea of the skills and interests you are hiring for. If you tell your recruiter you want a “rock star” but don’t offer any clarification you shouldn’t be surprised if your candidate shows up to the interview both late and high. It’s fine to refine your requirements as you interview more and gain better insight into who works and who doesn’t. However, if you’re making radical shifts in your hiring requirements between candidates, your recruiter has no idea how to work for you.

When you’re hiring, don’t just think about the work you want done–think about who wants to do that work. Often it’s not somebody who’s already been doing it for years, but someone who wants to learn it. Some things you can teach and some things you can’t. Be clear-headed about what you’re willing to teach and what skills and knowledge are an actual requirement.


If you don’t trust your recruiter, you should find a new recruiter. If you don’t trust any recruiter, then perhaps the problem is you, no? Some people are afraid to tell the recruiter what they’re looking for for fear that the recruiter will coach the candidate to answer certain questions in certain ways. I’m sure that there are some unscrupulous recruiters out there who would do that, but it’s short-sighted. Most contracts with recruiters are written so that if there’s a “fall-off” (i.e., the candidate leaves or is fired) within a certain time-frame, then the recruiter is on the hook for a free replacement. Search firms will often even claw back the recruiter’s commission for the original placement after a fall-off. Recruiters are not financially rewarded for placing bad people in your department.

My Experience

Prior to implementing these ideas in my hiring practices I hated hiring. It was a chore. My email was always full of new resumes to review, more phone screens to schedule, and more on-sites to waste my time on. Phone screens often yielded candidates who seemed surprised by questions like “What sorts of things do you do to keep your skills up-to-date?” On-sites showed that even “experienced” candidates couldn’t solve simple algorithmic problems at the keyboard. Eventually, after enough time wasted by myself as well as my team-mates, we would find someone we wanted to hire. I now think of this as the “brute force search” method of hiring. As developers we know that this is inefficient.

After implementing these ideas, my inbox dried up considerably. Instead of 10-25 resumes in my inbox every week, I would get 1 or 2. When I talked to the candidates on the phone, I almost always wanted to bring them in for an on-site. The on-site interviews have almost all been positive. We went from a department that made offers to candidates 5% of the time to a department that makes offers 80% of the time. In short, my recruiter now does most of the initial filtering for me. Sometimes our check-in calls consist of him telling me about the candidates he chose not to submit to me. When he wants to send someone over, I’ve learned to trust that I should talk to them. Because I did a good job telling my recruiter who we’re looking for, we now get candidates who want to work in an environment like ours. We have candidates self-select out because we do a good job of describing our environment up front. All in all, our search is now targeted, which means it takes less of my time on a day-to-day basis to find quality candidates. I win.
