Category Archives: Practices
My Favorite Tools

In the last year I’ve discovered some tools that I’ve really enjoyed using. Nearly all of these tools are free, and most of them have been used in production projects.

Here are my favorites:

Desktop

Remote Desktop Connection Manager: Provides a UI to manage your Remote Desktop connections. Especially useful when working in a shop where the server names are meaningless, though I’m sure no one does that any more. Right? Right?

PowerShell: I’ve been meaning to learn PowerShell for a long time, and I’ve been lucky enough to land in a shop that uses it extensively. It’s well worth learning. I’m constantly looking for ways to automate repetitive tasks.
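
As a trivial, hypothetical example of the kind of repetitive task worth scripting (the .\logs folder and naming scheme are my own invention):

```powershell
# Date-stamp every .log file in a folder, e.g. app.log -> 20110115-app.log.
Get-ChildItem -Path .\logs -Filter *.log |
    ForEach-Object {
        $stamped = "{0:yyyyMMdd}-{1}" -f (Get-Date), $_.Name
        Rename-Item -Path $_.FullName -NewName $stamped
    }
```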

Libraries

Simple.Data.SqlServer: This micro-ORM is very easy to set up and get running. It uses dynamic, so any code that uses it should be encapsulated and tested heavily. It’s extremely lightweight compared to NHibernate and Entity Framework (even the Code-First variety), and the built-in mapping code handles the most common conventions.
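
As a rough sketch of how little ceremony is involved (the connection string, the Users table, and the User class here are all hypothetical):

```csharp
using System.Collections.Generic;
using Simple.Data;

// Open a connection; everything else is resolved at runtime via dynamic.
var db = Database.OpenConnection(connectionString);

// "Users" and "FindAllByName" are not compiled members; Simple.Data maps
// them to the Users table and a WHERE Name = 'Bob' query at runtime.
List<User> users = db.Users.FindAllByName("Bob").ToList<User>();
```

Because those member accesses are dynamic, a typo won’t surface until runtime — hence the advice above to encapsulate and test this code heavily.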

MVVM Light: A lightweight framework for MVVM applications. The highlight of this lib for me is the Messenger class. On the down side, I wish it had an async Messenger implementation. Sigh. Kellabyte’s article on MVVM and the ServiceLocator anti-pattern is a good companion read to using this library.
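
A minimal sketch of the Messenger pattern (StatusMessage and StatusViewModel are hypothetical types invented for this example):

```csharp
using GalaSoft.MvvmLight.Messaging;

// A hypothetical message type. Any class can serve as a message.
public class StatusMessage
{
    public string Text { get; set; }
}

public class StatusViewModel
{
    public string StatusText { get; private set; }

    public StatusViewModel()
    {
        // Register for messages; no direct reference to the sender is needed.
        Messenger.Default.Register<StatusMessage>(this, msg => StatusText = msg.Text);
    }
}

// Elsewhere, any component can publish without knowing who is listening:
// Messenger.Default.Send(new StatusMessage { Text = "Order saved." });
```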

SpecFlow: Write test cases in plain English and run them as part of an automated build process? Check! I’ve recently used it to document expected functionality for third-party developers.
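
For illustration, a SpecFlow feature file reads like this (the scenario and step wording are my own invention):

```gherkin
Feature: Shopping Cart
  Scenario: Adding the same product twice updates the existing detail
    Given an empty shopping cart
    When I add 5 units of product "ABC"
    And I add 3 units of product "ABC"
    Then the cart should contain 1 order detail
    And the detail for product "ABC" should have a quantity of 8
```

Each step is then bound to a C# method via [Given]/[When]/[Then] attributes, which is what lets the build run these English sentences as automated tests.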

NBuilder: This tool lets you stand up test POCOs extremely fast. It provides a fluent API for building objects and lists of objects. It auto-fills top-level property values for value types and strings.
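
A quick sketch of the fluent API (Product is a hypothetical POCO with Sku, Description, and Price properties):

```csharp
using FizzWare.NBuilder;

// Build a list of 10 products with auto-filled property values.
var products = Builder<Product>.CreateListOfSize(10).Build();

// Override individual properties where the test needs specific values.
var product = Builder<Product>
    .CreateNew()
    .With(p => p.Sku = "ABC")
    .Build();
```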

Ninject: This is an extremely easy-to-use Dependency Injection tool. We were using Castle, but it’s hard to upgrade because NHibernate is dependent on it. Also, the Ninject API is a bit more discoverable, and it supports rebinding a dependency. This is useful in unit test scenarios in which you may want to bootstrap the system’s dependencies but replace some of them with mocks.
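
The rebinding scenario looks roughly like this (IOrderRepository, SqlOrderRepository, OrderService, and mockRepository are hypothetical names for this sketch):

```csharp
using Ninject;

// Bootstrap the system's real bindings.
var kernel = new StandardKernel();
kernel.Bind<IOrderRepository>().To<SqlOrderRepository>();

// In a unit test, swap a single binding for a mock without
// rebuilding the rest of the container configuration.
kernel.Rebind<IOrderRepository>().ToConstant(mockRepository);
var service = kernel.Get<OrderService>();
```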

NSubstitute: This mocking framework has a beautiful API, but I have not yet used it in a production app. I’d love to hear from someone who has! Mocking syntax for any non-trivial scenarios is usually very ugly.
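
Here is the kind of syntax that makes the API attractive (ICalculator is a hypothetical interface for this sketch):

```csharp
using NSubstitute;

// Create a substitute for any interface.
var calculator = Substitute.For<ICalculator>();

// Stub a return value...
calculator.Add(1, 2).Returns(3);

// ...and later verify the call was received.
calculator.Received().Add(1, 2);
```

Compare this to the lambda-and-setup ceremony most mocking frameworks require, and you can see why the API reads so cleanly.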

Team

NuGet: NuGet is the easiest way to manage dependencies in .NET projects. It’s integrated into Visual Studio 2010.

NuGet TeamCity Plugin: With this plugin you can stop storing binaries in source control. You can install packages as part of your TeamCity build, and create and publish packages of your own.

NuGet Gallery: With a little setup effort you can host your own private NuGet feed behind your corporate firewall. Turn your own internal libraries into NuGet packages for easy dependency management!

Confluence: This wiki tool is the easiest to manage I’ve ever seen. It has a great WYSIWYG editor for text and table content, and it can render Gliffy diagrams, UI mockups, and workflows. This is not a free tool.

Installing all NuGet Packages in the Solution

The benefits of using NuGet to manage your project dependencies should be well-known by now.

If not, read this.

There’s one area where NuGet is still somewhat deficient: Team Workflow.

Ideally, when you begin work on a new solution, you should follow these steps:

1) Get the latest version of the source code.

2) Run a command to install any dependent packages.

3) Build!

It’s step 2 in this process that causes some trouble. NuGet does offer a command to install packages for a single project, but the developer is required to run this command for each project that has package references. It would be nice if NuGet would install all packages for all projects using packages\repositories.config, but at the time of this writing it will not. However, it is fairly easy to add a PowerShell function to the NuGet Package Manager Console that will do this for you.

First, you should download the NuGet executable and add its directory to your PATH environment variable. I placed mine in C:\dev\utils\NuGet\<version>. You’ll need to be able to access the executable from the command line.

Second, you need to identify the expected location of the NuGet PowerShell profile. You can do this by launching Visual Studio, opening the Package Manager Console, typing $profile, and pressing Enter. The console will show you the expected profile path. In Windows 7 it will probably be: “C:\Users\<user>\Documents\WindowsPowerShell\NuGet_profile.ps1”

Next you need to create a file with that name in that directory.

Next, paste the following PowerShell code into the file:

function Install-NuGetPackagesForSolution()
{
    write-host "Installing packages for the following projects:"
    $projects = get-project -all
    $projects
   
    foreach($project in $projects)
    {
        Install-NuGetPackagesForProject($project)
    }

}

function Install-NuGetPackagesForProject($project)
{
    if ($project -eq $null)
    {
        write-host "Project is required."
        return
    }

    $projectName = $project.ProjectName;
    write-host "Checking packages for project $projectName"
    write-host ""
   
    $projectPath = $project.FullName
    $projectPath
   
    $dir = [System.IO.Path]::GetDirectoryName($projectPath)
    $packagesConfigFileName = [System.IO.Path]::Combine($dir, "packages.config")
    $hasPackagesConfig = [System.IO.File]::Exists($packagesConfigFileName)
   
    if ($hasPackagesConfig -eq $true)
    {
        write-host "Installing packages for $projectName using $packagesConfigFileName."
        nuget install $packagesConfigFileName -o ./packages
    }

}

Restart Visual Studio 2010. After the package manager console loads, you should be able to run the Install-NuGetPackagesForSolution command. The command will iterate over each of your projects, check to see if the project contains a packages.config file, and if so run NuGet install against the packages.config.

You may also wish to do this from the PowerShell console outside of Visual Studio. If you are in the solution root directory, you can run the following two commands:

$files = gci -Filter packages.config -Recurse
$files | ForEach-Object { nuget install $_.FullName -o .\packages }

Refactoring Q & A: When Should I Refactor?

Answer: When you have to spend too long figuring out what the code does.

Consider the following partial implementation of a ShoppingCart class.

public class ShoppingCart
{
    public ShoppingCart()
    {
        this._readOnlyDetails = new ReadOnlyCollection<OrderDetail>(_details);
    }

    private readonly List<OrderDetail> _details = new List<OrderDetail>();
    private readonly ReadOnlyCollection<OrderDetail> _readOnlyDetails;
    public ReadOnlyCollection<OrderDetail> Details
    {
        get
        {
            return _readOnlyDetails;
        }
    }
}

Our ShoppingCart class is simply hiding the internal storage mechanism of Details so that clients of ShoppingCart cannot add or remove products directly. For this reason, ShoppingCart must provide its own methods for modifying the OrderDetails collection. In this post I implemented this method and applied some heavy refactoring. Don’t read the implementation just yet.

Consider the requirements: When adding a product to the shopping cart, I must provide an OrderDetail that indicates the quantity of that product that has been ordered. Subsequent additions of the same product to the shopping cart should not result in additional order details. Rather, the existing OrderDetail should reflect the sum of all quantities of the product being ordered.

Imagine that you open the source file to see how this is done, but you have all the outlining collapsed so that all you see is method signatures. Imagine that these are the method signatures you see:

public void AddProduct(Product product, int quantity)
private OrderDetail FindOrCreateDetailForProduct(Product product)
private OrderDetail CreateDetailForProduct(Product product)
private OrderDetail FindDetailByProductSku(string productSku)
private bool HasDetailForProductSku(string productSku)

Just read each of these method signatures. Each method states clearly exactly what it is going to do. Now, for each of the method signatures, imagine that you were going to expand the outlining so you can see the method body. What code would you expect to find in each one? Seriously, pause for a moment and think about what you would expect to find in each method body. Got it? Okay, now read the actual implementations of these methods.

public void AddProduct(Product product, int quantity)
{
    var detail = FindOrCreateDetailForProduct(product);
    detail.Quantity += quantity;
}

private OrderDetail FindOrCreateDetailForProduct(Product product)
{
    var detail = HasDetailForProductSku(product.Sku) ?
        FindDetailByProductSku(product.Sku) :
        CreateDetailForProduct(product);
    return detail;
}

private OrderDetail CreateDetailForProduct(Product product)
{
    var detail = new OrderDetail() { Product = product };
    this._details.Add(detail);
    return detail;
}

private OrderDetail FindDetailByProductSku(string productSku)
{
    var detail = this.Details.Single(d => d.Product.Sku == productSku);
    return detail;
}

private bool HasDetailForProductSku(string productSku)
{
    return this.Details.Any(d => d.Product.Sku == productSku);
}

I’m sure that some of the implementations are different from what you imagined, but are they different in any substantial way? Probably not. A method called HasDetailForProductSku can have only so many syntactic implementations, all of which are semantically identical.

The method names and signatures state what our code is doing in English-language terms that are meaningful to a person. The bodies of these methods provide the syntax that is meaningful to the compiler. Class, method, parameter, and variable names are targeted at people; their purpose is to express the intent of the code. Conditionals, loops, and other statements are the basic forms in which we code, but their target audience is the compiler.

To put this in the form of a general principle: “Refactor when the semantic meaning of the code is hidden or obscured by the syntax required by the compiler,” or “Refactor when you have to spend too long figuring out what the code does.”

TDD Talking Point: TDD is more about API than Implementation

One of the points I tried to make in my talk about TDD yesterday is that TDD is more focused on the clarity and expressiveness of your code than on its actual implementation. I wanted to take a little time and expand on what I meant.

I used a shopping cart as a TDD sample. In the sample, the requirement is that as products are added to the shopping cart, the cart should contain a list of OrderDetails that are distinct by product Sku. Here is the test I wrote for this case (this is commit #8 if you want to follow along):

[Test]
public void Details_AfterAddingSameProductTwice_ShouldDefragDetails()
{
    // Arrange: Declare any variables or set up any conditions
    //          required by your test.
    var cart = new Lib.ShoppingCart();
    var product = new Product() { Sku = "ABC", Description = "Test", Price = 1.99 };
    const int firstQuantity = 5;
    const int secondQuantity = 3;

    // Act:     Perform the activity under test.
    cart.AddToCart(product, firstQuantity);
    cart.AddToCart(product, secondQuantity);

    // Assert:  Verify that the activity under test had the
    //          expected results
    Assert.That(cart.Details.Count, Is.EqualTo(1));
    var detail = cart.Details.Single();
    var expectedQuantity = firstQuantity + secondQuantity;
    Assert.That(detail.Quantity, Is.EqualTo(expectedQuantity));
    Assert.That(detail.Product, Is.SameAs(product));
}

The naive implementation of AddToCart is currently as follows:

public void AddToCart(Product product, int quantity)
{
    this._details.Add(new OrderDetail()
                          {
                              Product = product, 
                              Quantity = quantity
                          });
}

This implementation of AddToCart fails the test case, since it does not account for adding the same product Sku twice. In order to get to the “Green” step, I made these changes:

public void AddToCart(Product product, int quantity)
{
    if (this.Details.Any(detail => detail.Product.Sku == product.Sku))
    {
        this.Details.First(detail => detail.Product.Sku == product.Sku).Quantity += quantity;           
    }
    else
    {
        this._details.Add(new OrderDetail()
        {
            Product = product,
            Quantity = quantity
        });                
    }

}

At this point the test passes, but I think the above implementation is kind of ugly. Having the code in this ugly state still has value, though, because now I know I have solved the problem correctly. Let’s start by consolidating the duplicated query into a single SingleOrDefault call and testing the result for null.

public void AddToCart(Product product, int quantity)
{
    var detail = this.Details.SingleOrDefault(d => d.Product.Sku == product.Sku);
    if (detail != null)
    {
        detail.Quantity += quantity;           
    }
    else
    {
        this._details.Add(new OrderDetail()
        {
            Product = product,
            Quantity = quantity
        });                
    }
}

The algorithm being used is becoming clearer.

  1. Determine if I have an OrderDetail matching the Product Sku.
  2. If I do, increment the quantity.
  3. If I do not, create a new OrderDetail matching the product Sku and set its quantity.

It’s a pretty simple algorithm. Let’s do a little more refactoring and apply Extract Method to the detail lookup.

public void AddToCart(Product product, int quantity)
{
    var detail = GetProductDetail(product);
    if (detail != null)
    {
        detail.Quantity += quantity;           
    }
    else
    {
        this._details.Add(new OrderDetail()
        {
            Product = product,
            Quantity = quantity
        });                
    }
}

private OrderDetail GetProductDetail(Product product)
{
    return this.Details.SingleOrDefault(d => d.Product.Sku == product.Sku);
}

This reads more clearly still. This is also where I stopped in my talk. Note that it has not been necessary to make changes to my test case, because all of the changes go to the private implementation of the class. I’d like to go a little further now and say that if I change the algorithm, I can actually make this code even clearer. What if the algorithm were changed to:

  1. Find or Create an OrderDetail matching the product sku.
  2. Update the quantity.

In the first algorithm, I am taking different action with the quantity depending on whether or not the detail exists. In the new algorithm, I’m demoting the importance of whether the order detail already exists so that I can always take the same action with respect to the quantity. Here’s the naive implementation:

public void AddToCart(Product product, int quantity)
{
    OrderDetail detail;
    
    if (this.Details.Any(d => d.Product.Sku == product.Sku))
    {
        detail = this.Details.Single(d => d.Product.Sku == product.Sku); 
    }
    else
    {
        detail = new OrderDetail() { Product = product };
        this._details.Add(detail);
    }
    
    detail.Quantity += quantity;           
}

The naive implementation is already a little clearer. Let’s apply some refactoring effort and see what happens. Let’s apply Extract Method to the entire process of getting the order detail.

public void AddToCart(Product product, int quantity)
{
    var detail = GetDetail(product);

    detail.Quantity += quantity;
}

private OrderDetail GetDetail(Product product)
{
    OrderDetail detail;
    
    if (this.Details.Any(d => d.Product.Sku == product.Sku))
    {
        detail = this.Details.Single(d => d.Product.Sku == product.Sku); 
    }
    else
    {
        detail = new OrderDetail() { Product = product };
        this._details.Add(detail);
    }
    return detail;
}

This is starting to take shape. However, “GetDetail” does not really communicate that we may be creating a new detail instead of just returning an existing one. If we rename it to FindOrCreateDetailForProduct, we may get that clarity.

public void AddToCart(Product product, int quantity)
{
    var detail = FindOrCreateDetailForProduct(product);

    detail.Quantity += quantity;
}

private OrderDetail FindOrCreateDetailForProduct(Product product)
{
    OrderDetail detail;
    
    if (this.Details.Any(d => d.Product.Sku == product.Sku))
    {
        detail = this.Details.Single(d => d.Product.Sku == product.Sku); 
    }
    else
    {
        detail = new OrderDetail() { Product = product };
        this._details.Add(detail);
    }
    return detail;
}

AddToCart() looks pretty good now. It’s easy to read, and each line communicates the intent of our code clearly. FindOrCreateDetailForProduct(), on the other hand, is less easy to read. I’m going to apply Extract Conditional to the if statement, and Extract Method to each side of the expression. Here is the result:

private OrderDetail FindOrCreateDetailForProduct(Product product)
{
    var detail = HasProductDetail(product) ? 
        FindDetailForProduct(product) : 
        CreateDetailForProduct(product);
    return detail;
}

private OrderDetail CreateDetailForProduct(Product product)
{
    var detail = new OrderDetail() { Product = product };
    this._details.Add(detail);
    return detail;
}

private OrderDetail FindDetailForProduct(Product product)
{
    var detail = this.Details.Single(d => d.Product.Sku == product.Sku);
    return detail;
}

private bool HasProductDetail(Product product)
{
    return this.Details.Any(d => d.Product.Sku == product.Sku);
}

Now I’ve noticed that HasProductDetail and FindDetailForProduct are only using the product sku. I’m going to change the signature of these methods to accept only the sku, and I’ll change the method names accordingly.

public void AddToCart(Product product, int quantity)
{
    var detail = FindOrCreateDetailForProduct(product);
    detail.Quantity += quantity;
}

private OrderDetail FindOrCreateDetailForProduct(Product product)
{
    var detail = HasDetailForProductSku(product.Sku) ? 
        FindDetailByProductSku(product.Sku) : 
        CreateDetailForProduct(product);
    return detail;
}

private OrderDetail CreateDetailForProduct(Product product)
{
    var detail = new OrderDetail() { Product = product };
    this._details.Add(detail);
    return detail;
}

private OrderDetail FindDetailByProductSku(string productSku)
{
    var detail = this.Details.Single(d => d.Product.Sku == productSku);
    return detail;
}

private bool HasDetailForProductSku(string productSku)
{
    return this.Details.Any(d => d.Product.Sku == productSku);
}

At this point, the AddToCart() method has gone through some pretty extensive refactoring. The basic algorithm has been changed, and the implementation of the new algorithm has been reworked substantially. Now let me point something out: at no time during any of these changes did our test fail, and at no time did our test fail to express the intended behavior of the class.

We made changes to every aspect of the implementation. We changed the order of the steps in the algorithm. We constantly added and renamed methods until we had discrete, well-named functions that state explicitly what the code is doing. The unit test remained a valid expression of intended behavior despite all of these changes.

This is what it means to say that a test is more about API than implementation. The unit test should not depend on the implementation, nor does it necessarily imply a particular implementation.

Happy Coding!

TDD Presentation Resources

Tomorrow I will be giving a talk on TDD at CMAP. The demo code and outline I will be using can be found on bitbucket here.

Here is the outline for the talk:

  • I. Tools
    • A. Framework
    • B. Test Runner
    • C. Brains
  • II. Test Architecture
    • A. Test Fixture
    • B. Setup
    • C. Test Method
    • D. TearDown
  • III. Process
    • A. Red
    • B. Green
    • C. Refactor
    • D. Rinse and Repeat
  • IV. Conventions
    • A. At least one test fixture per class.
    • B. At least one test method per public method.
    • C. Test Method naming conventions
      • i. MethodUnderTest_ConditionUnderTest_ExpectedResult
    • D. Test Method section conventions
      • i. Arrange
      • ii. Act
      • iii. Assert
  • V. Other Issues
    • A. Productivity Study
    • B. Testing the UI
      • i. Not technically possible without more tooling/infrastructure
      • ii. MVC patterns increase unit-test coverage.
      • iii. Legacy code.
        • a. Presents special problems.
        • b. Touching untested legacy code is dangerous.
        • c. Boy-Scout rule.
        • d. Use your own judgment
    • C. Pros and Cons
      • i. Pros
        • a. Quality.
        • b. Encourage a more loosely-coupled design.
        • c. Document the work that is done.
        • d. Regression testing.
        • e. Increased confidence in working code means changes are easier to make.
        • f. Encourages devs to think about code in terms of API instead of implementation.
          • 1. Makes code more readable.
          • 2. Readable code communicates intent more clearly.
          • 3. Readable code reduces the need for additional non-code documentation.
      • ii. Cons
        • a. Takes longer to develop.
        • b. Test code must be maintained as well.
        • c. Requires that devs adapt to new ways of thinking about code.
    • D. Notes

CMAP Presentation

My “Introduction to Test Driven Development” talk was accepted for the Fall 2010 CMAP! From the talk description:

This talk will demonstrate the basics of writing unit tests, as well as how Test Driven Development solves many common software development problems. We will cover some of the research that has been done on the effectiveness of TDD. We will also deal with some of the more common questions and concerns surrounding TDD, such as productivity, testing the UI, and testing legacy code.

What Happens to Software Under Arbitrary Deadlines?

There are four basic engineering aspects to developing a software system: Requirements, Test, Design, and Implementation. When a project is squeezed for time, it is the Design and Test aspects that get squeezed to make room for Implementation. Design activities are in effect frozen, which means that the design of the system will stay static even as new features are added. Testing simply stops, with all that that implies.

As implementation continues, new requirements are uncovered or areas of existing requirements are discovered to need modification or clarification. All of these factors imply necessary changes to the design of the system, but since design is frozen, the necessary changes don’t happen. Since testing isn’t happening, the full ramifications of this fact are not immediately felt.

The only remaining implementation choice is to squeeze square pegs of new features into the round holes of the existing design. This option requires more time to implement because time is spent creating overhead to deal with the dissonance between the design and the requirements. The resulting code is harder to understand because of the extra overhead. This option is more prone to error because more code exists than should be necessary to accomplish the task. Every additional line of code is a new opportunity for a bug.

All of these factors result in code that is harder to maintain over time. New features are difficult to implement because they will require the same kind of glue-code to marry a poor design to the new requirements. Further, each new feature deepens the dependency of the system on the poor design, making it harder to correct, and making it ever easier to continue throwing bad code after bad. When considered in isolation, it will always seem cheaper to just add crap-code to get the new feature in rather than correct the design, but cumulatively it costs much more. Eventually it will kill the project.

As bad as all this is, the problems don’t stop there. As long as the ill-designed code continues to exist in the system, it serves to undermine the existing and all future features in two ways: 1) It becomes a pain point around which ever more glue-code will have to be written as the interaction of the ill-designed feature with the rest of the system changes. 2) It acts as a precedent in the code base, demonstrating that low-quality code is acceptable so long as the developer can find a reason to rationalize it.
