Agile Antipattern: Myopic Planning

I recently came across this tweet:

Read the entire article. It’s fascinating. Here are some choice quotes:

work gets atomized into “user stories” and “iterations” that often strip a sense of accomplishment
from the work, as well as any hope of setting a long-term vision for where things are going.

Instead of working on actual, long-term projects that a person could get excited about, they’re
relegated to working on atomized, feature-level “user stories” and often disallowed to work on
improvements that can’t be related to short-term, immediate business needs (often delivered from
on-high).

the sorts of projects that programmers want to take on, once they master the basics of the craft,
are often ignored, because it’s annoying to justify them in terms of short-term business value.
Technical excellence matters, but it’s painful to try to sell people on this fact if they want,
badly, not to be convinced of it.

Under Agile, technical debt piles up and is not addressed because the business people calling the
shots will not see a problem until it’s far too late or, at least, too expensive to fix it.
Moreover, individual engineers are rewarded or punished solely based on the optics of the current
two-week “sprint”, meaning that no one looks out five “sprints” ahead. Agile is just one mindless,
near-sighted “sprint” after another: no progress, no improvement, just ticket after ticket after ticket.

I can summarize my own views on Scrum by saying “Scrum is a process that teaches a team how not to need Scrum.” I’ll likely expand on that idea later, but it is not the focus of this post.

Short Term Planning

A common theme running through many of Church’s criticisms is the failure to plan long-term. Larger engineering tasks do not get done because they cannot fit into an arbitrarily short time-frame. Technical debt gets ignored. Software rots and it is regarded as normal.

Church places the blame on Scrum and appears to argue that engineers should be in charge of the project planning so that they can take on the larger, longer-term, more expensive initiatives that will ultimately save the company money.

Checks & Balances

We’ve tried that. That was the default state of the industry from its inception. To most people, software engineering was akin to black magic; to others it was just grunt work. (I once had a supervisor say to me in all seriousness, “I had a class in C in college. A loop is a loop, right? Anybody can do this.”) Software engineers would spend their time building systems and frameworks for features that were ultimately not needed. (They still do. Watch this talk by Christin Gorman for a real-world example from a modern project.) This tendency of engineers to “gold-plate” their code or work on “what’s cool” resulted in huge cost overruns for software that was delivered late, over budget, and often did not do what was expected of it.

In an attempt to get some kind of control over software projects, The Business started taking more direct control of them. Since they could not trust the engineers to stay focused on delivering business value, they began ruthlessly cutting anything that did not directly contribute to features they could see. This was a problem because they often did not understand the tradeoffs they were making, which resulted in software that did what it was supposed to in v1 but became harder and harder to maintain over time due to poor engineering.

Agile/Scrum attempts to address this problem by creating a clear separation of responsibilities. The Business is responsible for feature definition and prioritization. Development is responsible for estimation and implementation. The Business decides what is built and when but Development is in control of how. Instead of trying to teach The Business software engineering, Development communicates about alternatives in terms of estimates and risks.

Development: If you choose strategy ‘A’, we estimate it will take this long with these risks.
If you choose strategy ‘B’, it will take longer, but have fewer risks.

Commitment

In order for this division of labor to be successful, each party must commit to it. Introducing a Scrum (or any other) process to a team does not make the team agile. Just because the engineers practice TDD, Continuous Integration, Continuous Deployment, and other good engineering practices does not mean the team is agile.

The Business is part of the team. It is the Business’ responsibility to do the long-term planning. It is the engineer’s responsibility to communicate about estimates and risk. If either party fails to do their part, it is not a failure of “Agile,” but a failure of the people involved.

You might be tempted to accuse me of the “No True Scotsman” fallacy at this point. If so, you are missing my point. The Agile Software movement is focused on a philosophy for interacting with The Business, one that values the contributions of all stakeholders and encourages trust. It is not a prescription for particular processes. No process will succeed if the stakeholders do not buy into the underlying philosophy.

Agile is about the people. It’s right there in the manifesto.

Individuals and interactions over processes and tools

An observant critic might respond at this point by pointing to another piece of the manifesto:

Responding to change over following a plan

Isn’t this an endorsement of short-term thinking? No. At the time the manifesto was written, it was much more likely that software would be written in months- or years-long iterations. This created a problem: business circumstances would change in ways that diminished the value of planned features before they were delivered. It is not an injunction against planning per se. I call your attention to the last sentence in the manifesto:

That is, while there is value in the items on the right, we value the items on the left more.

To adapt a popular adage, “Plan long-term, work short-term.”

Communication

I don’t know many software engineers who entered the profession because they wanted to work with people. However, working with people is an absolute must in just about every profession. If we want The Business to think long-term and make solid engineering choices, we must learn to communicate with them.

The responsibility to communicate is bidirectional. We must learn to communicate about estimates and risks. The Business must learn to consider risks as well as costs. The Business should also learn engineering concepts at a high level (e.g., we can scale better if we use messaging).

It’s my observation that there often is a long-term plan, but it is not necessarily communicated. As engineers, we must endeavor to find out what it is. We must be honest when we think we are being pushed to make an engineering mistake. If we feel strongly about an option, we must become salesmen and sell our perspective.

How do I Communicate with The Business?

I once had a product owner come to me and ask for a feature. My team estimated two weeks to implement and test the new feature. He needed it in three days. He had some technical knowledge and offered a solution that might get the feature done faster. His solution would work, but it involved a lot of bad practices. We reached an agreement: we would deliver the feature in three days using the hacky solution, but our next project would be to implement it correctly.

We were able to have this conversation because my team and I had a track record of delivering what was asked for in a working, bug-free state on a consistent basis. In other words, we had trust. It took some time to build that relationship of trust, but not as much as you might think. When I’m communicating with The Business about engineering alternatives, I make sure I answer 3 questions.

  1. What problem are we trying to solve?
  2. How long will each alternative take to deliver?
  3. What are the risks associated with each alternative?

I make sure that The Business and I are in agreement on the answers to all 3 of these questions. Then I accept their decision.

What if They are Still Myopic?

If you’re sure you’ve done everything you can to communicate clearly about short vs. long-term options and risks and if The Business insists on always taking the short-term high-risk solution, then you might be forced to conclude that you don’t like working for that particular business. It might be time to move on.

I hate saying it, but most of the companies I worked for in the early part of my career are places I would not go back to. I started my career in Greenville, SC. The first company I truly enjoyed working for was in Washington DC. I ended up in Seattle, WA where I found a company that does embrace most of my values. Do we have problems? Absolutely. However, with reason and communication as our tools, we are addressing them.

Simple Programmer Blogging Course

I’ve had a blog for years, but my blogging frequency is intermittent. I wanted to see what a successful blogger would say, so I took John Sonmez’ Simple Programmer Blogging Course. I can’t say that there were any lightning-bolt insights. If I had to boil the course down to four words, it would be “get off your ass,” along with some helpful tricks to overcome common roadblocks to getting that next blog post written.

Two of my biggest bottlenecks have been trying to figure out what to write about, and perfectionism. “Did I cover everything relevant to [topic]? Did I correctly render the forest and the trees? Am I certain about the advice I gave?” Sonmez’ course gave me techniques to deal with these bottlenecks for which I’m grateful.

If you are at all interested in blogging but a) are not sure how to get started, or b) have felt stalled, I would recommend that you take the course. You’ve got nothing to lose–it’s free!

The Normalization of Deviance

The Normalization of Deviance is a concept that

describes a situation in which an unacceptable practice has gone on for so long without 
a serious problem or disaster that this deviant practice actually becomes the 
accepted way of doing things. 

Credit: Challenger explosion

You’re familiar with this concept already. You’ve encountered it every time you’ve seen someone doing something they know to be wrong but justifying it with “we’ve never had a problem before.” This is the person who consistently buys things s/he can’t afford with credit cards. This is the person who goes out drinking and drives home. This is the software engineer who writes code without tests. This is the business that, through continual deferment, ignores the technical debt issues raised by its engineers.

As software engineers, we know we should remove dead code in projects. We know that we should automate software deployment. We know we should provide reliable automated tests for our features. We know we should build and test our software on a machine other than our personal dev box. We know that we should test our software in a prod-like environment that is not prod.

Do you do these things in your daily work? Does your organization support your efforts?

How do you identify the Normalization of Deviance?

One of the challenges you will face identifying Normalization of Deviance is the fact that things you do on a daily basis are… normal.

There are several “smells” that could indicate that your organization is having problems.

  • You have “official” policies that do not describe how you actually do work.
  • You have automation that routinely fails and requires handholding to reach success.
  • You have unit or integration tests that fail constantly or intermittently such that your team
    does not regard their failure as a “real” problem.
  • You have difficult personalities in key positions who turn conversations about their effectiveness
    into conversations about your communication style.

All of these issues are “of a kind,” meaning that they are all examples of routinely accepted failure. This is obviously not an exhaustive list.

Why is it a problem?

You will eventually have a catastrophic failure. Catastrophic failures seldom occur in a vacuum. Usually there is a host of seemingly unrelated smaller problems that are part of daily life. Catastrophic failures usually occur when the stars align and the smaller issues coalesce in such a way that some threat vector is allowed to completely wreck a process. This is known as the Swiss Cheese Model of failure. I first learned about the Swiss Cheese Model in a book called The Digital Doctor, a holistic view of the positive and negative effects of software in the medical world. The book is well worth reading. The section on the deadly consequences of “alert fatigue” would be of special interest to software engineers and UX designers.

A Real World Example

A fascinating case of software failure that destroyed a company overnight is the story of Knight Capital. In a writeup, Doug Seven lays the blame on the lack of an automated deployment system. I agree, though I think the problems started much, much earlier. Doug writes:

The update to SMARS was intended to replace old, unused code referred to as “Power Peg” – 
functionality that Knight hadn’t used in 8-years (why code that had been dead for 
8-years was still present in the code base is a mystery, but that’s not the point). 

It’s my point. The Knight Capital failure began 8 years earlier, when dead code was left in the system. I’ve had conversations with The Business in which I’ve tried to justify removing dead code. It’s hard to make them understand what the danger is. “It’s not hurting anything, is it? It hasn’t been a problem so far, has it?” No, but it will be.

The second nail in Knight Capital’s coffin was that they chose to repurpose an old flag that had been used to activate the old functionality. As Doug Seven writes:

The code was thoroughly tested and proven to work correctly and reliably.
What could possibly go wrong?

Indeed.

The final nail was that Knight Capital used a manual deployment process. They were to deploy the new software to 8 servers. The deployment technician missed one. I don’t actually know this, but I can just imagine the technician staying after hours to do the upgrade and wanting nothing more than to go home to his/her family or happy hour or something.

At 9:30 AM Eastern Time on August 1, 2012 the markets opened and Knight began processing 
orders from broker-dealers on behalf of their customers for the new Retail Liquidity 
Program. The seven (7) servers that had the correct SMARS deployment began processing 
these orders correctly. Orders sent to the eighth server triggered the supposedly 
repurposed flag and brought back from the dead the old Power Peg code.

There were more issues during their attempt to fix the problem, but none of it would have happened had these 3 seemingly minor problems not coalesced into a perfect storm of failure. The end result?

Knight Capital Group realized a $460 million loss in 45-minutes... Knight only 
has $365 million in cash and equivalents. In 45-minutes Knight went from being the 
largest trader in US equities and a major market maker in the NYSE and NASDAQ to bankrupt.

How do you fix it?

I’m still figuring that out. Luckily, Redacted Inc doesn’t have too many of these sorts of problems, so my opportunities for practice are few, but here are my thoughts so far.

The biggest challenge in these scenarios is that people are so used to accepting annoying or unreliable processes as normal that they cease to see them as daily failure. It’s not until after disaster has struck that it’s clear that accepted processes were in fact failures. Nobody at Knight Capital was thinking “jeez, that dead code is really hurting us.”

There’s an old management adage: “You can’t manage what you can’t measure.” You can start addressing Normalization of Deviance issues by identifying risky patterns and practices that your organization uses in its daily standard operating procedure. If you can find a way, assign a cost to them. Consider ways in which these normal failures could align to cause a catastrophe. If you have a sympathetic ear in management, start talking to them about this. Introduce your manager to these concepts. Tell them about Knight Capital. Your goal is to get management and The Business to see the failures for what they are. By measuring the risks and costs of accepted failure to your organization, you will have an easier time getting your voice heard.

Most importantly, come up with a plan to address the issues. It’s not enough to say “this is a problem.” You need to say “this is a problem, and here are some solutions.” Go further still: show how to get your team from “here” to “there.” Try to design solutions that make the day-to-day work easier, not harder. Jeff Atwood calls this the Pit of Success. His blog scopes this concept to code, but it applies to processes as well. You want your team to “fall into” the right way to do things.

Another potential source of positive feedback is new members of your team. It may be hard to get them to open up for fear of crossing the wrong person, but they are new to your organization, and they will see more clearly the things that look like failures waiting to happen. Nothing you are doing is normal to them yet.

Dependency Injection Patterns

Motivation

I’ve seen several different approaches to Dependency Injection, each of which has its own strengths and weaknesses. I run an internship program in which I teach these patterns to college students. I believe each pattern and anti-pattern has its pros and cons, but my observation is that even experienced devs haven’t fully articulated the cost-benefit of each approach to themselves. This is my attempt to do that.

Property Injection

“Property Injection” or “Setter Injection” refers to the process of assigning dependencies to an object through a Property or Setter method.

Example

public class Widget
{
    public Bar Bar {get; set; }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

There are pros and cons to this approach.

Pros

  • Enables easy faking for test purposes.
  • Keeps method signatures tidy.
  • Dependencies are clearly specified if you are looking at the class.

Cons

  • Temporal Dependency

What is a temporal dependency? Perhaps it’s best to illustrate with an example. Here is my first attempt at using the class as defined above.

    var widget = new Widget();
    widget.Foo("Hello World!");

Did you spot the problem? I never set Bar. When I attempt to call Foo I’ll get the oh-so-helpful NullReferenceException. I find out after I try to use the class that I have a missing dependency. I have to open the class to find out which dependency is missing. Then I have to modify my code as follows:

var widget = new Widget();
widget.Bar = new Bar();
widget.Foo("Hello World!");

It’s called a temporal dependency because I have to set it before I can call any methods on the class. What’s worse, the API of the class doesn’t give me any indication that anything is wrong until after I attempt to run it.

Method Injection

“Method Injection” refers to passing dependencies to the method that uses them.

Example

public class Widget
{
    public void Foo(Bar bar, string someValue)
    {
        // snipped
    }
}

Pros

  • No temporal dependencies
  • Dependencies are clearly communicated via the API
  • Easily replace dependencies with fakes for testing purposes

Cons

  • Method signature explosion
  • Method signature fragility
  • Clients have to concern themselves with the class’s dependencies

What are method signature explosion and fragility? Method Signature Explosion means that the number of arguments to my methods will increase as dependencies change. This leads to Method Signature Fragility, which means that as dependencies change, clients of the method have to change as well. In other words, we lose the benefit of encapsulated logic.
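
To make this concrete, here is a minimal sketch of the ripple effect, assuming a hypothetical second dependency named Baz:

// Version 1: callers supply the single dependency themselves.
var widget = new Widget();
widget.Foo(new Bar(), "Hello World!");

// Version 2: Foo grows a second dependency (the hypothetical Baz),
// so the signature becomes Foo(Bar bar, Baz baz, string someValue),
// and every existing call site breaks until it is updated:
widget.Foo(new Bar(), new Baz(), "Hello World!");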

Constructor Injection

Constructor Injection is the process of making dependencies available to a class through its constructor.

Example

public class Widget
{
    private Bar _bar;

    public Widget(Bar bar)
    {
        _bar = bar;
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}

Pros

  • Enables easy faking for test purposes.
  • Keeps method signatures tidy.
  • Dependencies are clearly specified through the API
  • No temporal dependencies
  • No Method Signature Explosion
  • No Method Signature Fragility

Cons

  • none – other than those inherent to the nature of using Dependency Injection in the first place.

Of the three approaches listed so far, I strongly prefer Constructor Injection. I see nothing but benefits in this approach.

Lazy Injection

If you’re getting started with Dependency Injection, I strongly recommend researching a Dependency Injection Framework such as Ninject to make constructing your class hierarchies easy. If you’re not ready to bite that off, you might consider using Lazy Injection. This is a technique by which your constructor arguments are given default values so that you can instantiate your class with all of your default system values at run-time, but pass fakes at test time.
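
If you do go the framework route, here is a minimal wiring sketch, assuming Ninject and the Constructor Injection version of Widget from the examples above; the Lazy Injection alternative follows in the next example.

// using Ninject;  // provides StandardKernel, Bind<T>(), and the Get<T>() extension
var kernel = new StandardKernel();
kernel.Bind<Bar>().ToSelf();        // with an interface you would Bind<IBar>().To<Bar>()
var widget = kernel.Get<Widget>();  // the kernel constructs Widget and supplies its Bar
widget.Foo("Hello World!");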

Example

public class Widget
{
    private Bar _bar;

    // C# default parameter values must be compile-time constants,
    // so the "lazy" default is supplied via null-coalescing instead.
    public Widget(Bar bar = null)
    {
        _bar = bar ?? new Bar();
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}

You can do this with Property Injection as well, mitigating some of the cons of that approach. However, you are still left opening the class to figure out how and what to fake.

Example

public class Widget
{
    public Bar Bar {get; set; }

    public Widget()
    {
        Bar = new Bar();
    }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

Service Locator

Service Locator is widely considered to be an anti-pattern. To understand why, read “ServiceLocator is an Anti-Pattern”.
Service Locator involves making an open-ended registry of dependencies widely available to any class that wants them.

Example

public class Widget
{
    public Bar Bar {get; private set; }

    public Widget()
    {
        Bar = ServiceLocator.Get<Bar>();
    }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

On the surface this looks awesome.

Pros

  • Keeps method signatures tidy.
  • No temporal dependencies
  • No Method Signature Explosion
  • No Method Signature Fragility

Cons

  • Dependencies are not clearly specified through the API
  • Because my API doesn’t communicate my dependencies, I have to understand the class’s implementation details to properly test it (see the sketch after this list).
  • It encourages dependency explosion inside the class. This is another way of saying that a class with too many constructor arguments is a “smell,” and I lose the benefit of being confronted with that smell if I use ServiceLocator.
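
To illustrate the first two cons, here is a sketch of what a unit test might have to do; the Register method and the FakeBar subclass are assumptions on my part, since the registration side of the ServiceLocator is not shown above:

// Nothing in Widget’s API says it needs a Bar; you only learn that by reading its source.
ServiceLocator.Register<Bar>(new FakeBar()); // hypothetical registration API; FakeBar derives from Bar
var widget = new Widget();                   // the constructor silently pulls Bar from the locator
widget.Foo("Hello World!");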

Despite these flaws, it is sometimes useful as a scaffolding mechanism to introduce a Dependency Injection Framework into an application that was not designed with Dependency Injection in mind. I want to stress that I believe this pattern should only be used as an interim step on the road to full Dependency Injection Framework support in legacy applications. You should make it a point to remove ServiceLocator as quickly as possible after introducing it.

Closing Thoughts

These patterns are by no means exhaustive, but they are the common ones I’ve seen over the years.

If you are using a Dependency Injection Framework (my favorite is Ninject), some of the cons of these approaches may be mitigated by the Framework. This may change the equation with respect to which method is appropriate for your use-case.

Odin-Commands 0.2.1 Released: Before & After Execute Hooks

Today I released Odin-Commands 0.2.1 on nuget.org.

What’s New?

I was writing a CLI command when I realized it would be nice to be able to set default values for the Common Parameters on the command prior to executing the action.
The difficulty is that some of the default parameter values are composed from other parameter values but all of them are settable by the user.
To achieve this goal I added overridable OnBeforeExecute and OnAfterExecute methods to the Command class.

How do I use it?

public class MyCommand : Command
{

  protected override void OnBeforeExecute(MethodInvocation invocation)
  {
     ApplyDefaultValues(); // or do other stuff prior to executing the invocation.
  }

  protected override int OnAfterExecute(MethodInvocation invocation, int result)
  {
    // you can return a different exit code if you need to.
    return base.OnAfterExecute(invocation, result);
  }

}
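
For example, here is a sketch of the kind of default-value composition I had in mind. The BuildCommand class and its ProjectDirectory/OutputDirectory parameters are hypothetical; the [Parameter] attribute and the OnBeforeExecute hook are the ones shown above.

public class BuildCommand : Command
{
    [Parameter]
    public string ProjectDirectory { get; set; } = ".";

    [Parameter]
    public string OutputDirectory { get; set; }

    protected override void OnBeforeExecute(MethodInvocation invocation)
    {
        // OutputDirectory defaults to a folder under ProjectDirectory,
        // but only if the user did not supply it explicitly.
        if (string.IsNullOrEmpty(OutputDirectory))
            OutputDirectory = System.IO.Path.Combine(ProjectDirectory, "out");

        base.OnBeforeExecute(invocation);
    }
}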

Odin 0.2 Released With a New Feature: Common Parameters

What are Common Parameters?

Common parameters are parameters that are repeated across multiple actions in a CLI context. For example, I might have a CLI that takes a --verbose flag to switch on verbose output for all of my actions. Rather than require that the CLI developer add a bool verbose switch to every action in his/her Command implementation, Odin allows you to declare a property on the Command implementation as a [Parameter].

Example

[Description("Provides search capabilities for books.")]
 public class SearchCommand : Command
 {
     [Parameter]
     [Alias("s")]
     [Description("The order to sort the results of the search.")]
     public SortBooksBy SortBy { get; set; }

     [Action]
     [Description("Searches books by author.")]
     public void Author(
         [Alias("a")]
         [Description("The author of the book being searched for.")]
         string author)
     {
         Logger.Info("Find books by author '{0}'; sort results by '{1}'.\n", author, SortBy);
     }

     [Action(IsDefault = true)]
     [Description("Searches books by title.")]
     public void Title(
         [Alias("t")]
         [Description("The title of the book being searched for.")]
         string title)
     {
         Logger.Info("Find books by title '{0}'; sort results by '{1}'.\n", title, SortBy);
     }

     [Action]
     [Description("Searches books by release date.")]
     public void ReleaseDate(
         [Alias("f")]
         [Description("The release date from which to search.")]
         DateTime? from = null,

         [Alias("t")]
         [Description("The release date to which to search.")]
         DateTime? to = null)
     {
         Logger.Info("Find books by release date: '{0}' - '{1}'; sort results by '{2}'.\n", from, to, SortBy);
     }
 }

In the above example, SortBy is available to all of the actions defined on the Command. It can be set at the command-line by passing the --sort-by <value> switch. Odin will parse and set the switch before executing the action.
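
Assuming the SearchCommand above is registered as a subcommand named search, and picking an illustrative executable name and sort value, invocations might look like:

exe search author --author "Octavia Butler" --sort-by title   # common parameter by name
exe search author -a "Octavia Butler" -s title                # same invocation using aliases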

This functionality is now available in version 0.2 of the Odin-Commands nuget package.

TeamCity & “Works on My Machine”

I love TeamCity as a build/CI tool. I really do. However, I have noticed from time to time, when searching their bug-tracking system, that I’m in the midst of reporting a bug they had previously fixed. I’ve seen this more than once, which led me to wonder aloud on twitter if the TeamCity developers practiced TDD.

I followed the link to their blog post. In their discussion of how they do Continuous Delivery within their organization I would draw your attention to step 2:

While the build is running, two developers from the team (Duty Devs) review all commits made by the team during the day and if they both agree commits won’t break critical parts of the application, such as areas responsible for running and scheduling builds and build agent auto-upgrade, they put a special tag on the build marking it as “safe for deployment”.

emphasis added

Now let me draw your attention to Jeff Atwood’s wonderfully snarky blog post, Works on My Machine. Step 3:

Cause one code path in the code you’re checking in to be executed. The preferred way to do this is with ad-hoc manual testing of the simplest possible case for the feature in question. Omit this step if the code change was less than five lines, or if, in the developer’s professional opinion, the code change could not possibly result in an error.

emphasis added

Again, I love TeamCity. I think it’s a great product. I think its build-chains feature makes it far and away a better CI tool than any of the alternatives, especially in an enterprise-y SOA context. It is not my intention to shame them. Still, the producer of a tool widely used to perform the Continuous Integration side of TDD does not itself practice TDD.

Wow.

Announcing Odin-Commands 0.1.0

In the .NET space there are a number of good libraries to handle run-of-the-mill command line argument parsing.
My current favorite is a nuget package called simply CommandLineParser.
So why write a new one?

In short, current .NET command-line libraries focus only on parsing the args.
You are left responsible for interpreting the args to call the right methods with the correct arguments.
What if CLIs could be as easy to structure and execute as ASP .NET MVC applications?

Try it out!

Install-Package Odin-Commands

Resources

Feedback Welcome

Inspired By Thor

I’ve done some work with Ruby over the last couple of years, and I was really impressed with the feature set offered by a ruby project called thor.
In addition to a declarative approach to binding options to actions and arguments, thor supports the concept of subcommands.
We’ve seen subcommands used to great effect in command-line programs such as git and nuget, but current command line parser packages
offer little help with this feature.

Inspired by Convention Over Configuration

In ASP .NET MVC, urls are routed to the appropriate controller and action by convention. http://mysite.domain/Home/Index is understood to route to a controller called “Home” and invoke a method called “Index.”
In addition, little manual wiring is required because ASP .NET MVC can discover and instantiate the controller easily at runtime.
I wondered if it would be possible to use a combination of reflection and convention to create a command-line application in C#.

Show Me The Code

Setup Code

Write a class that inherits Command. You can also register SubCommands.
Add methods with [Action] attributes to indicate that they are executable.

public class RootCommand : Command
{
    public RootCommand() : this(new KatasCommand())
    {
    }

    public RootCommand(KatasCommand katas)
    {
        base.RegisterSubCommand(katas);
    }

    [Action]
    [Description("The proverbial 'hello world' application.")]
    public int Hello(
        [Description("Override who to say hello to. Defaults to 'World'.")]
        [Alias("w")]
        string who = "World")
    {
        this.Logger.Info($"Hello {who}!\n");
        return 0;
    }

    [Action]
    [Description("Display the current time")]
    public void Time(
        [Description("The format of the time. (default) hh:mm:ss tt")]
        [Alias("f")]
        string format = "hh:mm:ss tt")
    {
        this.Logger.Info($"The time is {DateTime.Now.ToLocalTime().ToString(format)}\n");
    }
}

[Description("Provides some katas.")]
public class KatasCommand : Command
{
    [Action(IsDefault = true)]
    public int FizzBuzz(
        [Alias("i")]
        int input
        )
    {
        FizzBuzzGame.Play(this.Logger, input);
        return 0;
    }

    [Action]
    public int PrimeFactors(
      [Alias("i")]
      int input
      )
    {
        var result = PrimeFactorGenerator.Generate(input);
        var output = string.Join(" ", result.Select(row => row.ToString()));
        this.Logger.Info($"{output}\n");
        return 0;
    }
}


To execute the program, just call Execute(args).

The Program

class Program
{
    static void Main(string[] args)
    {
        var root = new RootCommand(new KatasCommand());
        var result = root.Execute(args);
        Environment.Exit(result);
    }
}

What Do I Get For My Trouble?

You get a command line executable that can be invoked like so:

exe hello --who "world"                 # explicit invocation
exe hello -w "world"                    # argument alias
exe hello "world"                       # implicit argument by order

exe katas fizz-buzz --input 11          # explicit subcommand invocation
exe katas --input 11                    # subcommand + default action
exe katas -i 11                         # subcommand + default action + argument alias
exe katas 11                            # subcommand + default action + implicit argument by order
exe katas prime-factors --input 27      # subcommand + non-default action + explicit argument

Psundle-Ruby

A powershell module for managing your ruby environments on Windows.

I’ve been working with Ruby in a Windows environment for a little over a year now. I’m sad to say that community support for Windows developers is lackluster. We are second-class citizens.

The most frustrating example of this is the lack of decent ruby version switchers. rvm doesn’t install on Windows at all. Ditto for rbenv. uru is a valiant attempt, but it is cumbersome to install and its API is less than intuitive.

The Need

This wouldn’t be much of an issue if ruby installations were backwards compatible, but that is not the case. Even minor version releases of Ruby can introduce breaking changes that ruin your execution environment.

For development purposes, it’s a good idea to install the new ruby version, switch your ruby environment, then run all your tests on all your projects to verify compatibility. If you need to roll back, just switch your ruby environment and everything is good.

The Strategy

As I started digging into how tools like rvm and rbenv work, I was surprised at how little real difficulty there is in reimplementing them on Windows. Aside from the installation features (e.g., rvm install ruby-version), ruby version management is basically just editing the PATH variable. In other words, the great barrier, the monumental technical challenge that prevents anyone from developing an easy-to-install, easy-to-use ruby version switcher is: string manipulation.

The Requirements

A ruby version switcher needs to know the location of installed rubies. It needs to be able to alter the PATH for the current session such that the desired ruby is the one being used.

Note

It is not the norm for Windows developers to think about altering their terminal session. The norm is that alterations to the PATH are permanent. Reorienting our thinking around editing our Session, as opposed to our Environment, has many benefits which I won’t go into here, except to say that it makes the issue of version switching much simpler.

My Solution

I wrote a powershell module called psundle-ruby (compatible with psundle) to discover and manage ruby versions. If you have not looked at psundle, I encourage you to do so as it makes installation of this module as simple as:

Install-PsundleModule 'crmckenzie'  'psundle-ruby'

If you have already installed rubies in your Windows environment, you can execute Register-RubiesInPath to make Psundle aware of them. Otherwise you can invoke Register-Ruby for each ruby location not found in your PATH.

Invoke Use-Ruby to switch to the desired version. The argument to Use-Ruby is an expression. The command switches to the first ruby it finds that matches the expression.

For example, if I have the following rubies installed on my machine:
* ruby-1.9.3
* ruby-2
* ruby-2.1
* ruby-2.2

I can invoke Use-Ruby "1" to switch to ruby-1.9.3. Invoking Use-Ruby "2" will switch me to ruby-2.

Invoke Set-DefaultRuby to permanently alter your PATH variable to automatically select the chosen ruby.

Commitment to Maintain

I commit to maintaining this powershell module through the end of 2016. If you find issues, please report them or (even better) submit a pull request. I will reevaluate my commitment at the end of 2016 based on the level of interest in and usage of this module.

Announcing Psundle: A Vundle-like Module Manager for Powershell

Vim & Vundle

I finally bit the bullet and learned to use Vim competently. One of the things I was really impressed by in the Vim space is a plugin called Vundle. Vundle is a package manager for Vim plugins. At first, swimming in a sea of package managers, I was loath to learn another one, but Vundle is extremely simple. It uses github as a package repository. “Installing” a package is basically as simple as running a git clone to the right location. Updating the package is a git pull. Security is managed by github. Genius.

Powershell

As a developer on Windows I find Powershell to be an extremely useful tool, especially when running in an excellent terminal emulator such as ConEmu. One of the problems that Powershell has is that there is no good way to install modules. The closest thing is PsGet.

What’s wrong with PsGet?

Nothing. PsGet is great. However, not every powershell module can be made public, and not every powershell module developer goes through the process of registering their modules at PsGet.

Introducing Psundle

I thought to myself, “Hell, if Vundle can install modules directly from github, I should be able to implement something similar in Powershell” and Psundle was born.

Psundle is a package manager for Powershell modules, written in Powershell. Its only dependency is that git is available in the PATH.

Disclaimer

Psundle is an alpha-quality product. It works, but API details may change. It will improve if you use it and submit your issues and/or Pull Requests through github.

Installation

You can install Psundle by running the following script in powershell:

iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/crmckenzie/psundle/master/tools/install.ps1'))

In your powershell profile, make sure you Import-Module Psundle. Your powershell profile is located at

"$home/Documents/WindowsPowershell/Profile.ps1"

I advise that you don’t just run some dude’s script you found on the internet. Review the script first (it’s easy to understand). Please, please, please report any installation errors with the self-installer.

Installing Powershell Modules With Psundle

Install-PsundleModule "owner" "repo"

For example, if you want to install the module I wrote for managing ruby versions on Windows, you would run:

Install-PsundleModule "crmckenzie" "psundle-ruby"

What does this accomplish?

As long as you have imported the Psundle module in your profile, Psundle will automatically load any modules it manages into your powershell session.

Other Features

Show-PsundleEnvironment

Executing Show-PsundleEnvironment gives output like this:

Module  Path                                                 Updates                                            HasUpdates
------  ----                                                 -------                                            ----------
Psundle C:\Users\Username\Documents\WindowsPowerShell\Modu…  {dbed58a Updating readme to resolve installation …  True
ChefDk  C:\Users\Username\Documents\WindowsPowerShell\Modu…                                                      False
Ruby    C:\Users\Username\Documents\WindowsPowerShell\Modu…                                                      False
VSCX    C:\Users\Username\Documents\WindowsPowerShell\Modu…                                                      False

Update-PsundleModule

I can update a module by running:

Update-PsundleModule "owner" "repo"

If I’m feeling brave, I can also Update-PsundleModules to update everything in one step.

Requirements For Installed Modules

Because Psundle ultimately just uses git clone to install powershell modules, Powershell modules in github need to be in the same structure that would be installed on disk.

Primarily, this means that the psm1 and psd1 files for the module should be in the repo root.
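
For example, the root of a repository for a hypothetical module named MyModule would look something like this:

MyModule.psd1    # module manifest
MyModule.psm1    # module implementation
README.md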

Commitment to Develop

I’m making a blind commitment to maintain this module through the end of 2016. “Maintenance” means I will answer issues and respond to pull requests for at least that length of time.

Whether I continue maintaining the module depends on whether or not people use it.
