TIL: How to break a Windows Server with a "

The Context

I’m working on bringing Octopus Deploy into our company. We have a good number of Topshelf services. Octopus Deploy does not directly support installing Topshelf services, so I’m creating the functionality in a Step Template. While attempting to install my service I got the command-line arguments slightly wrong: instead of deploying a service called MyService, I deployed a service called "MyService", quotes included. The Services manager on the target server recognized the service, but neither the PowerShell cmdlet Get-Service nor sc.exe could find it. I was eventually able to get a reference to the service by querying Get-WmiObject Win32_Service and filtering on the name. I tried calling the Delete method on that object, and it did not succeed.
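
For the record, here is roughly what that WMI attempt looks like from C# using System.Management; a sketch only, and the same Delete call that refused to work in my case. The RogueServiceRemover name is hypothetical.

using System;
using System.Management; // requires a reference to the System.Management assembly/package

public static class RogueServiceRemover
{
    public static void Main()
    {
        // The broken service name literally includes the quotes: "MyService"
        var query = "SELECT * FROM Win32_Service WHERE Name = '\"MyService\"'";
        using var searcher = new ManagementObjectSearcher(query);

        foreach (ManagementObject service in searcher.Get())
        {
            // Win32_Service.Delete returns 0 on success; in my case it did not.
            var returnValue = Convert.ToUInt32(service.InvokeMethod("Delete", null));
            Console.WriteLine($"Delete returned {returnValue} for {service["Name"]}");
        }
    }
}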

I was stuck. I couldn’t uninstall the service because none of the built-in tools recognized it.

I had no choice but to kill the server and rebuild it. It’s a good thing that we’re trying to treat servers like cattle and not pets.

Deferred Technical Debt is Just Bad Design

Technical Debt is a metaphor to describe software rot. The idea is that each time software engineers take shortcuts in the code they incur “debt” in the code base. Each time future engineers must read and/or modify the indebted code, they pay “interest” on the debt in the form of longer project times and increased risk of defects due to unreadable code.

Some argue, and I’m one of them, that it is sometimes necessary to incur technical debt for the sake of speed. In a personal example, I had a feature that was required to be implemented inside a week. My team estimated the work at 2 weeks. This was unacceptable due to an externally imposed deadline. The team offered a hacky solution to the problem that resulted in a quick turnaround. We offered the solution on the condition that we would immediately be given the 2 weeks to correct the design flaw. Agreement was reached and we released the feature in 1 week; counting the rework, we finished it in 3 weeks.

When you take on technical debt, you don’t reduce the cost of the feature–you increase it. You take on the work of the hacky solution and the work of reworking the hacky solution.

The reason we were able to reach this agreement is that our product owner and our team all understood that bad code is more expensive. We deliberately wrote bad code for the sake of instant gratification and then we immediately paid the full price for good, tested code.

We’ll fix it later

My colleague @jrolstad likes to say

Technical Debt is the lie we tell ourselves that we’ll come back and fix it later.

This is a phenomenon widely observed by many software engineers. We complain that the software is rotting and are promised an opportunity to “fix it later,” but “later” never seems to come.

What is always coming but never arrives? Tomorrow

— Children’s joke

Your code is your design

There’s an antipathy toward useless documentation in the Agile community, especially documentation about what code does. “The best documentation of the code is the code.”

When it comes to the design of their software, many developers make the mistake of thinking of the system in terms of their aspirations for the code base. The design of the system is always its current state. If you take a policy of accumulating technical debt in your code base then your actual design is a mess–not the gleaming structure of rationality you imagine it will be when you fix the technical debt… later.

What can we do?

If your organization has a legitimate need to take on technical debt, insist that the work to repay the debt be placed on the work calendar immediately. Most of the time, the “business value” of getting a feature delivered fast is an attempt to pretend that a feature doesn’t cost as much as it does.

As software engineering professionals, we should not pretend that features do not cost what they do. We should not lie to the business, nor help them lie to themselves.

Estimation

These ideas have an implication with respect to estimation. If you owned a home and wanted to add a room on the second floor, how would you react if your contractor said “Well, these beams are rotted. We can probably build the room without replacing the beams, but your house may collapse in ten years. What do you want to do?” If you are anything like me, you would be appalled that the contractor even offered the option. The fact that the beams are rotted requires that they be replaced. This is not optional. The problem is not “solved” by building a room on rotted beams that will collapse in 10 years. It is not solved even if we know we are going to sell the house in 5 years (I’m looking at you startups).

The engineer should not offer to build the room on rotted beams. We should not offer or suggest alternatives to the business that result in the accumulation of technical debt. We should be honest with ourselves and with our employers about the real cost of the work they have requested.

But what if there’s a legitimate reason?

Is it always wrong to accrue technical debt? I don’t think so. As I stated earlier, I believe it can be acceptable to take some short-term shortcuts to get a solution out fast. However, this should be anomalous and the resultant mess should be cleaned up immediately following the achievement of the business goal. It needs to be understood by all parties that the shortcut costs more, not less.

How do I sell this to the business?

This is a complex topic and I don’t have all the answers. I have some answers and some promising leads.

First, stop presenting hacky alternatives in your estimate. Look at the code and honestly assess what it will take to alter it correctly. What is it going to take to add appropriate tests where they are missing? To clean the code so it will tolerate the change? To make the change? To repair an architectural deficiency? Estimate and present your estimate confidently and do not offer hacky options. If the business wants to negotiate, ask which features they would like to drop from the project. Do not offer to take engineering shortcuts.

Another tactic you can take is to measure estimates vs. technical debt in the code. You’ll have to start collecting some data for this approach and it will take some time. You’ll need:

  • Project estimates
  • How long the projects actually took
  • Cyclomatic complexity for the affected code.

You’ll want to relate the accuracy of estimates to complexity. You should see that estimation accuracy decreases as complexity increases. For extra points, you can look at actual project estimates for similar features in code bases with different complexities. You should see that more complex code bases are harder to maintain–even from an estimation perspective–than their simpler counterparts. You can use this information to put teeth into the claim that unclean code costs the company money.
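
If you want to crunch those numbers, here is a minimal sketch of the analysis; the ProjectRecord shape and field names are my own assumptions, and the relationship is measured with a plain Pearson correlation between complexity and estimation error.

using System;
using System.Linq;

public record ProjectRecord(double EstimatedDays, double ActualDays, int CyclomaticComplexity);

public static class EstimateAnalysis
{
    // Estimation error as a ratio: 0 means perfect; 1 means the project took twice its estimate.
    public static double ErrorRatio(ProjectRecord p) =>
        Math.Abs(p.ActualDays - p.EstimatedDays) / p.EstimatedDays;

    // Pearson correlation between cyclomatic complexity and estimation error.
    // A clearly positive value supports the claim that messier code is harder to estimate.
    public static double Correlation(ProjectRecord[] projects)
    {
        var xs = projects.Select(p => (double)p.CyclomaticComplexity).ToArray();
        var ys = projects.Select(ErrorRatio).ToArray();
        double meanX = xs.Average(), meanY = ys.Average();
        double cov = xs.Zip(ys, (x, y) => (x - meanX) * (y - meanY)).Sum();
        double sdX = Math.Sqrt(xs.Sum(x => (x - meanX) * (x - meanX)));
        double sdY = Math.Sqrt(ys.Sum(y => (y - meanY) * (y - meanY)));
        return cov / (sdX * sdY);
    }
}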

I’d love to hear any other ideas you might have in the comments!

Nobody is Complaining

Sometimes we fall into the trap of thinking that since no one is complaining about our work then everyone must be happy with it. This is a dangerous mode of thought because our customers may not in fact be happy with our work, and because it inhibits us from improving ourselves. Getting accurate customer feedback about our performance is critical if we are to continually improve.

A Personal Example

At Redacted Financial I have some resource needs that are slow to be filled. There’s a process in place to make sure you get the right resources, but budgetary constraints mean that resource needs must be demonstrated before they are doled out. This makes sense to a degree but:

Budgets are there to keep you from being irresponsible. They should not keep you from being smart. 
--Chris McKenzie

Yes, I’m quoting myself–but I really like that formulation :). The process to get more resources is just painful enough that my team members are averse to going through it. Our team has similar needs across the board so iterating through the approval process for each member of the team strikes me as wasteful. I’d rather identify a baseline for all of my team members and start each person with those resources. When I proposed this idea I was told:

We have resources allocated in wildly different ways and nobody is complaining. Clearly there is no standard baseline for your team.
--The Gatekeeper

I thought this attitude was interesting. In the mind of this engineer, “nobody is complaining” is equivalent to “everybody has what they need.” Sometimes as engineers we operate on the assumption that everyone complains if something is bothering them.

This is false. Marketers have known this for years. That’s why they spend so much time and effort to get you to fill out customer satisfaction surveys. Sometimes the most valuable information you can mine for is “where am I failing?” Marketers know that for every person who is vocal about their complaints, there are hundreds they will never hear from.

How can you find out if your customers are happy?

Let me ask a different question first: “Who are your customers?” There are different people who are impacted by your work. They are all stakeholders, but not every stakeholder is your customer. If you can’t look at your stakeholders and clearly identify your primary customer, consider using a Responsibility Assignment Matrix.

I found it useful in a recent project to use the RASCI matrix which is defined as follows:

  • Responsible – This is you in our example
  • Accountable – This is your customer. This is who you are accountable to. This isn’t your boss (necessarily). This is the person who will use the end-product of your work. It’s to enable them in their goals that you are doing yours.
  • Support – These are stakeholders who will support you in your efforts, but who do not directly consume the end-product of your work.
  • Consulted – These are stakeholders who should be consulted about your work. They may need to approve some aspect of what you’re doing, or they may have important insight about how you should go about your job.
  • Informed – These are stakeholders who should be informed about your work and/or your progress.

These matrices are often used to facilitate the repair of organizational dysfunction, but they are also useful simply to clarify the roles & responsibilities of all of the stakeholders in your personal work ecosystem.

Once you’ve identified your primary customer, the simplest way to find out if they are happy with your work is… ask them. Face-to-face conversations are nice where possible. If you are lucky enough to be able to have a face-to-face conversation with your customers, try to adopt an attitude that is open to criticism. Don’t interrupt what they’re saying with explanations or excuses–even if you disagree with what they’re saying or if you think they’re wrong about something. There will be time for responding later. For now, your task is to listen and gather as much information as you can about their assumptions and agendas.

Some customers will be confrontation-averse so a face-to-face conversation may not yield honest results. As mentioned before, surveys might be useful. An anonymous comment box (or anonymized email account) could work. It’s on you to figure out how to mine the information.

What do I do with customer feedback once I get it?

It may be hard to get the information. It may also be hard to hear it. You should take some time to reflect on the feedback before you respond, especially if it’s negative. Try to distance yourself from any initial emotional reaction so that you can consider more than just what the feedback says about you and your work. What is the underlying agenda your customer is trying to achieve? Are you helping or hindering that agenda? Are they lacking any key pieces of information? Are there other easy solutions to their problems?

When your customer gives you feedback, the worst thing you can do is not respond to it. If you fail to respond to feedback–positive or negative–you send the message that you do not value it. If you don’t value the feedback, it shows you don’t value your customer. You should respond to feedback even if you’re not sure you can do anything to address their underlying complaint.

In our recent town hall, it was expressed that working on project teams instead of product teams isn’t ideal from the perspective of collective code-ownership. The response was “The Business doesn’t want to work that way.” We are not resourced to have product teams for the 50 or so applications we manage. Business priorities shift and sometimes require all of our resources to concentrate on a few applications at a time. This wasn’t happy news for our development teams, but it is understandable. My point is even if you can’t do anything to address the critical feedback due to issues beyond your control, you can respond to the feedback by explaining the other constraints to your customer.

The response to the town hall was overwhelmingly positive. Sometimes, hearing a good reason why it’s hard to accommodate their needs is enough to reduce your customer’s frustration.

Development Organization Town Hall

This week at Redacted Financial we had a “town hall” style meeting that included the entire software engineering organization. The meeting was run lean coffee style. No work-related topic was off limits. Our director made himself available for 2 hours to answer any questions about why our organization is run the way it is.

We discussed:

  • Why we have a separate project management group.
  • The rationale behind our work-from-home policy.
  • Why we tend toward project teams rather than product teams.
  • How we can address technical debt & product direction using project teams.
  • Why our teams are sized the way they are.
  • Maintenance and dissemination of current best practices, standards, and guidelines for different kinds of software.

This meeting is similar to a retrospective at the end of a project or sprint. It is different in that there are things that we can’t change because they are constraints imposed on us by the business, but understanding where those lines are and what we can change was valuable to everyone. One of the things that we are changing is our work-from-home policy. Those who advocated for a more liberal policy are involved in a small team working with the Director to write up a new policy.

Our Director offered to have a version of this meeting annually. The team immediately objected “We should have this meeting quarterly!” So we are. We’re going to allocate 1hr per quarter for this style of meeting.

The value-add as I see it is that the team can be involved in changing how their organization works within the larger enterprise. Where there are barriers, the team can be informed about what they are and why they’re there. For those who feel strongly that a given barrier shouldn’t be there, they can direct their attention to problem-solving how to remove the barrier while still accomplishing the business objectives that caused the barrier to be erected in the first place. A side-effect of empowering team members to make change is that it reduces the amount of work management has to do to identify opportunities for change and push them through.

The town hall meeting was a very positive experience for our team. I wonder if anyone else has done anything similar and what their results have been.

Agile Antipattern: Myopic Planning

I recently came across a tweet linking to an article by Michael O. Church criticizing Agile and Scrum.

Read the entire article. It’s fascinating. Here are some choice quotes:

work gets atomized into “user stories” and “iterations” that often strip a sense of accomplishment
from the work, as well as any hope of setting a long-term vision for where things are going.

Instead of working on actual, long-term projects that a person could get excited about, they’re
relegated to working on atomized, feature-level “user stories” and often disallowed to work on
improvements that can’t be related to short-term, immediate business needs (often delivered from
on-high).

the sorts of projects that programmers want to take on, once they master the basics of the craft,
are often ignored, because it’s annoying to justify them in terms of short-term business value.
Technical excellence matters, but it’s painful to try to sell people on this fact if they want,
badly, not to be convinced of it.

Under Agile, technical debt piles up and is not addressed because the business people calling the
shots will not see a problem until it’s far too late or, at least, too expensive to fix it.
Moreover, individual engineers are rewarded or punished solely based on the optics of the current
two-week “sprint”, meaning that no one looks out five “sprints” ahead. Agile is just one mindless,
near-sighted “sprint” after another: no progress, no improvement, just ticket after ticket after ticket.

I can summarize my own views on Scrum by saying “Scrum is a process that teaches a team how not to need Scrum.” I’ll likely expand on that idea later, but it is not the focus of this post.

Short Term Planning

A common theme running through many of Church’s criticisms is the failure to plan long-term. Larger engineering tasks do not get done because they cannot fit into an arbitrarily short time-frame. Technical debt gets ignored. Software rots and it is regarded as normal.

Church places the blame on Scrum and appears to argue that engineers should be in charge of project planning so that they can take on the larger, longer-term, more expensive initiatives that will ultimately save the company money.

Checks & Balances

We’ve tried that. That was the default state of the industry since its inception. Software engineering was akin to black magic to most people, but to others it was just grunt work. (I once had a supervisor say to me in all seriousness, “I had a class in C in college. A loop is a loop, right? Anybody can do this.”) Software engineers would spend their time building systems and frameworks for features that were ultimately not needed. (They still do. Watch this talk by Christin Gorman for a real-world example in a modern project.) This tendency of engineers to “gold-plate” their code or work on “what’s cool” resulted in huge expense overruns for software that was delivered late, over budget, and often did not do what was expected of it.

In an attempt to get some kind of control, The Business started taking more direct oversight of software projects. Since they could not trust the engineers to stay focused on delivering business value, they ruthlessly began cutting anything that did not directly contribute to features they could see. This was a problem because they often did not understand the tradeoffs they were making, which resulted in software that did what it was supposed to in v1 but became harder and harder to maintain over time due to poor engineering.

Agile/Scrum attempts to address this problem by creating a clear separation of responsibilities. The Business is responsible for feature definition and prioritization. Development is responsible for estimation and implementation. The Business decides what is built and when but Development is in control of how. Instead of trying to teach The Business software engineering, Development communicates about alternatives in terms of estimates and risks.

Development: If you choose strategy ‘A’, we estimate it will take this long with these risks.
If you choose strategy ‘B’, it will take longer, but have fewer risks.

Commitment

In order for this division of labor to be successful, each party must commit to it. Introducing a Scrum (or any other) process to a team does not make the team agile. Just because the engineers practice TDD, Continuous Integration, Continuous Deployment, and other good engineering practices does not mean the team is agile.

The Business is part of the team. It is the Business’ responsibility to do the long-term planning. It is the engineer’s responsibility to communicate about estimates and risk. If either party fails to do their part, it is not a failure of “Agile,” but a failure of the people involved.

You might be tempted to accuse me of the “No True Scotsman” fallacy at this point. If so, then you are missing my point. The Agile Software movement is focused on a philosophy for interacting with The Business which values the contributions of all stakeholders and encourages trust. It is not a prescription for particular processes. No process will succeed if the stakeholders do not buy into the underlying philosophy.

Agile is about the people. It’s right there in the manifesto.

Individuals and interactions over processes and tools

An observant critic might respond at this point by pointing to another piece of the manifesto:

Responding to change over following a plan

Isn’t this an endorsement of short-term thinking? No. At the time the manifesto was written, it was much more likely that software would be delivered in months- or years-long iterations. This created a problem: business circumstances would change in ways that diminished the value of planned features before they were delivered. It is not an injunction against planning per se. I call your attention to the last sentence in the manifesto:

That is, while there is value in the items on the right, we value the items on the left more.

To adapt a popular adage, “Plan long-term, work short-term.”

Communication

I don’t know many software engineers who entered the profession because they wanted to work with people. However, working with people is an absolute must in just about every profession. If we want the business to think long-term and make solid engineering choices we must learn to communicate with them.

The Responsibility of Communication is bi-directional. We must learn to communicate about estimates and risks. The Business must learn to consider risks as well as the cost. The Business should learn engineering concepts at a high-level (e.g., we can scale better if we use messaging).

It’s my observation that there often is a long-term plan but it is not necessarily communicated. As engineers, we must endeavor to find out what it is. We must be honest when we think we are being pushed to make an engineering mistake. If we feel strongly about an option, we must become salesmen and sell our perspective.

How do I Communicate with The Business?

I once had a product-owner come to me and ask for a feature. My team estimated 2 weeks to implement and test the new feature. He needed the feature in 3 days. He had some technical knowledge and offered a solution that might get the feature done faster. His solution would work, but it involved a lot of bad practices. We reached an agreement in which we would deliver the solution in 3 days using the hacky solution, but our next project would be to implement the feature correctly.

We were able to have this conversation because my team and I had a track record of delivering what was asked for in a working, bug-free state on a consistent basis. In other words, we had trust. It took some time to build that relationship of trust, but not as much as you might think. When I’m communicating with The Business about engineering alternatives, I make sure I answer 3 questions.

  1. What problem are we trying to solve?
  2. How long will each alternative take to deliver?
  3. What are the risks associated with each alternative?

I make sure that The Business and I are in agreement on the answers to all 3 of these questions. Then I accept their decision.

What if They are Still Myopic?

If you’re sure you’ve done everything you can to communicate clearly about short vs. long-term options and risks and if The Business insists on always taking the short-term high-risk solution, then you might be forced to conclude that you don’t like working for that particular business. It might be time to move on.

I hate saying it, but most of the companies I worked for in the early part of my career are places I would not go back to. I started my career in Greenville, SC. The first company I truly enjoyed working for was in Washington DC. I ended up in Seattle, WA where I found a company that does embrace most of my values. Do we have problems? Absolutely. However, with reason and communication as our tools, we are addressing them.

Simple Programmer Blogging Course

I’ve had a blog for years but my blogging frequency is intermittent. I wanted to see what a successful blogger would say, so I took John Sonmez’ Simple Programmer Blogging Course. I can’t say that there were any lightning-bolt insights. If I had to boil the course down to four words, it would be “get off your ass,” along with some helpful tricks to overcome common roadblocks to getting that next blog post written.

Two of my biggest bottlenecks have been trying to figure out what to write about, and perfectionism. “Did I cover everything relevant to [topic]? Did I correctly render the forest and the trees? Am I certain about the advice I gave?” Sonmez’ course gave me techniques to deal with these bottlenecks for which I’m grateful.

If you are at all interested in blogging but a) are not sure how to get started, or b) have felt stalled, I would recommend that you take the course. You’ve got nothing to lose–it’s free!

The Normalization of Deviance

The Normalization of Deviance is a concept that

describes a situation in which an unacceptable practice has gone on for so long without 
a serious problem or disaster that this deviant practice actually becomes the 
accepted way of doing things. 

[Image credit: the Challenger explosion]

You’re familiar with this concept already. You’ve encountered it every time you’ve seen someone doing something they know to be wrong but justifying it with “we’ve never had a problem before.” This is the person who consistently buys things s/he can’t afford with credit cards. This is the person who goes out drinking and drives home. This is the software engineer who writes code without tests. This is the business that, through continual deferment, ignores the technical debt issues raised by its engineers.

As software engineers, we know we should remove dead code in projects. We know that we should automate software deployment. We know we should provide reliable automated tests for our features. We know we should build and test our software on a machine other than our personal dev box. We know that we should test our software in a prod-like environment that is not prod.

Do you do these things in your daily work? Does your organization support your efforts?

How do you identify the Normalization of Deviance?

One of the challenges you will face identifying Normalization of Deviance is the fact that things you do on a daily basis are… normal.

There are several “smells” that could indicate that your organization is having problems.

  • You have “official” policies that do not describe how you actually do work.
  • You have automation that routinely fails and requires handholding to reach success.
  • You have unit or integration tests that fail constantly or intermittently such that your team
    does not regard their failure as a “real” problem.
  • You have difficult personalities in key positions who turn conversations about their effectiveness
    into conversations about your communication style.

All of these issues are “of a kind,” meaning that they are all examples of routinely accepted failure. This is obviously not an exhaustive list.

Why is it a problem?

You will eventually have a catastrophic failure. Catastrophic failures seldom occur in a vacuum. Usually there are a host of seemingly unrelated smaller problems that are part of daily life. Catastrophic failures usually occur when the stars align and the smaller issues coalesce in such a way that some threat vector is allowed to completely wreck a process. This is known as the Swiss Cheese Model of failure. I first learned about the Swiss Cheese Model in a book called The Digital Doctor, which is a holistic view of the positive and negative effects of software in the medical world. This book is well worth reading. The section on the deadly consequences of “alert fatigue” would be of special interest to software engineers and UX designers.

A Real World Example

A fascinating case of a software failure that destroyed a company overnight is the story of Knight Capital. In a writeup, Doug Seven lays the blame on the lack of an automated deployment system. I agree, though I think the problems started much, much earlier. Doug writes:

The update to SMARS was intended to replace old, unused code referred to as “Power Peg” – 
functionality that Knight hadn’t used in 8-years (why code that had been dead for 
8-years was still present in the code base is a mystery, but that’s not the point). 

It’s my point. The Knight Capital failure began 8 years earlier, when dead code was left in the system. I’ve had conversations with The Business where I’ve tried to justify removing dead code. It’s hard to make them understand the danger. “It’s not hurting anything, is it? It hasn’t been a problem so far, has it?” No, but it will be.

The second nail in Knight Capital’s coffin was that they chose to repurpose an old flag that had been used to activate the old functionality. As Doug Seven writes:

The code was thoroughly tested and proven to work correctly and reliably.
What could possibly go wrong?

Indeed.

The final nail is that Knight Capital used a manual deployment process. They were to deploy the new software to 8 servers. The deployment technician missed one. I don’t actually know this, but I can just imagine the technician staying after-hours to do the upgrade and wanting nothing more than to go home to his/her family or happy-hour or something.

At 9:30 AM Eastern Time on August 1, 2012 the markets opened and Knight began processing 
orders from broker-dealers on behalf of their customers for the new Retail Liquidity 
Program. The seven (7) servers that had the correct SMARS deployment began processing 
these orders correctly. Orders sent to the eighth server triggered the supposedly 
repurposed flag and brought back from the dead the old Power Peg code.

There were more issues during their attempt to fix the problem, but none of it would have happened if these 3 seemingly minor problems had not coalesced into a perfect storm of failure. The end result?

Knight Capital Group realized a $460 million loss in 45-minutes... Knight only 
has $365 million in cash and equivalents. In 45-minutes Knight went from being the 
largest trader in US equities and a major market maker in the NYSE and NASDAQ to bankrupt.

How do you fix it?

I’m still figuring that out. Luckily, Redacted Inc doesn’t have too many of these sorts of problems, so my opportunities for practice are few, but here are my thoughts so far.

The biggest challenge in these scenarios is that people are so used to accepting annoying or unreliable processes as normal that they cease to see them as daily failure. It’s not until after disaster has struck that it’s clear that accepted processes were in fact failures. Nobody at Knight Capital was thinking “jeez, that dead code is really hurting us.”

There’s an old management adage: “You can’t manage what you can’t measure.” You can start addressing Normalization of Deviance issues by identifying risky patterns and practices that your organization uses in its daily standard operating procedure. If you can find a way, assign a cost to them. Consider ways in which these normal failures could align to cause catastrophe. If you have a sympathetic ear in management, start talking to them about this. Introduce your manager to these concepts. Tell them about Knight Capital. Your goal is to get management and The Business to see the failures for what they are. By measuring the risks and costs to your organization of accepted failure, you will have an easier time getting your voice heard.

Most importantly, come up with a plan to address the issues. It’s not enough to say “this is a problem.” You need to say “This is a problem and here are some solutions.” Go further still: show how you’d get your team from “here” to “there.” Try to design solutions that make the day-to-day work easier, not harder. Jeff Atwood calls this the Pit of Success. His blog scopes this concept to code, but it applies to processes as well. You want your team to “fall into” the right way to do things.

Another potential source of useful feedback is new members of your team. It may be hard to get them to open up for fear of crossing the wrong person, but they are new to your organization and will see more clearly the things that look like failures waiting to happen. Nothing you are doing is normal to them yet.

Dependency Injection Patterns

Motivation

I’ve seen several different approaches to Dependency Injection, each of which has its own strengths and weaknesses. I run an internship program in which I teach these patterns to college students. I believe each pattern and anti-pattern has its pros and cons, but my observation is that even experienced devs haven’t fully articulated the cost-benefit of each approach to themselves. This is my attempt to do that.

Property Injection

“Property Injection” or “Setter Injection” refers to the process of assigning dependencies to an object through a Property or Setter method.

Example

public class Widget
{
    public Bar Bar { get; set; }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

There are pros and cons to this approach.

Pros

  • Enables easy faking for test purposes.
  • Keeps method signatures tidy.
  • Dependencies are clearly specified if you are looking at the class.

Cons

  • Temporal Dependency

What is a temporal dependency? Perhaps it’s best to illustrate with an example. Here is my first attempt at using the class as defined above.

    var widget = new Widget();
    widget.Foo("Hello World!");

Did you spot the problem? I never set Bar. When I attempt to call Foo I’ll get the oh-so-helpful NullReferenceException. I find out after I try to use the class that I have a missing dependency. I have to open the class to find out which dependency is missing. Then I have to modify my code as follows:

var widget = new Widget();
widget.Bar = new Bar();
widget.Foo("Hello World!");

It’s called a temporal dependency because I have to set it before I can call any methods on the class. What’s worse, the API of the class doesn’t give me any indication that anything is wrong until after I attempt to run it.

Method Injection

“Method Injection” refers to passing dependencies to the method that uses them.

Example

public class Widget
{
    public void Foo(Bar bar, string someValue)
    {
        // snipped
    }
}

Pros

  • No temporal dependencies
  • Dependencies are clearly communicated via the API
  • Easily replace dependencies with fakes for testing purposes

Cons

  • Method signature explosion
  • Method signature fragility
  • Clients have to concern themselves with the class’s dependencies

What are Method Signature Explosion and Fragility? Method Signature Explosion means that the number of arguments in my method signatures grows as dependencies change. This leads to Method Signature Fragility, which means that as dependencies change, clients of the method have to change as well. In other words, we lose the benefit of encapsulated logic.
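
To make that concrete, here is a hypothetical before-and-after in which Widget grows a second dependency (Baz is invented purely for this illustration):

public class Baz { /* hypothetical new dependency */ }

public class Widget
{
    // Before: public void Foo(Bar bar, string someValue)
    // After the new dependency arrives, the signature grows:
    public void Foo(Bar bar, Baz baz, string someValue)
    {
        // snipped
    }
}

// Every existing caller now breaks and must be updated:
// widget.Foo(bar, "Hello World!");        // no longer compiles
// widget.Foo(bar, baz, "Hello World!");   // required change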

Constructor Injection

Constructor Injection is the process of making dependencies available to a class through its constructor.

Example

public class Widget
{
    private Bar _bar;

    public Widget(Bar bar)
    {
        _bar = bar;
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}

Pros

  • Enables easy faking for test purposes.
  • Keeps method signatures tidy.
  • Dependencies are clearly specified through the API
  • No temporal dependencies
  • No Method Signature Explosion
  • No Method Signature Fragility

Cons

  • none – other than those inherent to the nature of using Dependency Injection in the first place.

Of the three approaches listed so far, I strongly prefer Constructor Injection. I see nothing but benefits in this approach.
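
As a quick illustration of the testing benefit: Bar isn’t defined in the snippets above, so assume it exposes a virtual SomeMethod (with an IBar interface the idea is identical). A test can then hand the constructor a fake:

public class Bar
{
    public virtual void SomeMethod(string someValue) { /* real work */ }
}

public class FakeBar : Bar
{
    public string LastValue { get; private set; }

    public override void SomeMethod(string someValue) => LastValue = someValue;
}

// In a test, the fake is supplied straight through the constructor:
// var fake = new FakeBar();
// var widget = new Widget(fake);
// widget.Foo("Hello World!");
// ...assert that fake.LastValue == "Hello World!"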

Lazy Injection

If you’re getting started with Dependency Injection, I strongly recommend researching a Dependency Injection Framework such as Ninject to make constructing your class hierarchies easy. If you’re not ready to bite that off, you might consider using Lazy Injection. This is a technique by which your constructor arguments are given default values so that you can instantiate your class with all of your default system values at run time, but pass fakes at test time.

Example

public class Widget
{
    private Bar _bar;

    // C# default parameter values must be compile-time constants, so the
    // "default" dependency is supplied via null-coalescing instead.
    public Widget(Bar bar = null)
    {
        _bar = bar ?? new Bar();
    }

    public void Foo(string someValue)
    {
        _bar.SomeMethod(someValue);
    }
}

You can do this with Property Injection as well, mitigating some of the cons of that approach. However, you are still left opening the class to figure out how and what to fake.

Example

public class Widget
{
    public Bar Bar { get; set; }

    public Widget()
    {
        Bar = new Bar();
    }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

Service Locator

Service Locator is widely considered to be an anti-pattern. To understand why, read “ServiceLocator is an Anti-Pattern”.
Service Locator involves making an open-ended registry of dependencies widely available to any class that wants them.

Example

public class Widget
{
    public Bar Bar { get; private set; }

    public Widget()
    {
        Bar = ServiceLocator.Get<Bar>();
    }

    public void Foo(string someValue)
    {
        Bar.SomeMethod(someValue);
    }
}

On the surface this looks awesome.

Pros

  • Keeps method signatures tidy.
  • No temporal dependencies
  • No Method Signature Explosion
  • No Method Signature Fragility

Cons

  • Dependencies are not clearly specified through the API
  • Because my API doesn’t communicate my dependencies, I have to understand the class’s implementation details to properly test it.
  • It encourages dependency explosion inside the class. This is another way of saying that a class with too many constructor arguments is a “smell” and I lose the benefit of being confronted with that “smell” if I use ServiceLocator.

Despite these flaws, it is sometimes useful as a scaffolding mechanism to introduce a Dependency Injection Framework into an application that was not designed with Dependency Injection in mind. I want to stress that I believe this pattern should only be used as an interim step on the road to full Dependency Injection Framework support in legacy applications. You should make it a point to remove ServiceLocator as quickly as possible after introducing it.

Closing Thoughts

These patterns are by no means exhaustive, but they are the common ones I’ve seen over the years.

If you are using a Dependency Injection Framework (my favorite is Ninject), some of the cons of these approaches may be mitigated by the Framework. This may change the equation with respect to which method is appropriate for your use-case.
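
For the curious, here is roughly what the wiring looks like with Ninject; a minimal sketch, assuming the Widget and Bar classes from the Constructor Injection example and the Ninject NuGet package.

using Ninject;

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel();

        // Bind the concrete dependency to itself. (Ninject can also resolve concrete
        // types implicitly; with an IBar interface you would write Bind<IBar>().To<Bar>().)
        kernel.Bind<Bar>().ToSelf();

        // The kernel satisfies Widget's constructor for us, injecting Bar.
        var widget = kernel.Get<Widget>();
        widget.Foo("Hello World!");
    }
}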

Odin-Commands 0.2.1 Released: Before & After Execute Hooks

Today I released Odin-Commands 0.2.1 on nuget.org.

What’s New?

I was writing a CLI command when I realized it would be nice to be able to set default values for the Common Parameters on the command prior to executing the action. The difficulty is that some of the default parameter values are composed from other parameter values, but all of them are settable by the user. To achieve this goal I added overridable OnBeforeExecute and OnAfterExecute methods to the Command class.

How do I use it?

public class MyCommand : Command
{
    protected override void OnBeforeExecute(MethodInvocation invocation)
    {
        ApplyDefaultValues(); // or do other stuff prior to executing the invocation.
    }

    protected override int OnAfterExecute(MethodInvocation invocation, int result)
    {
        // You can return a different exit code if you need to.
        return base.OnAfterExecute(invocation, result);
    }
}

Odin 0.2 Released With a New Feature: Common Parameters

What are Common Parameters?

Common parameters are parameters that are repeated across multiple actions in a CLI context. For example, I might have a CLI that takes a --verbose flag to switch on verbose output for all of my actions. Rather than require that the CLI developer add a bool verbose switch to every action in his/her Command implementation, Odin allows you to declare a property on the Command implementation as a [Parameter].

Example

[Description("Provides search capabilities for books.")]
 public class SearchCommand : Command
 {
     [Parameter]
     [Alias("s")]
     [Description("The order to sort the results of the search.")]
     public SortBooksBy SortBy { get; set; }

     [Action]
     [Description("Searches books by author.")]
     public void Author(
         [Alias("a")]
         [Description("The author of the book being searched for.")]
         string author)
     {
         Logger.Info("Find books by author '{0}'; sort results by '{1}'.\n", author, SortBy);
     }

     [Action(IsDefault = true)]
     [Description("Searches books by title.")]
     public void Title(
         [Alias("t")]
         [Description("The title of the book being searched for.")]
         string title)
     {
         Logger.Info("Find books by title '{0}'; sort results by '{1}'.\n", title, SortBy);
     }

     [Action]
     [Description("Searches books by release date.")]
     public void ReleaseDate(
         [Alias("f")]
         [Description("The release date from which to search.")]
         DateTime? from = null,

         [Alias("t")]
         [Description("The release date to which to search.")]
         DateTime? to = null)
     {
         Logger.Info("Find books by release date: '{0}' - '{1}'; sort results by '{2}'.\n", from, to, SortBy);
     }
 }

In the above example, SortBy is available to all of the actions defined on the Command. It can be set at the command-line by passing the --sort-by <value> switch. Odin will parse and set the switch before executing the action.

This functionality is now available in version 0.2 of the Odin-Commands nuget package.
