Monthly Archives: October 2009
Agile and Concentration

Last night I was reading NoodleFood on the Complexity of the Conceptual Mind, where Diana reposts a blog post about problems of concentration. The original author starts by giving an example of how a seemingly simple task can spiral into a long sequence of related tasks; the end result of this process is often a failure to accomplish the original task. The ability to focus on long-term and short-term goals at the same time is a key component of project success.

Joel Spolsky has a recent post entitled Capstone projects and time management. His article begins by discussing the failure of universities to prepare students for real-world software projects, and ends with this comment:

I’ve been blaming students, here, for lacking the discipline to do term-length projects throughout the term, instead of procrastinating, but of course, the problem is endemic among non-students as well. It’s taken me a while, but I finally learned that long-term deadlines (or no deadlines at all) just don’t work with professional programmers, either: you need a schedule of regular, frequent deliverables to be productive over the long term. The only reason the real world gets this right where all-student college teams fail is because in the real world there are managers, who can set deadlines, which a team of students who are all peers can’t pull off.

Regular, frequent deliverables? That sounds like the agile value of short iterations to me. What’s important here is that short iterations meet two separate needs of the human mind. The first is the need to see how what you’re doing fits into the larger project or task you’re trying to accomplish. The second is the need to concentrate on the task at hand.

TDD

I find this discussion of time-management to be very interesting. Since I’ve adopted agile programming methodologies, I’ve noticed an increase in my productivity. Some of the productivity gains stem from my increased mastery of good design practices. But I think the major factor is my focus on short deliverable iterations. Writing unit tests is a way of maintaining the context of what I need to get done. Often I’ll be writing a test for feature-A, which I’ll discover depends on feature-B, which I haven’t written yet. I can stop working on test-A and go write feature-B, building tests for feature-B as I go. When I’m done with feature-B, I still have the failing test for feature-A to remind me what remains to be done. On a day-to-day basis, the tests allow me to branch into sub-tasks at will without too much mental strain. TDD, in addition to its other virtues, is an aid to concentration.

Refactoring

Disciplined refactoring can also be an aid to concentration. In Martin Fowler’s Refactoring: Improving the Design of Existing Code, he discusses in detail many of the ways code can be changed. He starts with examples as simple as renaming a variable. Over the course of the examples, it becomes clear that larger refactorings are composed of smaller ones. The main takeaway for me was that in order to be successful with large refactorings, I have to break the task into smaller ones and complete each small refactoring before moving on to the next one.

For example, let’s say I need to split a class into two smaller classes. I could introduce the new class and immediately move all the members from class A that I think belong in class B. If I do this, I’ll immediately break large pieces of the system. I’ll have to deal with all the various compile errors and redirection points at once, and all before I’ve had a chance to verify that this is really what I want to do. On the other hand, I could create class B and begin by exposing it as a property of A. I could then move one property or method at a time from A to B, correcting the compile errors as I went. When I have A and B looking the way I want, I can simply remove the property from A, completing the separation.
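Here’s a minimal sketch of what that looks like in practice (the class and member names are invented for illustration, not from a real project):

public class Invoice          // the original "class A"
{
    private readonly ShippingInfo _shipping = new ShippingInfo();

    // Step 1: expose the new class as a property of A.
    public ShippingInfo Shipping
    {
        get { return _shipping; }
    }

    // Step 2: move members one at a time, keeping existing callers compiling
    // by delegating to B until each call site can be updated.
    public string ShippingAddress
    {
        get { return Shipping.Address; }
        set { Shipping.Address = value; }
    }
}

public class ShippingInfo     // the new "class B"
{
    public string Address { get; set; }
}

Once every caller uses Shipping.Address directly, the delegating ShippingAddress property (and, if appropriate, the Shipping property itself) can be deleted, completing the separation.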

Again, at any time during the refactoring it is only necessary to be attentive to a few issues at once.

Long Term Tasks

Not every task can be finished in a few hours. For longer-term tasks I have to break my work into milestones that are meaningful to me. I’ve started using an online tool called AgileZen to keep track of these things. I discovered AgileZen in a recent .NET Rocks episode. I’ve been using it for a while now and find it to be exactly what I need to keep track of the larger details of my work.

Conclusion

To run a successful project you must break long-term goals into shorter-term goals. This is for the same reason that a successful architecture requires that you break your project into smaller components, components into smaller classes, and classes into smaller functions. Software development is like any other task in life. You have to adapt your development and task-tracking methods to facilitate success. There’s only so much you can hold in your mind at once, but agile development methods allow you to tend the trees as you grow the forest.

Ninite

While waiting on VM installations of Ubuntu and Debian Linux, I discovered ninite.com via Gizmodo. Ninite will bulk-install a range of different free apps on Windows systems. As I’ve just upgraded my laptop to Windows 7, I’m happy to find this time saver!

The way it works

On the main page, you select the apps you wish to install, and Ninite builds a custom installer based on your selections. The installer is a file that you download, so it is transferable to other machines. If you have other free products you’d like to see as part of the Ninite installer, you can recommend them to the site.

When I first tried executing the custom installer on Windows 7, I got an unspecified error. Running the installer as Administrator seems to have resolved that issue. In addition, Ninite was smart enough to detect which software was already installed.

My choices

Chrome Browser

Windows Messenger and Google Talk

iTunes and Hulu Desktop

Paint.NET and Picasa

Flash Player for IE and other browsers and Silverlight

Google Earth

7-Zip

Notepad++

I took the opportunity to suggest Kantaris as another media player, and Virtual Clone Drive for DVD image software. I used to use Daemon Tools Lite, but I haven’t been able to get it to work in 64-bit Windows 7.

Original Gizmodo link: http://gizmodo.com/5388680/ninite-helps-you-upgrade-to-windows-7-by-installing-up-to-58-great-apps-at-once

MVVM and Modal Dialogs

The best thing I learned this week was a good way to use MVVM in connection with modal dialogs. I spent a good deal of time scouring the internet for a good answer to this problem. Most of the answers I found involved a lot of code to get up and running. The issue I have with those kinds of solutions is that if I integrate them into a production project, I’m responsible for supporting them whether I understand all of the code involved or not. I’m sure that over time I’ll acquire the mastery of WPF needed to create those kinds of solutions, but I don’t have that level of mastery right now.

The aha moment came when I read this thread: http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/0befde65-27e0-43ab-bd9f-6b1df38b7ab3 If you scroll down to the Sergey Pikhulya post (October 8, 2009, 6:42 PM) you’ll find the post I mean. Sergey suggested creating an IModalViewService interface and injecting an implementation that relies specifically on WPF windows at run time. In this manner, you can create dummy implementations of IModalViewService for unit-testing your ViewModel.

Honestly, I felt kind of stupid for not thinking of this before. My application already supports injection of configured data services at run time. This is really exactly the same problem with the same solution—just on the presentation/UI layer instead of the data layer.
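Here’s a minimal sketch of the idea (the member names and the MessageBox-based implementation are my own guesses at the shape of such a service, not Sergey’s actual code):

// The ViewModel depends only on this interface.
public interface IModalViewService
{
    bool ConfirmOkCancel(string message, string caption);
}

// Injected at run time; the only piece that touches WPF directly.
public class WpfModalViewService : IModalViewService
{
    public bool ConfirmOkCancel(string message, string caption)
    {
        var result = System.Windows.MessageBox.Show(
            message, caption, System.Windows.MessageBoxButton.OKCancel);
        return result == System.Windows.MessageBoxResult.OK;
    }
}

// Injected in unit tests; records the prompt and returns a canned answer.
public class FakeModalViewService : IModalViewService
{
    public bool AnswerToReturn { get; set; }
    public string LastMessage { get; private set; }

    public bool ConfirmOkCancel(string message, string caption)
    {
        LastMessage = message;
        return AnswerToReturn;
    }
}

The ViewModel takes an IModalViewService through its constructor, so the tests never have to pop a real window.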

I thought of a few other solutions to the problem as well.

  1. Use events. Create an event for each kind of visual request that needs to be made to the user.
    1. Pros: Easy.
    2. Cons: You have to create an event for each kind of prompt (message, OK/Cancel, input).
      1. Doesn’t easily support specialized data templates for specialized dialogs.
      2. Leads to an explosion of code as you add new kinds of EventArgs sub-classes.
      3. You have to write a lot of glue code to make the interaction between the ViewModel and the XAML happen.
  2. Create command properties on the ViewModel and assign them in the XAML.
    1. Pros: Easy.
      1. You gain good separation because the ViewModel doesn’t need to know how to execute the command, and the use of the dialog is encapsulated in the right layer.
    2. Cons: You still have a testability problem. If the command logic is complex, you are putting a lot of untestable code in the UI layer.

Starting Point

This is my first post on a new blog. My goal with this blog is to continuously improve my skills by sharing what I’ve learned, and inviting constructive criticism from the developer community.

I contract full-time, developing line-of-business applications for a local manufacturing company. I’m currently focused on desktop applications for machine operators in the plants; these applications provide real-time process-flow information to decision makers. I also do some part-time work on nights and weekends.

I mostly enjoy working with the Microsoft tool stack, including Visual Studio and SQL Server. Recently, I’ve fallen in love with WPF. Over the last year I’ve become competent in WPF, WCF, Silverlight, LINQ to SQL, and Entity Framework.

Enum Considerations

I’ve been dealing with enums and databases for a while now. I think it’s something every LOB developer faces. Enums are developer-friendly because they provide context for a well-defined range of options in an easy-to-understand way. They do come with a couple of drawbacks, however.

The first is that they’re not friendly for users. That Pascal-cased identifier looks great to a developer, but users aren’t really used to reading “RequireNumericData” and making sense out of it. Since this post is supposed to be more about databases than UI, I’ll provide a quick snippet of code I use to get user-friendly descriptions of enum values:

// Requires System.ComponentModel (for DescriptionAttribute), System.Linq, and System.Reflection.
public static class EnumService
{
    public static string GetDescription(System.Type enumType, System.Enum fieldValue)
    {
        // Find the public static field on the enum type whose name matches the value.
        var enumFields = from element in enumType
                             .GetFields(BindingFlags.Public | BindingFlags.Static)
                         select element;
        var matchingFields = enumFields
            .Where(e => string.Compare(fieldValue.ToString(), e.Name) == 0);
        var field = matchingFields.FirstOrDefault();

        // Default description: split the Pascal-cased identifier and title-case it.
        string result = fieldValue.ToString()
                        .SplitCompoundTerm()
                        .ToTitleCase();

        if (field != null)
        {
            // A [Description] attribute, if present, overrides the generated text.
            var descriptionAttribute = field
                .GetCustomAttributes(typeof(DescriptionAttribute), true)
                .Cast<DescriptionAttribute>()
                .FirstOrDefault();

            if (descriptionAttribute != null)
                result = descriptionAttribute.Description;
        }

        return result;

    }

    public static string GetDescription<T>(T fieldValue)
    {
        var fv = fieldValue as System.Enum;
        var result = GetDescription(typeof (T), fv);
        return result;
    }
}
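For reference, here’s roughly how it gets called (the OrderStatus enum is just an illustration):

public enum OrderStatus
{
    RequireNumericData,

    [Description("Waiting on approval")]
    PendingApproval
}

// elsewhere...
var plain  = EnumService.GetDescription(OrderStatus.RequireNumericData); // "Require Numeric Data"
var custom = EnumService.GetDescription(OrderStatus.PendingApproval);    // "Waiting on approval"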

I refer to two extension methods on the string class, SplitCompoundTerm() and ToTitleCase(). Here is the code for them:

public static string SplitCompoundTerm(this string source)
{
    var chars = source.ToCharArray();

    // Collect the upper-case characters in order of appearance.
    var upperChars = from element in chars
                     where Char.IsUpper(element)
                     select element;

    // Queue them up so each can be matched as we walk the string.
    var queue = new Queue<char>(upperChars);

    var newTerm = new List<Char>();
    var lastChar = default(char);
    foreach (var element in chars)
    {
        if (queue.Count() > 0 && queue.Peek() == element)
        {
            if (lastChar != ' ')
                newTerm.Add(' ');
            newTerm.Add(queue.Dequeue());
        }
        else
        {
            newTerm.Add(element);
        }
        lastChar = element;
    }

    var newCharArray = newTerm.ToArray();
    var temp = new string(newCharArray);
    var result = temp.Trim();
    return result;
}

public static string ToTitleCase(this string source)
{
    var result = CultureInfo.InvariantCulture.TextInfo.ToTitleCase(source);
    return result;
}

SplitCompoundTerm() bothers me a bit. All it’s doing is looking for upper-cased characters in the middle of the term and inserting a space before them. I’m sure there’s a way to do that with regular expressions, but I haven’t gotten around to figuring that out yet.
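For what it’s worth, I suspect something along these lines would do it with a regular expression (untested on my part, and it treats runs of capitals like “ABCValue” differently from the loop above):

using System.Text.RegularExpressions;

public static class StringSplitExtensions   // hypothetical home for the extension
{
    // Insert a space before each upper-case letter that follows a lower-case
    // letter or digit, e.g. "RequireNumericData" -> "Require Numeric Data".
    public static string SplitCompoundTermRegex(this string source)
    {
        return Regex.Replace(source, "(?<=[a-z0-9])([A-Z])", " $1");
    }
}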

A second drawback is that you often have to decide how to store enum values in a database. Do you store the numeric value? The text? Do you create a table that contains both the numeric and text values? Would that be one table for each enum, or one master table for all enums with an additional “enum type” column? Storing the numeric value is nice because it’s easy, but it’s very hard to report on. If your enum values are bit flags, then a single numeric value is great for searching, but if you need to report on the data you really want text. Text creates its own issues, because you can’t really search for all records with a state of “Pending | Ready” using text. All of the developer’s problems are more easily solved by the numeric value, and all of the reporting problems are more easily solved by descriptive text.

I’m sure some of you won’t like what I’m about to propose, but here it is: I store both values. Every enum value becomes not one, but two fields on the record in the RDBMS. From a data-integrity standpoint, I can get away with this because I route all my data access calls through a well-defined data access layer; my apps never manipulate the RDBMS directly. While this approach solves my problems, it does come with the cost of increased data storage. This may or may not be an issue for you. It hasn’t been one for me so far.
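As a rough sketch of what I mean (table and column names invented for illustration, reusing the hypothetical OrderStatus enum from above), the data access layer writes both the numeric value and the friendly text every time an enum is saved:

// Hypothetical data-access code: one enum, two columns.
// Requires System.Data.SqlClient.
public void SaveOrderStatus(SqlConnection connection, int orderId, OrderStatus status)
{
    const string sql =
        "UPDATE Orders " +
        "SET StatusValue = @value, StatusText = @text " +   // numeric for searching, text for reporting
        "WHERE OrderId = @id";

    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@value", (int)status);
        command.Parameters.AddWithValue("@text", EnumService.GetDescription(status));
        command.Parameters.AddWithValue("@id", orderId);
        command.ExecuteNonQuery();
    }
}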

Happy Coding!