Quick Update

I’ve been Internet-less for a few days and it’s been killing me. Internet is like coffee—it makes the world go round!

I’ve made a few updates to Yodelay that I wanted to tell you about.

First, I added an ASP.NET MVC project. I wanted to see if I could use the MVVM pattern in ASP.NET MVC. I’m not completely happy with my implementation, and the UI is kind of rough, but it works. I’ll work on cleaning it up later.

Second, I added a library for non-attribute-based validation. My problem with validation frameworks that rely on attributes is that I don’t always have access to the code for the classes I need to create business rules for. The new library uses a fluent API to configure rules for classes and properties.

Example:

// When Id is greater than zero, the Name property must not be null.
ConfigureRules.ForType<BusinessObject>()
    .If(e => e.Id > 0)
        .Property(e => e.Name)
        .Must.Not().BeNull();

// Enforce the configured rules against an instance.
var testObject = new BusinessObject { Id = 1, Name = "Testing" };
Rules.Enforce(testObject);


Third, I added some extension methods for the Range class which allow the developer to test for adjacent, intersecting, and disjoint ranges. Further, the range API will now find gaps in lists of ranges.
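Here’s a rough sketch of the idea (note: the type and method names below are illustrative stand-ins, not necessarily the actual Yodelay API):

using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in; not necessarily the actual Yodelay Range type.
public class Range
{
    public int Start { get; set; }
    public int End { get; set; }
}

public static class RangeExtensions
{
    // Two inclusive ranges intersect when each begins before the other ends.
    public static bool Intersects(this Range a, Range b)
    {
        return a.Start <= b.End && b.Start <= a.End;
    }

    // Two ranges are adjacent when one ends exactly where the other begins.
    public static bool IsAdjacentTo(this Range a, Range b)
    {
        return a.End + 1 == b.Start || b.End + 1 == a.Start;
    }

    // Yields the gaps in a list of ranges, e.g. [1-3, 7-9] yields [4-6].
    public static IEnumerable<Range> FindGaps(this IEnumerable<Range> ranges)
    {
        int? coveredThrough = null;
        foreach (var current in ranges.OrderBy(r => r.Start))
        {
            if (coveredThrough.HasValue && current.Start > coveredThrough.Value + 1)
                yield return new Range { Start = coveredThrough.Value + 1, End = current.Start - 1 };
            if (!coveredThrough.HasValue || current.End > coveredThrough.Value)
                coveredThrough = current.End;
        }
    }
}

With something along these lines, disjointness falls out for free as the negation of Intersects.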

Finally, I removed the assembly signing. When I added the key files before, I password-protected them, which makes it hard for people who download the source code to compile it. I’ve removed all assembly signing for the short term. When I’m ready to build an installation package, I’ll re-sign the files without password protection.

Moving to the DC Metro Area

My girlfriend Emily was recently accepted into a graduate program at Marymount University in Arlington, VA. Through the efforts of a diligent recruiter at Apex Systems, I have accepted a position at USIS in the Vienna, VA area. This will be a big change for me. Not only will it mark the first time I have lived away from upstate South Carolina, it will also be the first time I’ve lived and worked in a metropolitan area the size of DC.

We have found an apartment in Annandale which is merely outrageously expensive (or in DC-area terms, "cheap"). I am looking forward to attending the Capital Area .NET developers group, as well as touring the homes of some of the nation’s founding fathers. Once I finish my degree, I intend to begin taking guitar lessons and finding ways to be useful to the Ayn Rand Center.

My new job will most likely focus on web development. I’m looking forward to dusting off my ASP.NET chops and giving them a much-needed upgrade. I’m also hoping to do more experimentation with Silverlight and OData. My primary hobby interest at the moment is fluent APIs. I’m getting much better at defining the API I’d like to have (Yodelay has a recent example of that; see the “Require.That… API” in the Validation namespace), but I still lack the skills needed to bring the more complex APIs into implementation. Perhaps I’ll meet some devs in DC who can help me with that.

Yodelay Updated

Yodelay has been updated. I’ve modified the Regular Expression tester so that it supports any number of test contexts with a single expression. In addition, you can now search regexlib.com for a useful expression.

This screen is fully implemented using MVVM. The search window serves as an example of using MVVM with dialogs.
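In case it’s useful, here’s one common shape for MVVM-friendly dialogs, sketched with hypothetical names (this is the general pattern, not necessarily Yodelay’s exact implementation): the view model depends on a small dialog-service abstraction rather than on any window type, which keeps it unit-testable.

// Hypothetical sketch of the dialog-service idea; names are illustrative.
public interface IDialogService
{
    bool? ShowDialog(object viewModel);
}

public class SearchResultsViewModel { }

public class RegexSearchViewModel
{
    private readonly IDialogService _dialogs;

    public RegexSearchViewModel(IDialogService dialogs)
    {
        _dialogs = dialogs;
    }

    public void ShowSearchWindow()
    {
        // The view model never references a view class; it hands a child
        // view model to the service and lets the service locate the view.
        var searchViewModel = new SearchResultsViewModel();
        _dialogs.ShowDialog(searchViewModel);
    }
}

In tests, a fake IDialogService can record the request and return a canned result, so the dialog flow can be verified without any UI.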

I’ve also pulled in the themes from WpfThemes and integrated the ThemeManager.

Check it out!


Announcing the Yodelay .NET Framework Extensions Project

I have created an open-source project on CodePlex called “Yodelay”. From the project description:

The Yodelay .NET Framework Extensions project provides a library of components that make many kinds of programming tasks simpler. These include basic MVVM components, Unit Test Extensions, and a poor-man’s Dependency Injection library.
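For anyone unfamiliar with the term, “poor-man’s Dependency Injection” usually means constructor injection without a container: a default constructor wires up the concrete types, while an overload accepts abstractions. A minimal sketch of the pattern (hypothetical names, not Yodelay’s actual code):

public interface IMessageSender
{
    void Send(string message);
}

public class SmtpMessageSender : IMessageSender
{
    public void Send(string message)
    {
        // ...send the message via SMTP...
    }
}

public class OrderNotifier
{
    private readonly IMessageSender _sender;

    // Production code uses the default wiring...
    public OrderNotifier() : this(new SmtpMessageSender()) { }

    // ...while unit tests supply a fake through this constructor.
    public OrderNotifier(IMessageSender sender)
    {
        _sender = sender;
    }

    public void NotifyShipped(string orderId)
    {
        _sender.Send("Order " + orderId + " has shipped.");
    }
}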

Most of the classes and extension methods in this library are re-implementations or adaptations of components I have written in other places. Some of them have been found in various forms on developer blogs. Where this is true, I have indicated in the comments where I retrieved the original source code. This library is not just a collection of random classes. The attempt here is to bring all of these components together and use them in an integrated fashion to develop stable applications faster.

Not all code in this project is currently covered by unit tests. This is because many of the classes were adapted from blogs or other existing projects and may not have been developed using TDD. Every effort will be made to bring existing classes under test coverage, and to practice TDD when adding new code.

Why did I call it Yodelay? For no other reason than I like the way the word sounds. It rolls off the tongue easily, which is the effect I hope the library has on applications.

There is no installer yet. Yodelay is in version 0.1, but there are already quite a few goodies in the library. Everything is built against the 4.0 framework. I will try to work on it a little each week, adding new components and demo projects when I can. In the near future I will be signing the assemblies and building an installer for them.

Enjoy!

Design Considerations of Public Functions

Writing a function is an act of design. You must choose the placement (both physical and logical), signature, and name of the function carefully. Most people understand vaguely that functions should have minimal scope, but I’d like to draw attention to a couple of design implications of public functions that I haven’t heard discussed much. These features are “permission” and “precedent.” Simply by existing, a function indicates that it is okay to call it, and it establishes a precedent for the choices that shaped it.

Permission

When would you not want to expose a function publicly? Put another way, why might you wish to refuse permission to call a function?

Here is a typical example: Suppose you have a WidgetListEditor class that manages a list of widgets bound to a list-editing control on a screen. Suppose that part of the class’s task is to sort the list of widgets prior to binding to the grid. The sorting of the list is ancillary to the primary task of managing the binding to the grid. Clients of the code should not create an instance of WidgetListEditor just to sort the contents of some arbitrary list of widgets elsewhere in the system. For this reason, the sorting methods of WidgetListEditor are internal concerns and should not be exposed publicly. If the sorting methods are needed outside of WidgetListEditor, they should be extracted into first-class components of their own and provided to WidgetListEditor via some form of Dependency Injection.
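A sketch of that extraction might look like this (the names are hypothetical, invented for the scenario):

using System.Collections.Generic;

public class Widget
{
    public string Name { get; set; }
}

// The sorting policy becomes a first-class component of its own...
public interface IWidgetSorter
{
    void Sort(List<Widget> widgets);
}

public class AlphabeticalWidgetSorter : IWidgetSorter
{
    public void Sort(List<Widget> widgets)
    {
        widgets.Sort((a, b) => string.Compare(a.Name, b.Name));
    }
}

// ...and WidgetListEditor receives it instead of owning it.
public class WidgetListEditor
{
    private readonly IWidgetSorter _sorter;

    public WidgetListEditor(IWidgetSorter sorter)
    {
        _sorter = sorter;
    }

    public void Bind(List<Widget> widgets)
    {
        _sorter.Sort(widgets); // presort before binding
        // ...bind the sorted list to the grid control...
    }
}

Now any code that needs the sort can take an IWidgetSorter of its own, and no one has a reason to construct a WidgetListEditor just to reuse its internals.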

Why would it be wrong to simply allow the clients of the code to instantiate WidgetListEditor at will and use the sorting method? It exposes the rest of the code to changes in WidgetListEditor. WidgetListEditor could be modified such that it loads an expensive resource when constructed. The maintainers of WidgetListEditor have no reason to believe that adding the expensive resource should cause any problems because as far as they know, this class is only instantiated once on a couple of screens. Suddenly the system is brought to a crawl because it is in fact being used all over the place as a means of “reusing” the sorting code. This is a problem of responsibility.

Or:

WidgetListEditor needs to be changed to allow arbitrary sorting by the user. This requires that the sorting methods be changed to accept sorting criteria, but that change is made more difficult because the methods are being consumed application-wide instead of on the one or two screens the class was built for.

Or:

WidgetListEditor no longer needs to presort the contents of the list before binding, so the developer wishes to remove the sorting code, only to find that it can’t be removed because the methods are being consumed application-wide instead of on the one or two screens the class was built for.

There’s a recurring theme here: What was the class built for? What is its exposure to change? A public method is a kind of contract with the rest of the code. Changes to public methods must be considered with greater care because of their potential effects on the rest of the code-base. When we make a method private, we reduce the potential for damage to the system if the method needs to change, or be removed altogether. This is the design advantage of encapsulation. For this reason, every function in the system (and every class in a library) should have the least significant scope it is possible to give it.

There is another kind of permissions problem related to public methods that is harder to pinpoint. Sometimes a method is created to perform a narrow task that is needed within the system, but that has the potential to corrupt the system. Posit a financial system in which every transaction is recorded as a series of debit-credit pairs. For the most part, the system does not allow direct editing of entries. Instead it functions through a series of transaction templates. You submit a transaction, and the template tells the system which accounts to debit, which to credit, and by how much. Then it is discovered that a rounding error creates a small reporting variance, and your manager assigns you the task of creating a manual correction screen.

Since this is a transactional system, the right way to solve this problem is to create an AdjustmentTransactionTemplate and enter it into the system, but that would take a week, and you only have a day. So, you take a short-cut. You create a new method in the data access layer that lets you simply update the entry directly. You reason that since this is the only code that calls this method, it’s safe because you will be sure to update both the debit and credit together, and with the same amount. You do this, turn in the work, and everybody is happy.

What’s the problem here? There is now a method in the data access layer that allows you to manually update a specific entry without reference to a transaction template. Its existence implies that it’s okay to use it. The methods that accept a transaction template can validate the template to verify that the transaction balances its debits and credits. The manual edit method can do no such validation. This means that it is now possible to get bad data into the system, which could have catastrophic consequences. At that point, it is a matter of “when,” not “if,” that will happen. If you or your manager considers “it’s faster” to be a general justification for violating the design of a system, then your project will become less maintainable as time goes by. You may need to update your design to make it faster to develop against, but that’s a different problem from the one I’m describing here.
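To make the contrast concrete, here is a hypothetical sketch; the types and method names are invented for illustration:

using System;

public class TransactionTemplate
{
    public decimal DebitAmount { get; set; }
    public decimal CreditAmount { get; set; }
}

public class LedgerRepository
{
    // The template-based entry point can enforce the system's invariant:
    // every transaction must balance its debits against its credits.
    public void PostTransaction(TransactionTemplate template)
    {
        if (template.DebitAmount != template.CreditAmount)
            throw new InvalidOperationException("Transaction does not balance.");
        // ...write the balanced debit-credit pair to the ledger...
    }

    // The shortcut method cannot enforce that invariant, because it sees only
    // one side of the pair. Its mere existence invites unbalanced data.
    public void UpdateEntryAmount(int entryId, decimal newAmount)
    {
        // ...update a single ledger entry directly, with no validation...
    }
}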

Precedent

Consider either of the above scenarios. What danger does the existence of either of the public functions pose to the rest of the code base? Isn’t the danger isolated to the particular components involved? What danger does the precedent pose to a code base?

In the legal system, the decision of a higher court on a relevant point is binding as legal precedent for a lower court. Conversely, the decision of a lower court on some legal matter is used as guidance by a higher court faced with the same decision. The reason for this is not that lawyers are crazy, but rather that the concrete circumstances possible in legal cases are too numerous for any one person or team to keep track of. When a new set of circumstances is considered by a court, it must make the best decision it can within the context of existing law and existing legal precedent. In most cases this keeps the range of legal issues organized in an intelligible manner. However, sometimes the courts make a decision that violates our sense of justice or propriety. In these cases, we can “refactor” by encouraging the legislature to explicitly change the law.

The key point is this: existing code is the best guideline for how to add new features to a code-base. You cannot expect future developers to look at your code-base and infer the principles you built the application on if you are violating those principles yourself. Further, if you are willing to violate the design principles in one case, you will likely find it much easier to violate them again and again. This means that even though you “think” your application was developed according to a particular design, “with a couple of exceptions,” it will not be apparent to anyone else. Objectively, then, the code was not designed according to any principles at all, but according to your whims. To put the point another way: intent is communicated not by what’s in our heads, but by what’s in our code.

If you use WidgetListEditor for sorting some place else in your code, other developers will think that that’s what you intended. If you bypass the transaction template logic in the financial system, other developers will think it’s safe to do that as well. The question of whether code is designed well is more than just a question of whether it works. It’s a question of whether the intent of the design is consistently expressed by the code that implements it.

Conventions vs. Design

Conventions often function as a substitute for a grasp of design principles. For example, I’ve seen teams get up in arms about using Hungarian Notation, but then write meaningless function names like “DoIt()” or “Process()”, or meaningless variable names like “intVar” or “intX”. These same teams often have little or no understanding of design principles, patterns, or concepts. I have met so many developers who have never heard of YAGNI, LRM, SRP, LSP, or Dependency Injection. How many developers do you know who have heard of patterns such as Singleton, Adapter, Decorator, Factory, Monostate, Proxy, Composite, or Command?
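To make the naming point concrete, compare these two hypothetical versions of the same method (the calculation is invented purely for illustration):

public class FeeCalculator
{
    // Meaningless names force the reader to reverse-engineer the intent.
    public decimal Process(decimal decVal, int intX)
    {
        return decVal * 0.05m * intX;
    }

    // Meaningful names state the intent outright.
    public decimal CalculateLateFee(decimal balance, int daysOverdue)
    {
        return balance * 0.05m * daysOverdue;
    }
}

Both methods follow the same conventions; only one of them tells you what it is for.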

The use of conventions is often invoked as a means of aiding code clarity. And it does aid clarity, but it is not the only or even the best means of attaining that goal. We need to learn to name classes, functions, and variables in meaningful ways; to write small functions that do one tiny little thing, and do it well; and to write small, single-purpose classes. All of the conventions in the world will not help you maintain a code-base if developers are free to write 3000-line methods with 27 layers of nested conditionals and loops (and yes, I have seen this). Read “Clean Code” by Uncle Bob.

Design patterns, techniques, and processes are much harder to learn and master than a concrete list of do’s and don’ts (which is what most conventions amount to). Good software design is like playing a musical instrument: it requires dedication, repetition, and practice to learn and master.

Conventions do have a place. However, conventions are usually suggested by the provider of the tools you are using. For example, Microsoft publishes a conventions document for the .NET Framework which is applicable to all .NET languages. (Thankfully, they eschew Hungarian Notation.) This is worth learning simply in order to be a good developer citizen. My recommendation is to use the conventions established by the tool provider, or the dominant conventions within the community around the tool. Don’t waste company time and money adding your own tweaks and changes to the conventions document. Speaking personally, I would much rather maintain a code-base that is written cleanly and with good design but violates every convention you could name than one that follows every convention without the design.

Conventions are not a bad thing. The problem with conventions is that they are so often discussed in place of design principles. Conventions without design principles aid nothing. Without proper focus on good design, conventions can even hurt software quality because they give developers and managers the illusion of being thoughtful and disciplined about what they are doing.

Conventions can be useful in another way. There is a good bit of discussion going on right now about Convention over Configuration. CoC is a design shortcut: it consists of making default decisions about how the components of a software system interact, and only adding explicit configuration when a component deviates from the norm. CoC actually bolsters my point here, because it only tends to arise as a discussion point in systems that are already using good design patterns and practices.

The Presentation

So I spoke at GSP Developers tonight on TDD. It went well, though I was a bit nervous. I got through the example I chose with less code and fewer tests than I had with any previous attempt, though I still took about the same amount of time. I enjoyed the questions from the audience, and the event organizer made some great observations as well. I think I might enjoy doing something more long-form, such as a mini code-camp for teaching TDD.

Software Internships Won’t Help

Editor’s Note: I have since changed my mind on this post, as I have developed and run a successful internship program for many years now. However, I will preserve this post as-is for posterity.


Read this post from Uncle Bob, and be sure to watch the video—that’s the scary part.

https://sites.google.com/site/unclebobconsultingllc/blogs-by-robert-martin/developer-certification-wtf

I take issue with Uncle Bob’s idea of internship. Having Senior Developers mentor interns can create a culture of inbred, non-innovative practices. The temptation is to think that you can teach smart development practices. All you can teach is a litany of concretes. The junior developer has to turn their brain on to grasp the principles involved. In short, I don’t think Uncle Bob’s solution is that new, or that it would solve any of the problems he describes.

I think the problem is deeper than just a lack of skill or training. What software development needs is a revolution in values. We as developers need to value high-quality, unit-tested, organized, clean code. We need to value the drive to excellence, not just the willingness but the desire to learn, and we need to make room for innovation. Software development managers need to refuse to tolerate anything less. We need to stop sacrificing design and testing on the altar of the arbitrary deadline. Fortunately, we don’t need to start that revolution, as it is already under way; Uncle Bob’s post is a recent shot fired in that war.

Update:

The URL of Uncle Bob’s original post has moved. I have updated the link.

A Good Project Manager

I was interested to read Roy Osherove’s recent account of his worst team leaders. Jason Crawford writes about what he thinks makes good team managers. They are not talking about the same role, I think.

Roy is talking about a technical lead on a team of developers, and his basic complaint is the technical lead’s perceived lack of technical ability, or of interest in it. His criticisms fall into basically two categories: training and judgment. He wants a technical lead who will help make him a better developer by judging his work. A lead that refuses to judge is no lead at all.

Jason Crawford draws portraits of three kinds of managers, but the best, he says, focus on communicating values. He is not suggesting that a good project manager should moralize to his employees, but rather that the PM should have a clear idea of the values of the company and ensure that the work his reports do conforms to those values. If we apply this to Roy’s case, the technical lead should value technical ability to the point that he is willing to point out mistakes and help the developer become better at his job.

I recently had occasion to write a recommendation for a former Project Manager. With permission, I’ll reprint the entire recommendation:

“Maggie Roberts was a great project manager. She was great at communicating with both technical and non-technical personnel. She knew enough about the technical work to describe the expected results as well as the goal she hoped to accomplish with the results, and then she got out of the way and let me deliver the results to her. When I had a better idea of how to get the desired results, she allowed me to pursue it.

My favorite thing about working with Maggie was her directness and the clarity of her expectations. She was never shy about indicating what was and was not good about the work I turned in. Her criticism was never cruel or directed toward me as a person, but targeted the work I turned in and its relationship to our client. She was not shy with her praise either. She had the same directness with pointing out great things I did as she did with errors. She always related both praise and criticism directly to how my performance affected the client. By making sure I had a clear understanding of our goals, and by being so clear about judging my work, she encouraged me to look for more creative ways of meeting our goals. She made me feel like both a technician and a partner in our quest to save our client money. Working with Maggie was a challenge because of the high standards she set, and a pleasure because the standards were clear, and she made sure I had the tools I needed to meet them. I can honestly say that I grew as a technician under her leadership.”

Maggie was not a technical lead, but a PM. In that role, she communicated values (save the client money, show each step of the work) very clearly, and she demanded quality. I had never worked with SQL Server before working with Maggie, but in six months I got two years of experience. When I created an automated Excel spreadsheet to retrieve data and perform the formatting we were doing by hand, she was very free with her praise.

What Happens to Software Under Arbitrary Deadlines?

There are four basic engineering aspects to developing a software system: Requirements, Test, Design, and Implementation. When a project is squeezed for time, it is the Design and Test aspects that get squeezed to make room for Implementation. Design activities are in effect frozen, which means that the design of the system will stay static even as new features are added. Testing simply stops, with all that that implies.

As implementation continues, new requirements are uncovered or areas of existing requirements are discovered to need modification or clarification. All of these factors imply necessary changes to the design of the system, but since design is frozen, the necessary changes don’t happen. Since testing isn’t happening, the full ramifications of this fact are not immediately felt.

The only remaining implementation choice is to squeeze square pegs of new features into the round holes of the existing design. This option requires more time to implement because time is spent creating overhead to deal with the dissonance between the design and the requirements. The resulting code is harder to understand because of the extra overhead. This option is more prone to error because more code exists than should be necessary to accomplish the task. Every additional line of code is a new opportunity for a bug.

All of these factors result in code that is harder to maintain over time. New features are difficult to implement because they will require the same kind of glue code to marry a poor design to the new requirements. Further, each new feature deepens the dependency of the system on the poor design, making it harder to correct, and making it ever easier to continue throwing bad code after bad. Considered in isolation, it will always seem cheaper to just add crap-code to get the new feature in rather than correct the design, but cumulatively, it costs much more. Eventually it will kill the project.

As bad as all this is, the problems don’t stop there. As long as the ill-designed code continues to exist in the system, it serves to undermine all existing and future features in two ways: (1) it becomes a pain point around which ever more glue code will have to be written as the interaction of the ill-designed feature with the rest of the system changes, and (2) it acts as a precedent in the code-base, demonstrating that low-quality code is acceptable so long as the developer can find a reason to rationalize it.
