Conventions vs. Design

Conventions often function as a substitute for a real grasp of design principles. For example, I’ve seen teams get up in arms about using Hungarian Notation but then write meaningless function names like “DoIt()” or “Process()”, or meaningless variable names like “intVar” or “intX”. These same teams often have little or no understanding of design principles, patterns, or concepts. I have met so many developers who have never heard of YAGNI, LRM, SRP, LSP, or Dependency Injection. How many developers do you know who have heard of patterns such as Singleton, Adapter, Decorator, Factory, Monostate, Proxy, Composite, or Command?

The use of conventions is often invoked as a means of aiding code clarity. Conventions do help, but they are not the only or even the best means of attaining that goal. We need to learn to name classes, functions, and variables in meaningful ways; to write small functions that do one tiny thing, and do it well; to write small, single-purpose classes. All of the conventions in the world will not help you maintain a code-base if developers are free to write 3000-line methods with 27 layers of nested conditionals and loops (and yes, I have seen this). Read “Clean Code” by Uncle Bob.
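To make the contrast concrete, here is a minimal sketch of that naming advice. The names (Order, InvoiceCalculator, the tax rate) are my own invention for illustration, not from any particular code-base:

```java
// Opaque version: what does Process do, and what is intX?
//   int Process(int intX) { ... }

// Intention-revealing version: small, single-purpose, meaningfully named.
final class Order {
    private final double subtotal;

    Order(double subtotal) {
        this.subtotal = subtotal;
    }

    double subtotal() {
        return subtotal;
    }
}

final class InvoiceCalculator {
    private static final double TAX_RATE = 0.08; // flat rate, assumed for the example

    // One small job, and the name says what that job is.
    double totalWithTax(Order order) {
        return order.subtotal() + salesTaxFor(order);
    }

    private double salesTaxFor(Order order) {
        return order.subtotal() * TAX_RATE;
    }
}
```

No convention document is needed to read totalWithTax; the names carry the meaning.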

Design patterns, techniques, and processes are much harder to learn and master than a concrete list of do’s and don’ts (which is what most conventions amount to). Good software design is like playing a musical instrument: it takes dedication, repetition, and practice to master.

Conventions do have a place, but they are best taken from the provider of the tools you are using. For example, Microsoft publishes a conventions document for the .NET Framework that applies to all .NET languages. (Thankfully, it eschews Hungarian Notation.) Conventions like these are worth learning simply in order to be a good developer citizen. My recommendation is to use the conventions established by the tool provider, or the dominant conventions within the community around the tool. Don’t waste company time and money adding your own tweaks and changes to the conventions document. Speaking personally, I would much rather maintain a code-base that is written cleanly and with good design, yet violates every convention you could name, than one that follows every convention but is poorly designed.

Conventions are not a bad thing. The problem is that they are so often discussed in place of design principles. Conventions without design principles accomplish nothing. Without a proper focus on good design, conventions can even hurt software quality, because they give developers and managers the illusion of being thoughtful and disciplined about what they are doing.

Conventions can be useful in another way. There is a good bit of discussion going on right now about Convention over Configuration. CoC is a design shortcut: components of a software system interact in a default, conventional way, and explicit configuration is added only when a component deviates from the norm. CoC actually bolsters my point here, because it tends to arise as a discussion point only in systems that are already using good design patterns and practices.
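As a rough illustration of the idea, here is a minimal sketch. The TableMapper and its classes are hypothetical, not any particular framework’s API: names map to tables by convention, and explicit configuration appears only for the one deviation.

```java
import java.util.HashMap;
import java.util.Map;

class TableMapper {
    private final Map<Class<?>, String> overrides = new HashMap<>();

    // Explicit configuration: only needed when a class deviates from the convention.
    void override(Class<?> type, String tableName) {
        overrides.put(type, tableName);
    }

    // Convention: table name is the lower-cased class name plus "s".
    String tableFor(Class<?> type) {
        return overrides.getOrDefault(type, type.getSimpleName().toLowerCase() + "s");
    }
}

class Customer {}
class Person {}

public class Demo {
    public static void main(String[] args) {
        TableMapper mapper = new TableMapper();
        mapper.override(Person.class, "people"); // irregular plural: configure the exception
        System.out.println(mapper.tableFor(Customer.class)); // "customers" by convention
        System.out.println(mapper.tableFor(Person.class));   // "people" by configuration
    }
}
```

Notice that the shortcut only works because the underlying design is already regular enough for a convention to describe it.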

What Happens to Software Under Arbitrary Deadlines?

There are four basic engineering aspects to developing a software system: Requirements, Test, Design, and Implementation. When a project is squeezed for time, it is the Design and Test aspects that get squeezed to make room for Implementation. Design activities are in effect frozen, which means that the design of the system will stay static even as new features are added. Testing simply stops, with all that that implies.

As implementation continues, new requirements are uncovered or areas of existing requirements are discovered to need modification or clarification. All of these factors imply necessary changes to the design of the system, but since design is frozen, the necessary changes don’t happen. Since testing isn’t happening, the full ramifications of this fact are not immediately felt.

The only remaining implementation choice is to squeeze the square pegs of new features into the round holes of the existing design. This approach takes more time because effort goes into writing overhead code to bridge the dissonance between the design and the requirements. The resulting code is harder to understand because of that overhead, and it is more prone to error because more code exists than should be necessary to accomplish the task. Every additional line of code is a new opportunity for a bug.
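To make that overhead concrete, consider a hypothetical sketch (ReportGenerator and BudgetAlert are invented names, not from any real system). The frozen design exposes a report only as display-ready text, so a new feature that needs the numeric total must parse the string back apart rather than change the design to expose the number directly:

```java
import java.util.Locale;

final class ReportGenerator {
    // Frozen design: the only available output is formatted text.
    String monthlyReport(double total) {
        return String.format(Locale.US, "Monthly total: $%.2f", total);
    }
}

final class BudgetAlert {
    // Glue code: fragile extra lines that exist only to bridge the
    // dissonance between the design and the new requirement.
    boolean overBudget(String report, double limit) {
        String amount = report.substring(report.indexOf('$') + 1);
        return Double.parseDouble(amount) > limit;
    }
}
```

Every line of BudgetAlert is overhead that a one-line design change would eliminate, and it will break the moment the report format changes.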

All of these factors result in code that is harder to maintain over time. New features are difficult to implement because they will require the same kind of glue-code to marry a poor design to the new requirements. Further, each new feature deepens the dependency of the system on the poor design, making it harder to correct, and making it ever easier to continue throwing bad code after bad. Considered in isolation, it will always seem cheaper to just add crap-code to get the new feature in rather than correct the design, but cumulatively, it costs much more. Eventually it will kill the project.

As bad as all this is, the problems don’t stop there. As long as the ill-designed code continues to exist in the system, it undermines existing and future features in two ways. 1) It becomes a pain point around which ever more glue-code will have to be written as the interaction of the ill-designed feature with the rest of the system changes. 2) It acts as a precedent in the code-base, demonstrating that low-quality code is acceptable so long as the developer can find a reason to rationalize it.
