Seattle Code Camp 2014

I gave 2 presentations at Seattle Code Camp today.

The first was a talk about our internship program. I’m still trying to start a conversation on this topic; so far it appears no one else is talking about it. My slides are here: Scaling Craftsmanship Through Apprenticeship. They’re not much more than a memory-jog for me, but they were asked for, so I’m posting them.

The second talk was about Unit Testing Your JavaScript. This one sort of went sideways when Chrome refused to load my demo site. That was a challenge! Still, I think it went off okay. The unfortunate reality is that unit testing in JavaScript is still pretty hard. It still takes a lot of duct tape and baling wire to make everything work. That said, I’ve put together a demo application that shows how we do it at work.

Thanks to everyone who attended my presentations. I hope you enjoyed them!

Review of “Guitar Practiced Perfectly” Software

Overall

Guitar Practiced Perfectly is a piece of software that helps a guitarist manage and plan practice routines. It includes 300 or so practice routines out of the box. Routines can be organized into sessions. Sessions can be organized arbitrarily, but the software supports daily sessions. It’s written in Adobe AIR, so it will run on Mac and PC. I’m running it on Windows 8 64-bit.

I’ve only used the software for one practice session so far. The overall effort is very good. As far as I know, it’s the only software of its kind. At $50 US, it’s a little pricey to buy without a trial.

Missing Features

  • I should be able to create my own routines. This is a huge gap in the software. For example, I’d like a version of the C major pentatonic that includes the minor 3rd and the diminished 7th. Or I’d like a routine in which I play a scale through ascending keys.
  • When I drag/drop an element between tree elements, I should be able to choose "Copy" or "Move."
  • In addition to being able to create a new routine, I’d like to be able to clone an existing routine and modify it.

Bugs

  • I’ve noticed that if I’m playing an exercise and change the tempo then it stops playing sound. I have to close/reopen the software to get it to play sound again.
  • Dragging and dropping a routine within a session doesn’t work. This makes changing the order of the routines hard.

Usability Problems

  • The use of accordion style controls for the session menus makes it hard to understand that you can drag/drop exercises between accordion tabs. In general, accordions should not be used when data is shared across panels.
  • The fact that the screen elements are static is irritating. It would be nicer if I could move things around. The best UI I’ve seen for this sort of thing is in Microsoft Visual Studio. Each panel is draggable and dockable on its own. I don’t know if Adobe Air gives you this kind of flexibility, but it would be nice, and it would make the drag/drop operations easier to manage as "Session by Weekday" could be docked to a different screen area than "Session by Skill Level."
  • The user should be able to drag/drop more than one exercise at a time. I found this annoying when I was trying to drag all of a certain category of exercise to my Sunday routine. I had to do them one at a time.
  • The labels that control Tempo, Lead In, Repeat, etc react to the mouse as if they were buttons. Clicking them doesn’t do anything. This is confusing. They should either just be labels, or open some kind of advanced editing screen.
  • Having to choose between Music and Metronome is painful. They should have independent volume controls. The existing either-or functionality forces me into a tricky volume balancing act with my amp.
  • The Help->About menu should include the software version. Because the version number is missing from the UI, it’s hard to tell whether you’re running the latest release.
  • The main window should include a standard control box for changing screen size, minimize, and maximize functionality.
  • I think it should be impossible to delete system-defined routines and sessions. This is scary functionality as I could get rid of something very useful.

Overall

I like Guitar Practiced Perfectly very much and I’ll get a lot of use out of it. However, it’s not the software I was hoping it would be. The lack of routine-creation and editing functionality means it misses the mark by a wide margin, and some of the UI constraints make working with sessions very hard. For an intermediate player such as myself, it will definitely help take my playing to the next level, but I’ll not be able to use it to integrate my teacher’s lessons into my daily practice.

Final Score

70%

Practical advice for observing the LSP and DIP: Use the Most Abstract Type Possible

This post is about concretizing the relationship between two abstract design principles and providing a piece of practical advice.

The goal of the Liskov Substitution Principle is to make sure that subclasses behave as their superclass parents. The “Is-A” relationship is superseded by the “Behaves-As-A” relationship. By observing the LSP we enable looser coupling, because clients of the superclass do not need to catalog knowledge of the subclasses.

Implicit in the discussion around the LSP is that you are actually consuming the abstract type instead of the concrete types. From the Wikipedia entry on the Dependency Inversion Principle:

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.

B. Abstractions should not depend upon details. Details should depend upon abstractions.

Basically, you want the clients of your code to depend on the most abstract possible type. So, as a simple heuristic when you’re writing methods, try using the most abstract type that the clients of your code can use without type casting or reflection. If you’re using ReSharper (which I highly recommend), you’ll get these suggestions automatically, but this is an easy heuristic to apply when you’re writing the method in the first place.

In general, favor interfaces over base classes and base classes over subclasses (interfaces are a much weaker form of coupling than inheritance).
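
To make the heuristic concrete, here is a small sketch; the class and method names are invented for illustration, not taken from any particular codebase. The first version demands a concrete List&lt;string&gt;; the second accepts the most abstract type it can use without casting:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class NameReports
{
    // Too concrete: every caller is forced to materialize a List<string>.
    public static int CountLongNamesConcrete(List<string> names)
    {
        return names.Count(n => n.Length > 10);
    }

    // Most abstract type the method can use without casting: arrays,
    // lists, and deferred LINQ queries all satisfy IEnumerable<string>.
    public static int CountLongNames(IEnumerable<string> names)
    {
        return names.Count(n => n.Length > 10);
    }
}
```

A caller holding a string[] or an unevaluated LINQ query can now call CountLongNames directly; the method is coupled only to the ability to enumerate strings.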

Happy Coding!

VS Powershell Session

Add this script to your PowerShell profile. If you don’t know where your PowerShell profile is, open a PowerShell session, type $profile, and press <Enter>. In Windows 7, you can run PowerShell from the current folder by typing powershell in the address bar of Windows Explorer.

	#Set environment variables for Visual Studio Command Prompt
	$vspath = (Get-ChildItem env:VS100COMNTOOLS).Value
	$vsbatchfile = "vsvars32.bat";
	$vsfullpath = [System.IO.Path]::Combine($vspath, $vsbatchfile);

	# Run the batch file, then dump the resulting environment with "set"
	pushd $vspath
	cmd /c "`"$vsfullpath`" & set" |
	foreach {
	  # $_ represents the current line of output from "set"
	  if ($_ -match "=") {
		# Split on the first "=" only, in case a value contains "="
		$v = $_.Split("=", 2);
		Set-Item -Force -Path "ENV:\$($v[0])" -Value "$($v[1])"
	  }
	}
	popd
	Write-Host "Visual Studio 2010 Command Prompt variables set." -ForegroundColor Red

Quick Update

I’ve been Internet-less for a few days and it’s been killing me. Internet is like coffee—it makes the world go round!

I’ve made a few updates to Yodelay that I wanted to tell you about.

First, I added an ASP.NET MVC project. I wanted to see if I could use the MVVM pattern in ASP.NET MVC. I’m not completely happy with my implementation, and the UI is kind of rough, but it works. I’ll work on cleaning it up later.

Second, I added a library for non-attribute-based validation. My problem with validation frameworks that rely on attributes is that I don’t always have access to the code for the classes I need to create business rules for. The new library uses a fluent API to configure rules for classes and properties.

Example:

ConfigureRules.ForType<BusinessObject>()
    .If(e => e.Id > 0)
        .Property(e => e.Name)
        .Must.Not().BeNull();
var testObject = new BusinessObject() {Id = 1, Name = "Testing" };

Rules.Enforce(testObject);


Third, I added some extension methods for the Range class which allow the developer to test for adjacent, intersecting, and disjoint ranges. Further, the range API will now find gaps in lists of ranges.
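
Roughly, such an API might look like the following sketch; Range and the method names here are invented for illustration, not the actual Yodelay signatures:

```csharp
public sealed class Range
{
    public Range(int start, int end) { Start = start; End = end; }
    public int Start { get; private set; }
    public int End { get; private set; }
}

public static class RangeExtensions
{
    // Two ranges intersect when each starts before the other ends.
    public static bool Intersects(this Range a, Range b)
    {
        return a.Start <= b.End && b.Start <= a.End;
    }

    // Adjacent: one range ends exactly where the other begins.
    public static bool IsAdjacentTo(this Range a, Range b)
    {
        return a.End + 1 == b.Start || b.End + 1 == a.Start;
    }

    // Disjoint: the ranges share no values at all.
    public static bool IsDisjointFrom(this Range a, Range b)
    {
        return !a.Intersects(b);
    }
}
```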

Finally, I removed the assembly signing. When I added the key files before, I password-protected them. This makes it hard for people who download the source code to compile it. I’ve removed all assembly signing for the short term. When I’m ready to build an installation package, I’ll re-sign the files without password protection.

The Presentation

So I spoke at GSP Developers tonight on TDD. It went well, though I was a bit nervous. I got through the example I chose with less code and fewer tests than I had with any previous attempt, though I still took about the same amount of time. I enjoyed the questions from the audience, and the event organizer made some great observations as well. I think I might enjoy doing something more long-form, such as a mini code-camp for teaching TDD.

Software Internships Won’t Help

Editor’s Note: I have since changed my mind on this post, as I have now developed and run a successful internship program for many years. However, I will preserve this as-is for posterity.


Read this post from Uncle Bob, and be sure to watch the video—that’s the scary part.

https://sites.google.com/site/unclebobconsultingllc/blogs-by-robert-martin/developer-certification-wtf

I take issue with Uncle Bob’s idea of internship. Having Senior Developers mentor interns can create a culture of inbred, non-innovative practices. The temptation is to think that you can teach smart development practices. All you can teach is a litany of concretes. The junior developer has to turn their brain on to grasp the principles involved. In short, I don’t think Uncle Bob’s solution is that new, or that it would solve any of the problems he describes.

I think the problem is deeper than just lack of skill or training. What software development needs is a revolution in values. We as developers need to value high-quality, unit-tested, organized, clean code. We need to value the drive to excellence, not just the willingness but the desire to learn, and make room for innovation. Software development managers need to refuse to tolerate anything less. We need to stop sacrificing design and test to the altar of the arbitrary dead-line. Fortunately, we don’t need to start that revolution as it is already under way—Uncle Bob’s post being a recent shot fired in that war.

Update:

Uncle Bob’s original post moved to a new URL; I’ve updated the link.

A Good Project Manager

I was interested to read Roy Osherove’s recent account of his worst team leaders. Jason Crawford writes about what he thinks makes good team managers. They are not talking about the same role, I think.

Roy is talking about a technical lead on a team of developers, and his basic complaint is the technical lead’s perceived lack of technical ability, or of interest in it. His criticisms fall into basically two categories: training and judgment. He wants a technical lead who helps make him a better developer by judging his work. A lead that refuses to judge is no lead at all.

Jason Crawford draws portraits of three kinds of managers, but the best, he says, focus on communicating values. He is not suggesting that a good project manager should moralize to his employees, but rather that the PM should have a clear idea of the values of the company and ensure that the work his reports do conforms to those values. If we apply this to Roy’s situation, the technical lead should value technical ability to the point that he is willing to point out mistakes and help the developer become better at his job.

I recently had occasion to write a recommendation for a former Project Manager. With permission, I’ll reprint the entire recommendation:

“Maggie Roberts was a great project manager. She was great at communicating with both technical and non-technical personnel. She knew enough about the technical work to describe the expected results as well as the goal she hoped to accomplish with the results, and then she got out of the way and let me deliver the results to her. When I had a better idea of how to get the desired results, she allowed me to pursue it.

My favorite thing about working with Maggie was her directness and the clarity of her expectations. She was never shy about indicating what was and was not good about the work I turned in. Her criticism was never cruel or directed toward me as a person, but targeted the work I turned in and its relationship to our client. She was not shy with her praise either. She had the same directness with pointing out great things I did as she did with errors. She always related both praise and criticism directly to how my performance affected the client. By making sure I had a clear understanding of our goals, and by being so clear about judging my work, she encouraged me to look for more creative ways of meeting our goals. She made me feel like both a technician and a partner in our quest to save our client money. Working with Maggie was a challenge because of the high standards she set, and a pleasure because the standards were clear, and she made sure I had the tools I needed to meet them. I can honestly say that I grew as a technician under her leadership.”

Maggie was not a technical lead, but a PM. In that role, she communicated values (save the client money, show each step of the work) very clearly, and she demanded quality. I had never worked with SQL Server before working with Maggie, but in six months I got two years of experience. When I created an automated Excel spreadsheet to retrieve data and perform the formatting we were doing by hand, she was very free with her praise.

What Happens to Software Under Arbitrary Deadlines?

There are four basic engineering aspects to developing a software system: Requirements, Test, Design, and Implementation. When a project is squeezed for time, it is the Design and Test aspects that get squeezed to make room for Implementation. Design activities are in effect frozen, which means that the design of the system will stay static even as new features are added. Testing simply stops, with all that that implies.

As implementation continues, new requirements are uncovered or areas of existing requirements are discovered to need modification or clarification. All of these factors imply necessary changes to the design of the system, but since design is frozen, the necessary changes don’t happen. Since testing isn’t happening, the full ramifications of this fact are not immediately felt.

The only remaining implementation choice is to squeeze square pegs of new features into the round holes of the existing design. This option requires more time to implement because time is spent creating overhead to deal with the dissonance between the design and the requirements. The resulting code is harder to understand because of the extra overhead. This option is more prone to error because more code exists than should be necessary to accomplish the task. Every additional line of code is a new opportunity for a bug.

All of these factors result in code that is harder to maintain over time. New features are difficult to implement because they will require the same kind of glue code to marry a poor design to the new requirements. Further, each new feature deepens the dependency of the system on the poor design, making it harder to correct, and making it ever easier to continue throwing bad code after bad. When considered in isolation, it will always seem cheaper to just add crap code to get the new feature in rather than correct the design, but cumulatively, it costs much more. Eventually it will kill the project.

As bad as all this is, the problems don’t stop there. As long as the ill-designed code continues to exist in the system, it serves to undermine the existing and all future features in two ways. 1) It becomes a pain point around which ever more glue code will have to be written as the interaction of the ill-designed feature with the rest of the system changes. 2) It acts as a precedent in the code base, demonstrating that low-quality code is acceptable so long as the developer can find a reason to rationalize it.

Software Design: Cognition and Design Principles

Software design is a subject fraught with disagreement among developers. The size and scope and importance of the subject demand serious attention from any developer considering himself more than just a hacker. What follows is my current understanding of software design as a subject.

Before we can properly deal with the subject of what good design is, we must first ask what the purpose of design is. Is it just to make the software work? Or is there something else? While the software must perform its basic function, I regard “working” as a second-order consideration. Functioning software is a necessary goal of design, but it is not sufficient to explain why we need design. Consider for a moment that many companies have working software that is poorly designed. Clearly, software does not necessarily have to be designed well, or even at all, in order for it to work—at least in the short term. The goal of design must be something else, something other than just the basic “does it work now?” question.

At this point one might be tempted to say that the purpose of design is “maintainability.” But what is meant by “maintainability?” What makes one software project maintainable, and another a disaster?

Consider that the computer does not care about the structure or organization of the code. The computer is not concerned with what language the software was written in, which patterns, techniques, processes, or tools were used in the construction of the software. To the computer, your program is just a sequential series of instructions: “Do x, Do y, If z Do a.” Why then should we concern ourselves with design principles, patterns, and practices? What are they for?

Software developers are often described as “writing code,” but we don’t normally think of ourselves as writers in the traditional sense. What if we did? When a writer sits down to write, the first question s/he must answer is “who is my audience?” In deciding on an audience for his work, a writer constrains what is or is not acceptable in terms of the structure and content of his written work. These constraints are not arbitrary or optional: they are logically necessary implications of the choice of audience. If software developers are writers, then their work must also be constrained by their target audience. But if the target audience cannot be the computer, who is it?

It’s people. We do not write code for the computer. We write code for other people—for our co-workers, for ourselves, for any others that may have an interest in what the software is supposed to do at any time over the entire lifespan of the project. The purpose of software design is to communicate what the project does, and how it does it. Any set of software design principles and methods must be targeted at communicating to people. It must be constrained by the nature of the cognitive activity of the human mind.

Human Cognition

The human mind is only capable of dealing with 4 or 5 concretes at one time, and yet we are confronted with thousands of concrete objects, ideas, and choices that we must deal with on a daily basis. We must aggregate concretes into abstractions so that we can then treat the abstraction as a concrete for further thinking. For example, you may not be able to tell how many pipes this is just by glancing at it:

||||||||||||||||||||||||||||||

But if I broke it up into groups:

|||||    |||||    |||||    |||||    |||||    |||||

You should be able to tell that there are 6 groups of 5, ergo 30 pipes. The act of viewing the pipes in groups enables thinking. This is analogous to what we do when we perform abstraction. Abstraction is a process of integration and differentiation. We integrate like members of a class of existents, and differentiate them from the other members of the wider class to which they belong. For example, we can observe that certain objects are differentiated from other “media” by the fact that they have pages covered in text. “Book” as an abstract integrates all these objects under a common abstraction while distinguishing them from other kinds of media.

Well-designed software must be intelligible and discoverable and have an appropriate level of abstraction.

Intelligibility

Intelligibility means that even if the software does what it is supposed to do, it should make sense to the average software craftsman. I do NOT mean that we should “dumb down” our code to the lowest common denominator. For any given field of endeavor there exists a general category of knowledge that can be expected of our target audience. Developers who do not bother to acquire that context have only themselves to blame for not being able to understand common software idioms. Suppose, for example, that your company has recently adopted an object-oriented language after years of writing procedural code, and the developers are not yet fully comfortable with concepts such as “classes” and “interfaces.” That does not mean that you should not introduce them to concepts such as design patterns.

Just as we should avoid dumbing down our code, we should also avoid “stupid code tricks.” A Stupid Code Trick is when a developer takes advantage of a subtle or little-known feature of a language to accomplish a goal that could be solved with a simpler code construct. C is famous for this kind of code—some wise-guy developer will take advantage of the difference between “++i;” and “i++;”, but in the context of a larger expression which consists primarily of pointer symbols, member signifiers, and parentheticals. Another Stupid Code Trick is to embed a long chain of function calls as arguments to a function. In this case, you end up with code that looks like:

// Stupid Code Trick
ProcessMyData(GetDataToProcess(GetMyIdentity(My.Session.User)), GetMethodToProcess(1, 12, 15), GetOutputFormat(OutputFormat.Default))

This should be re-written as:

var identity = GetMyIdentity(My.Session.User);
var method = GetMethodToProcess(1, 12, 15);
var format = GetOutputFormat(OutputFormat.Default);
var dataToProcess = GetDataToProcess(identity);
ProcessMyData(dataToProcess, method, format);

The temptation to perform a Stupid Code Trick seems to be rooted in the desire to reduce the number of lines of code. We must resist this temptation by remembering that our purpose is to be intelligible, not to reduce the number of lines of code.

Intelligible software should not confront the developer with too many concretes at once without a unifying abstraction. Let’s count the number of concretes in the above code:

var identity = GetMyIdentity(My.Session.User); // 3
var method = GetMethodToProcess(1, 12, 15); // 5 
var format = GetOutputFormat(OutputFormat.Default); // 3 
var dataToProcess = GetDataToProcess(identity); //4 
ProcessMyData(dataToProcess, method, format); // 4

I count one abstraction for the return result of the method, one for the method itself, and one more for each argument to the method. In the original version of this function, there were 20 concretes crammed on one line of code. By breaking up each function call into a separate call, we have a horizontal complexity of 3 to 4 on each line, and the entire algorithm is processed in 5 lines. Both horizontally and vertically the refactored code is within the ability of the mind to grasp at once.

This limit of 4 or 5 concretes means we should try to keep the number of arguments to functions as small as possible. The return result and the method itself give us a concrete complexity of 2; anything over 3 arguments to a function and we are straining our ability to hold everything in our mind at once. The limit also means that we should keep methods as short as possible. It’s better to have many small, well-named functions than to have one massive function that does everything.
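
One way to stay under that limit, sketched here with invented names, is to collapse related arguments into a single abstraction so several concretes travel under one concept:

```csharp
// Three related concretes grouped under one named abstraction.
public sealed class PrintSettings
{
    public int NumberOfCopies { get; set; }
    public int LeftMargin { get; set; }
    public int TopMargin { get; set; }
}

public sealed class ProcessingMethod { }

public class Processor
{
    // Before: GetMethodToProcess(1, 12, 15) passes three anonymous concretes.
    // After: one named argument keeps the call well within the 4-5 limit.
    public ProcessingMethod GetMethodToProcess(PrintSettings settings)
    {
        // ...select a processing method based on the settings...
        return new ProcessingMethod();
    }
}
```

The call site becomes GetMethodToProcess(settings), with a concrete complexity of 3, and the grouped values are now documented by their property names.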

Discoverability

Another feature of the refactored code sample is its discoverability. Discoverability refers to the extent to which the code expresses its intent. I can make the code less discoverable by renaming its methods and arguments as follows:

var i = GetI(My.Session.User); // 3
var m = GetM(1, 12, 15); // 5
var f = GetOF(OutputFormat.Default); // 3
var d = GetData(i); //5
Process(d, m, f); // 4

I could make the code clearer by assigning named variables to the three arguments of GetMethodToProcess():

const int numberOfCopies = 1;
const int leftMargin = 12;
const int topMargin = 15;

var identity = GetMyIdentity(My.Session.User); // 3
var method = GetMethodToProcess(numberOfCopies, leftMargin, topMargin); // 5
var format = GetOutputFormat(OutputFormat.Default); // 3
var dataToProcess = GetDataToProcess(identity); //4
ProcessMyData(dataToProcess, method, format); // 4

Discoverable functions describe what they do, accept only a few arguments, observe the Command-Query-Separation principle (CQS), and avoid temporal coupling.

// Temporal Coupling Example:
var myObject = this.GetBusinessObjectToBeValidated();
var validator = new Validator<MyBusinessObject>();
validator.Value = myObject;
var results = validator.GetValidationResults();

The 3rd line of the above example inhibits discoverability because nothing in the API suggests that you are required to set Value on the validator prior to calling GetValidationResults(). The coupling could be resolved either by passing the object to be validated to the constructor, or (my preference) by passing it as a parameter to the GetValidationResults() method.

// Temporal Coupling Removed:
var myObject = this.GetBusinessObjectToBeValidated();
var validator = new Validator<MyBusinessObject>();
var results = validator.GetValidationResults(myObject);

It is now clear from the method signature of GetValidationResults that a business object is required in order to perform the operation.

Level of Abstraction

Good software design demands an appropriate level of abstraction. The cognitive principle is best expressed by Rand’s Razor: “[abstractions] are not to be multiplied beyond necessity—the corollary of which is: nor are they to be integrated in disregard of necessity.” Consider the following “Hello World” application:

var outputWriter = DiContainer.GetService<IOutputWriter>();
var resourceService = DiContainer.GetService<IResourceService>();
var format = resourceService.GetDefaultResourceFormat();
var arguments = resourceService.GetDefaultResourceArguments();
var message = string.Format(format, arguments);
outputWriter.Write(message);

This application uses a Dependency Injection container to get an outputWriter and a resourceService. It then formats a message for display and passes that message to the outputWriter. This application clearly requires a good bit of configuration in order to work properly. Gah! It violates Rand’s Razor in that it introduces an unnecessary level of abstraction into the application. This application would be better written as:

Console.WriteLine("Hello World!");

In a context richer than Hello World, the code above might be perfectly reasonable, and the level of abstraction it expresses might be necessary. Here is an example of the other side of the same problem. This interface integrates methods that are not cohesive:

public interface ISuperClass
{
    void Login(User user);
    IEnumerable<FileInfo> GetFiles();
    TcpClient CreateTcpClient(string hostName, int port);
}

These methods have nothing to do with one another; integrating them in a single interface serves no purpose. The design principle concomitant to Rand’s Razor is the Single Responsibility Principle.
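
Applying the Single Responsibility Principle, the same members could be split into three cohesive interfaces, so each client depends only on the capability it actually uses (the interface names here are invented for illustration):

```csharp
public interface IAuthenticationService
{
    void Login(User user);
}

public interface IFileSource
{
    IEnumerable<FileInfo> GetFiles();
}

public interface ITcpClientFactory
{
    TcpClient CreateTcpClient(string hostName, int port);
}
```

Nothing about the individual members changes; only the grouping does, and Rand’s Razor is satisfied because each interface now integrates only what necessity demands.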
