Conventions often function as a substitute for a real grasp of design principles. For example, I’ve seen teams get up in arms about using Hungarian Notation but then write meaningless function names like “DoIt()” or “Process()”, or meaningless variable names like “intVar” or “intX”. These same teams often have little or no understanding of design principles, patterns, or concepts. I have met so many developers who have never heard of YAGNI, LRM, SRP, LSP, or Dependency Injection. How many developers do you know who have heard of patterns such as Singleton, Adapter, Decorator, Factory, Monostate, Proxy, Composite, or Command?
Conventions are often invoked as a means of aiding code clarity. And they do help, but they are not the only or even the best means of attaining that goal. We need to learn to name classes, functions, and variables in meaningful ways; to write small functions that do one tiny thing, and do it well; and to write small, single-purpose classes. All of the conventions in the world will not help you maintain a code-base if developers are free to write 3000-line methods with 27 layers of nested conditionals and loops (and yes, I have seen this). Read “Clean Code” by Uncle Bob.
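To make that concrete, here is a minimal Java sketch; the class and method names (Processor, InvoiceCalculator, totalWithTax, and so on) are invented for illustration, not taken from any real codebase. No naming convention is needed to make the second version clear; small, well-named methods do the work.

    // Unclear: a catch-all name and variables that say nothing about intent.
    class Processor {
        double doIt(double[] d) {
            double x = 0;
            for (double v : d) {
                x += v;
            }
            return x * 1.2;
        }
    }

    // Clearer: small, single-purpose methods whose names describe what they do.
    class InvoiceCalculator {
        private static final double TAX_RATE = 0.2;

        double totalWithTax(double[] lineItemAmounts) {
            return applyTax(sum(lineItemAmounts));
        }

        private double sum(double[] amounts) {
            double total = 0;
            for (double amount : amounts) {
                total += amount;
            }
            return total;
        }

        private double applyTax(double subtotal) {
            return subtotal * (1 + TAX_RATE);
        }
    }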
Design patterns, techniques, and processes are much harder to learn and master than a concrete list of do’s and don’ts (which is what most conventions amount to). Good software design is like playing a musical instrument: it requires dedication, repetition, and practice to learn and master.
Conventions do have a place, but they are usually best taken from the provider of the tools you are using. For example, Microsoft publishes a conventions document for the .NET Framework that applies to all .NET languages (thankfully, it eschews Hungarian Notation). Those conventions are worth learning simply in order to be a good developer citizen. My recommendation is to use the conventions established by the tool provider, or the dominant conventions within the community around the tool, and not to waste company time and money adding your own tweaks and changes to the conventions document. Speaking personally, I would much rather maintain a code-base that is written cleanly and with good design, even if it violates every convention you could name, than one that follows every convention but is poorly designed.
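To make that recommendation concrete, here is a small Java sketch that simply follows the standard, widely published Java naming conventions (the Java analogue of Microsoft’s .NET guidelines); the class and member names themselves are invented for illustration.

    // Standard Java conventions: classes in UpperCamelCase, methods and variables
    // in lowerCamelCase, constants in UPPER_SNAKE_CASE, and no Hungarian prefixes.
    public class OrderRepository {

        public static final int DEFAULT_PAGE_SIZE = 50;

        public int countOrdersPlacedSince(java.time.LocalDate cutoffDate) {
            // The name states the intent; no "intCount" or "dtCutoff" prefixes needed.
            int ordersPlacedSinceCutoff = 0;
            // ... a real implementation would query the data store here ...
            return ordersPlacedSinceCutoff;
        }
    }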
Conventions are not a bad thing. The problem with conventions is that they are so often discussed in place of design principles. Conventions without design principles aid nothing. Without proper focus on good design, conventions can even hurt software quality because they give developers and managers the illusion of being thoughtful and disciplined about what they are doing.
Conventions can be useful in another way. There is a good bit of discussion going on right now about Convention over Configuration. CoC is a design shortcut: components of a software system interact according to sensible defaults, and explicit configuration is only added when a component deviates from the norm. CoC actually bolsters my point here, because it only tends to arise as a discussion point in systems that are already using good design patterns and practices.
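As a minimal sketch of the idea (the @TableName annotation and TableNameResolver class below are invented for illustration; real frameworks such as Rails or Hibernate have their own mechanisms), the table name is derived from the class name by default, and explicit configuration appears only where a class deviates from that convention.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical annotation used only when a class deviates from the convention.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface TableName {
        String value();
    }

    class Customer {}                      // follows the convention: maps to "customer"

    @TableName("legacy_order_header")      // deviates, so it declares its mapping explicitly
    class Order {}

    class TableNameResolver {
        // Convention over Configuration: fall back to a name derived from the class,
        // and honor explicit configuration only when it is present.
        static String tableNameFor(Class<?> entity) {
            TableName override = entity.getAnnotation(TableName.class);
            if (override != null) {
                return override.value();
            }
            return entity.getSimpleName().toLowerCase();
        }

        public static void main(String[] args) {
            System.out.println(tableNameFor(Customer.class)); // prints "customer"
            System.out.println(tableNameFor(Order.class));    // prints "legacy_order_header"
        }
    }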
In my experience, I have seen conventions cause more problems than they solve. The temptation, as Chris mentioned, is to put too much focus on them. Many conventions are obsolete with today’s strongly typed languages like Java, C#, and VB.NET, yet many developers still cling to them. I’ll put in my two cents about Hungarian Notation since Chris mentioned it. Someone who strictly uses Hungarian Notation with no regard for meaningful names might name a variable intD. In most languages, telling the data type of a variable is trivial, so the “int” part is pretty much useless, and the “D” may have meant something to the original developer but won’t mean anything to anyone else. Why not name it elapsedTimeInDays? When I name variables, I don’t think of any convention; I try to think of a name that conveys meaning even to a non-programmer.
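Here is a tiny Java sketch of that point (the surrounding class and values are invented): the declaration already states the type, so the Hungarian prefix adds nothing, while the descriptive name carries the meaning the prefix cannot.

    public class NamingExample {
        public static void main(String[] args) {
            int intD = 42;                // Hungarian style: repeats the type, hides the meaning
            int elapsedTimeInDays = 42;   // the name tells even a non-programmer what it holds

            System.out.println("Days elapsed: " + elapsedTimeInDays);
        }
    }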