There’s no such thing as a “best practice”. At least not in software development. I’ve read countless articles on the “best practices” for database design, software architecture, deployment, API design, security and so on, and it’s pretty clear that (a) no one can agree on what the best practices actually are, and (b) last year’s “best practice” frequently turns into this year’s “antipattern”.

I can, however, understand the motivation for wanting to define “best practices”. We know that there are a lot of pitfalls in programming. It’s frighteningly easy to shoot yourself in the foot. It makes sense to know, before starting out, how best to avoid those mistakes. We’re also in such a rapidly moving industry that we’re frequently stepping out into uncharted territory. Many of the tools, frameworks and technologies I’m using today I knew nothing about just five years ago. How can I be expected to know the best way to use them? I frequently find myself googling for “technology X best practices”.

So-called “best practices” emerge not by being definitively proved to be the “best” way of doing something, but simply by virtue of being better than another way. We tried approach A and it went horribly wrong after two weeks, so we tried approach B and got further. Now approach B is the “best practice”. But before long we’re in a real mess again, and now we’re declaring that approach C is the “best practice”.

A better name for “best practices” would be “better practices”. They emerge as a way of avoiding a particular pitfall. And because of this, it’s very unhelpful to present a set of “best practices” without also explaining what problem each practice is intended to protect us from.

When we understand what problem a particular best practice is attempting to save us from, we can make an informed decision about whether or not that “best practice” is relevant in our case. Maybe the problem it protects us from is a performance issue at massive scale. That may not be something that needs to concern us on a project that will only ever deal with small amounts of data.

You might declare that a best practice is “create a NuGet package for every shared assembly”. Or “only use immutable classes”. Or “never write any code without first writing a failing test for it”. These might be excellent pieces of guidance that can greatly improve your software. But blindly followed, without understanding the reasoning behind them, they could actually make your codebase worse.
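To make that concrete, here’s a minimal sketch (a hypothetical example, not taken from the article) of what the “only use immutable classes” guidance might look like in C#: all state is set once in the constructor, exposed through get-only properties, and “changing” an object means creating a new instance rather than mutating the existing one.

public class Order
{
    // Get-only auto-properties: values can only be assigned in the constructor
    public string CustomerId { get; }
    public decimal Total { get; }

    public Order(string customerId, decimal total)
    {
        CustomerId = customerId;
        Total = total;
    }

    // "Modifying" an order produces a new instance; the original is untouched
    public Order WithTotal(decimal newTotal) => new Order(CustomerId, newTotal);
}

The pitfall this guards against is unexpected shared mutable state; the price is extra allocation and ceremony. Whether that trade-off is worth it depends entirely on the problems your project actually faces.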

Most “best practices” are effective in saving you from a particular type of problem. But often they simply trade off one type of problem for another. Consider a monolithic architecture versus a distributed architecture. Both present very different problems and challenges to overcome. You need to decide which problems you can live with, and which you want to avoid at all costs.

In summary, even though I once created a Pluralsight course with “best practices” in the title, I don’t really think “best practices” exist. At best they are a way of helping you avoid some common pitfalls. But don’t blindly apply them all. Understand what they are protecting you from, and you will be able to make an informed decision about whether they apply to your project. You may even be able to come up with “better practices” of your own that meet the specific needs and constraints of your project.

Comments

Comment by secfree

Great. So "Best Practice" should have a version number and release notes.

Comment by Mark Heath

that's a good way of thinking about it.
