Thanks to a tweet from Bob Martin, I stumbled across a fascinating talk by Sarah Mei entitled “Is Your Code Too SOLID?”. In the talk she distinguishes between the concepts of “strategy” and “tactics”: although the “SOLID principles” are a good “strategy” for making our codebases more maintainable (i.e. if our code is SOLID then it is easy to change), they don’t provide concrete “tactics” for how to actually implement that strategy.

In other words, what practical advice can we give developers to enable them to write SOLID code, or (more to the point) move existing code in the right direction?

In response to this question, Sarah offers an acronym of her own, “STABLE”, giving six practical tactics for helping developers implement the strategy.

I must confess that I tend to have low expectations of acronyms: they often suffer from awkwardly named points, redundancy, or key omissions. But as she worked through each of the six points, I was really impressed with how well STABLE hangs together, and to be honest, I was left wondering why this acronym hasn’t caught on (at least in the circles I move in). It certainly deserves to be more widely known, as it provides a very practical and concrete set of talking points for teams looking for ways to improve their codebase.

So let’s look at each of the six points, and I’ll give my own take on them.

S = Smell your code

The first tactic is simply to learn to identify “code smells” in your code. Your team needs to be able to identify what’s wrong with a class or method, and have a common vocabulary for expressing it. This might require a regular “lunch and learn” session where different code smells are discussed, with explanations of why such code causes problems.

What I love about this tactic is how modest its goals are. It doesn’t ask us to fix anything (yet); it simply asks us to learn to see problems. If developers get a better sense of (code) smell, not only are they able to spot problems with existing code, but they will also hopefully stop in their tracks when they realise they are introducing the same smells into new code they write.
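To make this concrete, here’s a tiny, entirely invented Python snippet, annotated with the sort of shared vocabulary a team might use to name its smells:

```python
# A deliberately smelly (hypothetical) function, with comments
# naming each problem in standard "code smell" vocabulary.

def proc(d, f):                  # uncommunicative names: what are proc, d and f?
    t = 0
    for x in d:
        if x[3] > 100:           # magic numbers: what do 3 and 100 mean?
            t += x[3] * 0.175    # magic number: what is 0.175?
    if f:                        # flag argument: a boolean that switches behaviour
        print(t)                 # mixed responsibilities: calculation and I/O together
    return t
```

None of this needs fixing yet; the point is simply that everyone on the team can name what’s wrong.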

T = Tiny problems first

The second tactic is to tackle “tiny” problems first. Once you’ve identified a bunch of code smells, some of them may require wholesale, wide-ranging rework of multiple classes to resolve. Whilst there is a time and a place for doing that, it can often result in inadvertently breaking working code, and give a bad name to any future attempts at “refactoring” the code.

The “tiny problems first” tactic encourages us to start with the simplest changes that move the codebase in the right direction. That might just be giving a variable a meaningful name, or extracting a block of code into a well-named function. Again, I love how modest this tactic is: it gives every developer, no matter how junior they are, a realistic path towards improving the quality of the overall codebase.
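As an illustration (the VAT example and all the names here are my own invention), a couple of “tiny” improvements in Python might look like this:

```python
# Before: a cryptic calculation buried inside a larger method.
#     t = p * 0.175 if p > 100 else 0

# After two tiny, low-risk changes: the magic numbers get
# meaningful names, and the logic moves into a well-named function.
VAT_RATE = 0.175
VAT_THRESHOLD = 100

def vat_due(price):
    """Return the VAT owed on a price, or zero below the threshold."""
    return price * VAT_RATE if price > VAT_THRESHOLD else 0
```

Neither change alters behaviour, but each one moves the codebase a small step in the right direction.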

Obviously at some point we will need to address some of the more deep-rooted problems in our system. Sarah points out that you can “see the large problems better by clearing away the small problems obscuring them”.  But there’s usually something else that needs to be done before we can tackle the larger problems, and that’s where the third tactic comes in …

A = Augment your tests

The idea behind “refactoring” is that you improve the structure or design of existing code without modifying its behaviour. A good suite of unit tests gives you the freedom to do this with confidence, knowing that if all the tests pass after the refactoring, you’ve not broken anything.

The reality in many software projects is unfortunately a long way from this ideal. If your automated tests (unit or integration) only cover a small portion of the functionality, then any kind of restructuring of the code is inherently risky. It means you need to perform costly manual testing every time you change anything.

So this tactic is about adding tests that will provide a safety net for the changes you need to make. Sarah suggests focusing on adding integration tests at one level higher than the class you’re working on. She says “test behaviour, not implementation. That’s why you go one level up. You need tests that describe the behaviour you want to keep”. Often in a legacy codebase there are far too many fragile tests that are tightly coupled to implementation details.
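As a sketch of that idea (the Checkout and DiscountCalculator classes are invented for illustration), the test below goes one level up and pins down observable behaviour, rather than asserting on the calculator’s internals:

```python
import unittest

# Hypothetical production code: a Checkout that delegates to a
# DiscountCalculator we might want to refactor or replace.
class DiscountCalculator:
    def discount_for(self, total):
        return total * 0.1 if total >= 100 else 0

class Checkout:
    def __init__(self):
        self._calculator = DiscountCalculator()

    def total_payable(self, prices):
        total = sum(prices)
        return total - self._calculator.discount_for(total)

# These tests describe behaviour we want to keep, one level up from
# DiscountCalculator, so they survive changes to its implementation.
class CheckoutBehaviourTests(unittest.TestCase):
    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertEqual(Checkout().total_payable([60, 40]), 90)

    def test_smaller_orders_pay_full_price(self):
        self.assertEqual(Checkout().total_payable([30]), 30)

if __name__ == "__main__":
    unittest.main()
```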

Again, I like this tactic because it is realistic and achievable – we all ought to be able to find time to add at least one test to the code we’re working on, and if we can keep the focus of those tests on behaviour rather than implementation, they will provide a fantastic safety net for us to address the larger problems in our code.

B = Back up (when it’s useful)

This tactic states that “when the code has an abstraction in it that is no longer serving you well, sometimes the most useful thing to do is to ‘rewind’ the code into a more procedural state, put all the duplication back, and start again”.

This is the boldest of the tactics so far, and may feel like a backwards step, but I think it’s very helpful to at least put this on the table as one of the options at our disposal. As Sarah points out, we can easily get caught by the sunk cost fallacy: “Don’t forge ahead with a set of objects that don’t even fit now, let alone in the future”.

By clearing these poorly conceived abstractions from our codebase, we leave ourselves space to view the problem from a fresh perspective and come up with new abstractions that better fit our business requirements. Remember, we tend to make a lot of our architectural decisions at the start of a project, which is actually when we have the least understanding of what the system needs to do. So it shouldn’t surprise us if we took some wrong steps along the way, and we shouldn’t be afraid to say “we got this wrong, let’s undo it”.
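A minimal sketch of what this “rewind” might look like in Python (the report-formatting example is entirely my own invention):

```python
# Before: a hypothetical ReportFormatter class hierarchy that no
# longer fits the problem.
#     class ReportFormatter(ABC): ...
#     class CsvReportFormatter(ReportFormatter): ...
#     class TsvReportFormatter(ReportFormatter): ...

# After backing up: the abstraction is inlined into two plain
# procedural functions, with the duplication deliberately visible
# again, ready for a better abstraction to emerge later.
def format_report_csv(rows):
    return "\n".join(",".join(str(value) for value in row) for row in rows)

def format_report_tsv(rows):
    return "\n".join("\t".join(str(value) for value in row) for row in rows)
```

The duplication is a deliberate, temporary cost: it keeps our options open until we understand which abstraction the requirements really call for.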

L = Leave it better than you found it

The fifth tactic is often known as the “boy scout rule”: leave the campsite in a better state than you found it. Applied to code, it means that whenever I work on a method, I’ll attempt to make minor improvements to it, often with small refactorings like renaming things.

Now, this tactic at first seemed to me to be a restatement of tactic two (“Tiny Problems First”). Perhaps like many acronyms, STABLE suffers from a bit of redundancy to create a contrived word out of the points.

But on reflection, I think there are two separate questions being answered here.

Tactic two asks “what order should I tackle problems in?”, and answers, “solve the tiny problems first”.

Tactic five asks “when should I tackle these problems?” and answers, “do it when you’re already working on that area of code”.

I often tell my teams that the best time to make improvements to a class or method is when you’re actively working on that code, maybe fixing a bug or adding a new feature. You’ve probably spent a lot of time reading and understanding the code. You have a good grasp on how it currently works and what it does. You probably also have some opinions on how the code could be improved (in other words you’ve already done tactic 1 – you’ve smelt the code and not appreciated its fragrance).

The temptation at this point is simply to write a “rant” email about how bad this code is, and say that “we should plan to rewrite it in the future”. Now of course you likely don’t have time to fix everything, but even a small investment of additional time before you move on to the next thing would allow you to fix some tiny problems (tactic 2), augment the tests (tactic 3) and leave the code better than you found it (tactic 5).

E = Expect good reasons

The video leaves us on a bit of a cliff-hanger here. The audio breaks off at this point, and so although we can see the title of the sixth point, we are left guessing what “expect good reasons” might mean! Thankfully, Sarah’s slides are available on Speaker Deck, and contain a full transcript.

Tactic six asks us to “assume past developers had good reasons to write the code they did”. This complements tactic one (just as tactics two and five complement each other, giving the set a neat chiastic structure). Often when we smell problems in the existing code, our initial reaction is to criticise the original developer: “What incompetence!” “What were they thinking?”

But as I argued in my technical debt course on Pluralsight, “the blame game” is counter-productive and can result in a toxic atmosphere. As a team we should be taking collective responsibility for the quality of our code and focusing on how we can move in the right direction, rather than recriminating about how we got in this mess.

We need to start from the assumption that all the developers on the team are genuinely trying their best, and if the code they produce is falling short, it highlights the need for training, code reviews, mentoring and pair programming to help the team move towards a shared understanding of the sort of code we want to write going forwards.

No need to stop the world

Sarah finishes her talk by pointing out that these tactics allow you to make real progress over time without having to perform a “stop the world” refactoring, where feature development has to stop in order to sort out the mess in the code. This is important, as the business can very quickly lose patience with being asked to put feature development on hold so you can repay “technical debt”.

So thank you Sarah for this insightful talk and very helpful set of tactics. I’m actually planning to present an updated version of my “technical debt” talk to some user groups over the next few months (let me know if you’re in the South of England and would like me to visit your group). In the talk I place a strong emphasis on “practical techniques for repaying technical debt”, and the STABLE tactics provide a fresh perspective which I look forward to sharing both with user groups and the developers I work with.

Want to learn more about the problem of technical debt and how you can reduce it? Be sure to check out my Pluralsight course Understanding and Eliminating Technical Debt.