Over the last few years, most Decembers I have attempted the amazing "Advent of Code" challenge. This daily set of puzzles is a great way to sharpen your coding skills, or perhaps learn a new language. This year I sadly didn't have the time to participate, although I did help out a few friends with some of the challenges.
One of the interesting questions that Advent of Code raises about coding in general is: what should we be trying to optimize for? Is it purely about completing the challenge in as short a time as possible? Or should we try to complete the challenge in the fewest lines of code? Maybe the best solution is the one that solves the problem the fastest, or the one that has the most elegant design? Or perhaps it is the solution that is most extensible or reusable?
Clearly, the criteria you consider most important will have a big impact on how you tackle a programming task. And very often these goals compete with each other, making it important to be clear in your mind about which is your top priority.
In this post, let's briefly consider several possible goals that we might attempt to optimize our development process for.
1. Speed of coding
Although Advent of Code is an artificial challenge, where you're competing against other coders to gain a place on the global leaderboard (something I only achieved once as you have to get up very early even to stand a chance!), the pressure to get a coding task completed as quickly as possible can be very strong in a business context.
Clearly, getting code written as quickly as possible is very valuable to a business. It means you can be quick to market and move on to tackle the next priority. The big downside is that prioritizing speed above all else is a recipe for introducing Technical Debt, which can dramatically slow down future development.
For this reason, getting things done quickly should not be considered the highest priority for development tasks (with the possible exception of fixing a critical bug in production).
2. Terseness of code
You might have heard of "code golf" which is a challenge some coders enjoy where they try to solve a given problem in the fewest characters. The solutions that expert code golfers come up with are very often incredibly impressive and compact compared to the way that a regular coder might attempt the same task. But the trade-off is often readability - despite being a small amount of code, it can be almost incomprehensible to other developers.
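To illustrate the trade-off, here is a deliberately golfed one-liner alongside a conventional version of the same task; the task (counting vowels) and both names are invented purely for illustration:

```python
# Golf-style: short, but opaque at a glance
f = lambda s: sum(map(s.lower().count, "aeiou"))

# Conventional: longer, but the intent is immediately clear
def count_vowels(text: str) -> int:
    """Count the vowels (a, e, i, o, u) in the given text."""
    return sum(1 for ch in text.lower() if ch in "aeiou")
```

Both return the same answer; only one of them can be understood six months later without squinting.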
3. Readability (clean code)
At the opposite end of the spectrum is the desire to maximize "readability" (sometimes referred to as "clean code"). With clean code, you make it your goal that the next developer to read the code (which might be you in a few years' time) has the best possible chance of understanding it.
This means ensuring that methods and variables are named well, comments are provided where they can add important context, and that methods are kept short and simple.
Generally speaking, I am a big fan of aiming for clean code, although I have seen it backfire, with developers writing dozens of simple "clean" methods spread across multiple classes, for something that would arguably have been easier to read had it been kept together in a single method.
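As a small example of what good naming buys you, compare these two equivalent functions (the domain and all the names here are made up for illustration):

```python
# Before: cryptic names force the reader to reverse-engineer the intent
def p(d, t):
    return [x for x in d if x[1] > t]

# After: the names alone explain what the function does
def orders_over_minimum(orders, minimum_total):
    """Return the (id, total) orders whose total exceeds minimum_total."""
    return [order for order in orders if order[1] > minimum_total]
```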
4. Speed of execution (performance)
The Advent of Code challenges often include a performance element to them. Often part 1 of the daily challenge can be solved with a naive brute force method, but part 2 requires you to optimize your algorithm in order to solve it in a reasonable amount of time.
This mirrors something many developers have experienced: the small amount of test data you used to check your work performs perfectly adequately, but as the load increases in the real world, an algorithm that worked just fine for 100 items becomes unusably slow for 10,000.
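A classic example of this scaling cliff (the task here is invented, though it is in the spirit of several past puzzles) is checking whether any two numbers sum to a target. The quadratic version is fine for 100 items but crawls at 10,000; trading a little memory for a set keeps it linear:

```python
def has_pair_with_sum_naive(numbers, target):
    # O(n^2): compares every pair; fine for small inputs only
    return any(a + b == target
               for i, a in enumerate(numbers)
               for b in numbers[i + 1:])

def has_pair_with_sum_fast(numbers, target):
    # O(n): remember values already seen and look up the complement
    seen = set()
    for n in numbers:
        if target - n in seen:
            return True
        seen.add(n)
    return False
```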
Of course, the best way to approach performance is to actually measure it. As developers we often assume we can correctly guess what the performance of our code will be, where the bottleneck is, or what needs to be done to improve it. In reality this often leads to pointless over-complexity, optimizing something that didn't need it, while completely missing where the real performance issue lies.
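In Python, for example, the standard library's timeit module makes a quick measurement cheap. The two functions below are invented stand-ins for "the code you suspect is slow":

```python
import timeit

def sum_of_squares_loop(n):
    # Candidate implementation #1: explicit loop
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_builtin(n):
    # Candidate implementation #2: generator expression
    return sum(i * i for i in range(n))

# Measure both rather than guessing which is faster
for fn in (sum_of_squares_loop, sum_of_squares_builtin):
    elapsed = timeit.timeit(lambda: fn(100_000), number=10)
    print(f"{fn.__name__}: {elapsed:.3f}s")
```

A few lines like this, run on realistically sized data, will settle an argument that could otherwise consume a whole design meeting.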
I've noticed that it's often several years into the development of a large enterprise system before performance becomes a major focus. In the early stages, your focus is on gaining market share, and your number of end users may be small. But once you have become an established market leader, the focus switches to being as profitable as possible, which performance optimization can help with by driving down hosting costs. And the more customers you have, the more performance bottlenecks you are likely to have to find and deal with.
5. Extensibility
Each Advent of Code daily challenge has two parts. Part 1 introduces the problem, but part 2 changes the requirements in some way. This rewards you for solving part 1 in an extensible way. If you did a good job of it, solving part 2 is often straightforward.
However, there's a catch. You don't know in what way the problem will change for part 2. Often I find myself trying to predict what that will be, which is great if you guess right. But just like the real world, requirements often change in unexpected ways, and the extensibility you built into part 1 can end up being a pointless waste of effort.
This is one of the hardest balancing acts to get right in software development. Often you do have a good idea of the direction in which your code is likely to need to be extended, so it makes a lot of sense to factor that into your design. But beware of the pitfalls: it's possible that the extensibility adds unnecessary complexity to your code and still isn't sufficient to meet future requirements anyway.
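One lightweight way to leave an extension point without much extra complexity is to accept the part that might change as a parameter. The fuel-style rule below is invented for illustration:

```python
def total_fuel(masses, fuel_rule=lambda mass: mass // 3 - 2):
    """Sum a per-module fuel cost; a changed rule can be swapped in."""
    return sum(fuel_rule(mass) for mass in masses)

# Part 1 uses the default rule...
part1 = total_fuel([12, 14])

# ...and if part 2 changes the formula, only the rule changes
part2 = total_fuel([12, 14], fuel_rule=lambda mass: mass // 2 - 1)
```

The gamble, of course, is that part 2 changes something other than the rule you parameterized.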
6. Reusability
A closely related concern is reusability, and again Advent of Code often rewards you for making generic helper classes that can be applied to multiple puzzles. I found this with commonly needed classes like 2D or 3D grids, and helpers for managing coordinates.
By building up a library of genuinely useful utilities, you can greatly speed up future development. You'll also benefit from the code that consumes those utilities being more readable, as the implementation details are hidden away.
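A minimal sketch of the kind of grid helper I mean (simplified, and assuming dict-backed storage; the class and method names are my own invention):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: int
    y: int

    def neighbours(self):
        """The four orthogonally adjacent points."""
        return [Point(self.x + dx, self.y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

class Grid:
    """A simple 2D grid of characters, keyed by Point."""
    def __init__(self, lines):
        self.cells = {Point(x, y): ch
                      for y, line in enumerate(lines)
                      for x, ch in enumerate(line)}

    def get(self, point, default=None):
        return self.cells.get(point, default)
```

Because puzzle input is usually a list of strings, a helper like this makes the solver itself read at the level of the problem rather than the level of index arithmetic.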
And on most enterprise projects I've worked on, it's not uncommon for a set of "utils" to slowly grow, accumulating all kinds of helpful classes. There are some pitfalls, though. One is that these utility libraries, whilst created with the best of intentions, end up under-used because other developers are not aware of their existence, or because they have some baked-in opinions that turn out to make them less general purpose than they need to be. Or they become so customizable (and therefore so complex to use) that it's easier to just implement what you need yourself.
So I would beware the temptation to turn everything you write into some kind of generic reusable class. I generally wait until I genuinely need the same class in at least two other places before promoting it into a utilities library.
7. Test coverage
One of the nice things about the Advent of Code challenges is that you are given test input for each puzzle that you can use to validate your solution before attempting it on the real input. This encourages a "test-driven" approach where you write your tests first, before writing the code, resulting in a test suite that covers 100% of your code. Although at first this approach seems like it would take longer, it can actually be a huge timesaver, thanks to the bugs it catches and the confidence it gives you to refactor your code.
Taking a test-driven approach and aiming for 100% code coverage is something that would benefit every software project. Unfortunately, it is possible to do it badly. It's not uncommon to see overly complicated unit tests filled with hundreds of lines of mock setup, resulting in fragile tests that offer very little value but require a lot of maintenance. And while I think it should always be possible to write testable code without compromising its design, I have seen cases where the accusation of "test-induced design damage" is valid.
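In practice this workflow looks like writing the test against the published sample input before the solver exists; the puzzle below is invented and deliberately trivial:

```python
def solve(puzzle_input: str) -> int:
    """Sum the integers, one per line, in the puzzle input."""
    return sum(int(line) for line in puzzle_input.splitlines())

def test_solve_matches_sample():
    # The "sample" here stands in for the worked example a puzzle provides
    sample = "1\n2\n3"
    assert solve(sample) == 6
```

Only once the test passes do you run `solve` against the real input, which is exactly the safety net the puzzles nudge you towards.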
8. Completeness (handling every possible edge case)
The way that the Advent of Code challenges are written, you get your own personalized input that you have to solve for. This means there may be edge cases that could potentially appear in your input data, but actually aren't there. Should you create a complete solution that would work for every possible input, or is it OK to make a custom solver that works only for your specific input?
The same challenge exists in a lot of real-world coding scenarios. We often receive input data from external systems, and could theoretically receive anything. But writing code that can cope with every possible input often requires us to expend a lot of time coding for and testing scenarios that may never actually occur in the real world.
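For instance, a deliberately strict parser (the function name and format are invented) makes unexpected input fail loudly rather than silently producing a wrong answer. Whether that strictness is worth the effort depends entirely on what the upstream system can actually send:

```python
def parse_measurement(raw: str) -> int:
    """Parse one line of input, rejecting anything that isn't an integer."""
    text = raw.strip()
    if not text or not text.lstrip("-").isdigit():
        raise ValueError(f"unexpected input: {raw!r}")
    return int(text)
```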
This is one of the reasons it is extremely important that developers and testers have as much insight as possible into the real-world data that is used in production (obviously security concerns may mean this is not always accessible). That way, testing and development effort can be focused on what is actually going to be received in the real world, rather than on what is theoretically possible.
Although we've looked at multiple competing concerns that the Advent of Code challenges make us think about, there are even more that crop up in business applications. Cross-cutting concerns like security and observability also need to be factored into everything you develop, and can sometimes have a big impact on the design, performance and overall time to market.
Which is most important?
It's worth recognizing that (with the exception of terseness of code), all of the goals I've mentioned above are worth pursuing. And it's also worth recognizing that it is impossible to give them all top priority.
For any given development task, you ought to be clear about which of these criteria are considered most important. I'm not going to attempt to prioritize the list other than to say that normally getting good test coverage and writing readable code are going to be much higher up the list than simply getting things done as soon as possible.
But I'd love to hear your thoughts on this subject, so feel free to add a comment and let me know what you think is most important.