The Cost of Complexity and Coupling
Five weeks ago I was given the task of adding custom video playback to a large C# application. The video format in question is a proprietary codec wrapped in a custom file format, and it can only be played back using a third-party ActiveX control. There was no documentation for the codec, the file format or the ActiveX control. We did, however, have a working application that could play back these videos. All I needed to do was extract the bits we needed as a reusable component.
Sounds simple, right? The problem was that the working application was huge and did far more than simply play back video. There were dependencies on over 100 .NET assemblies, plus a few COM objects thrown in for good measure. The Visual Studio solution itself contained 20 projects. My task was to understand how it all worked so I could reuse the bits we needed.
Approach 1 - Assembly Reuse
My first approach was to identify the classes we could reuse, copy their assemblies to my test application, and reference them as dependencies. This seemed a good idea until I discovered that the tight coupling forced me to copy the majority of the 100 dependent assemblies, and to construct instances of classes that had very little to do with what I was trying to achieve. I spent a long time (probably too long) persevering with this approach before I finally admitted defeat.
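To make the coupling problem concrete, here is a minimal sketch. Every name in it is hypothetical (invented for illustration, not taken from the real application), but it shows the shape of the constructor chains I kept running into, and the interface-based alternative that would have made reuse painless:

```csharp
// All names are hypothetical - invented to illustrate the shape of the problem.
public class ProjectManager { /* lives in assembly A */ }
public class LicensingService { /* lives in assembly B */ }
public class TelemetryLogger { /* lives in assembly C */ }

// The class I actually wanted to reuse...
public class VideoPane
{
    // ...but its constructor demands three unrelated subsystems, so
    // referencing VideoPane means referencing assemblies A, B and C,
    // plus everything *they* reference, and so on.
    public VideoPane(ProjectManager project,
                     LicensingService licensing,
                     TelemetryLogger telemetry)
    {
    }
}

// A loosely coupled version would depend only on a narrow abstraction:
public interface IVideoSource
{
    System.IO.Stream OpenStream();
}

public class ReusableVideoPane
{
    private readonly IVideoSource source;

    public ReusableVideoPane(IVideoSource source)
    {
        this.source = source;
    }
}
```

With the interface version, a test application could supply a ten-line stub implementation of IVideoSource instead of referencing three more assemblies.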
Approach 2 - Refactor
My second approach was to take a copy of the source for the entire application and gradually delete everything that wasn't needed, until I was left with just the bits necessary for my needs. This too was a dead end. First, there was far too much to delete; second, the tight coupling meant that removing one class would break all kinds of others, with cascading side-effects, until I inadvertently broke the core functionality.
Approach 3 - Code Reuse
My third approach was to copy the source code of the classes that seemed to be needed over to my test project. That way I could edit the code, commenting out properties or methods that introduced unnecessary dependencies. Even so, the tight coupling still dragged a lot of junk across with it. In the end my test application contained about 200 classes. And when I finally finished putting all the pieces together, it didn't work. Four weeks gone, and no closer to my reusable component.
Approach 4 - DIY
My final approach was to go to the lowest level possible. I would use a hex editor and really understand how the file format worked. And I would examine the interface of the ActiveX control and call it directly rather than through several layers of wrapper classes. This meant I would need to re-implement a lot of low-level features myself, including the complex task of synchronization and seeking. The existing application, which I was getting to know quite well by this point, would simply act as a reference. It was another week of work before I got the breakthrough I was looking for. Finally, five weeks after starting, I could display video in a window in my test application.
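The header layout below is entirely hypothetical (the real proprietary format is not mine to publish, and its details were different), but it illustrates the sort of BinaryReader code that the hex editor detective work leads to:

```csharp
using System;
using System.IO;

// A hypothetical header layout for a custom video container.
// Working a real format out in a hex editor produces code of this shape.
class VideoFileHeader
{
    public uint Magic;        // fixed signature identifying the format
    public ushort Version;
    public uint FrameCount;
    public uint FrameRate;    // frames per second
    public long IndexOffset;  // where the seek index lives in the file

    public static VideoFileHeader Read(string path)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            var header = new VideoFileHeader
            {
                Magic = reader.ReadUInt32(),
                Version = reader.ReadUInt16(),
                FrameCount = reader.ReadUInt32(),
                FrameRate = reader.ReadUInt32(),
                IndexOffset = reader.ReadInt64()
            };
            if (header.Magic != 0x56494446) // "VIDF" - an invented signature
                throw new InvalidDataException("Not a recognised video file");
            return header;
        }
    }
}
```

Once the header and seek index are understood at this level, the ActiveX control can be fed raw frames directly, with no wrapper classes in between.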
Technical Debt
Why am I telling this story? Because I think it demonstrates the concept of technical debt. The working application did what it was supposed to do and was bug-free (at least in terms of the feature I was interested in). But its complexity and coupling meant that I spent weeks trying to understand it, and found it almost impossible to extract reusable components from it.
And this is exactly what technical debt means. Sure, you can do stuff the "quick way" rather than the right way, and it may allow you to meet project deadlines. But if the application is going to be developed further, sooner or later you will have to pay for your shortcuts. And the way that usually plays out is that a future feature that was supposed to take a couple of weeks to implement ends up taking several months.
What happens next is that the managers hold an inquest. How on earth did this feature overrun by so much? Was the developer stupid, or just lazy? Or is he useless at estimating timescales? And blaming architectural mistakes made by previous developers sounds like passing the buck. After all, we're programmers. We're supposed to be able to understand complicated things and find a way to get new features in.
How Big is Your Debt?
Technical debt is real, it costs companies a lot of wasted time, and both developers and managers need to take it seriously. But how can it be measured? If we could find some way of quantifying it, we could observe whether we are accumulating debt or slowly but surely paying it off.
For most of my development career there were only two metrics that counted: the number of open bugs, and the number of support cases raised. But these measure existing problems; technical debt is a problem waiting to happen. Fortunately, a growing number of tools can put some numbers against the quality of source code and its overall design. Here are a few I have tried, or want to try...
- Code complexity metrics (SourceMonitor)
- Coding standards adherence (FxCop)
- Static code analysis (StyleCop)
- Dependency and coupling metrics (NDepend) - see the sketch after this list
- Code Coverage of Unit Tests (NCover)
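Even without a full tool like NDepend, a few lines of reflection give a crude feel for the dependency problem. The sketch below is my own illustration rather than anything the tools above do: it walks the referenced-assembly graph from a root assembly and counts how many assemblies it transitively drags in - the number that ballooned past 100 in my story. Note that GetReferencedAssemblies only reports compile-time references, and it will not see COM dependencies at all.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

class DependencyCounter
{
    // Recursively collect the closure of referenced assemblies.
    // Crude by design: it only sees managed, resolvable references.
    static void Collect(Assembly assembly, HashSet<string> seen)
    {
        foreach (AssemblyName reference in assembly.GetReferencedAssemblies())
        {
            if (!seen.Add(reference.FullName))
                continue; // already counted this assembly

            try
            {
                Collect(Assembly.Load(reference), seen);
            }
            catch (Exception)
            {
                // assembly not resolvable from here - skip it
            }
        }
    }

    static void Main(string[] args)
    {
        var seen = new HashSet<string>();
        Collect(Assembly.LoadFrom(args[0]), seen);
        Console.WriteLine("{0} references {1} assemblies (transitively)",
                          args[0], seen.Count);
    }
}
```

Run it against your application's entry assembly and track the count from build to build; a steadily climbing number is one cheap early warning that debt is accumulating.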
I would be very interested to hear of any other suggestions or tools for measuring "technical debt".