If you work on a commercially successful project, the chances are you've experienced the pain of technical debt. As a codebase grows, you inevitably find that some of the choices made in the past are now slowing productivity.

It's a problem I've given a lot of thought to over the years. I created a Pluralsight course on "Understanding and Eliminating Technical Debt" and often speak on the subject. In fact, I'll be doing so again next month at Technorama Netherlands (it would be great to see you there if you can make it!).

Tracking technical debt

One of my recommendations is that technical debt should be tracked. In my Pluralsight course, I suggested creating a "technical debt document", in which you list specific issues that need addressing, explain what problems they are currently causing, and describe the proposed solution. Other useful information includes an estimate of how long a fix would take, and any upcoming features on the roadmap that will become easier to implement once the item has been resolved.
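
As an illustration, an entry in such a document might look something like this (an invented example):

Title:     Replace the hand-rolled CSV parser in ReportExporter
Problem:   Breaks on quoted fields; every new report format needs custom handling
Proposal:  Switch to a well-maintained CSV parsing library
Estimate:  2-3 days
Unblocks:  The "scheduled report email" feature on the roadmap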

Technical debt can come in many forms. I often break it down into categories like "Code debt", "Architectural debt", "Technological debt", "Test debt" etc, but the common theme is that there is something less than ideal about the code or tooling that needs to be addressed. By tracking these issues somewhere, you can prioritize and plan to address them.

Book recommendation: "Managing Technical Debt"

I recently discovered a great new book, Managing Technical Debt, by Philippe Kruchten, Robert Nord and Ipek Ozkaya. The authors have researched technical debt at an academic level (there is even a conference dedicated to technical debt now!), so I was very eager to read what they had to say and pick up any new ideas.

It's a great read, written at a level that makes it suitable to share with developers and managers alike. It's practical and pragmatic, and it was nice to see that the authors take a very similar approach to technical debt to the one in my Pluralsight course.

But there were several new insights, and the one I want to highlight in this post is their recommendation on the best way to store "technical debt items" (as they call them).

Technical debt register

In the book, the authors recommend keeping a "technical debt register". It's similar to my document idea, but they recommend using your regular work tracking tools to store the technical debt items. In other words, wherever you record your defects and backlog of new features (e.g. GitHub issues, Jira, Azure DevOps) should also be where you store technical debt items.

"I recommend including debt actions (stories) in the same tracking tool as all other work and as part of a single program or team backlog. Our agile-inspired phrasing is 'all work is work; all work goes on the backlog'." (Kruchten, Nord & Ozkaya, Managing Technical Debt)

The reasoning for this is simple. Technical debt, just like defects and features, represents work that needs to be done, so it needs to be visible and available for planning. You could either create a new work item type to represent a technical debt item, or simply tag items with "TechDebt" to differentiate them.
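
For example, if your team tracks work in GitHub issues, recording a debt item could be as simple as this (using the GitHub CLI; the title, body and "tech-debt" label are purely illustrative):

gh issue create --title "Consolidate duplicated validation logic" --body "See the technical debt register guidelines" --label "tech-debt"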

This idea is one I had initially considered but rejected because I feared that it could be abused. What if the technical debt register filled up with thousands of "leftover" tasks that were in reality never going to get actioned? A kind of way to assuage our guilt that we didn't really finish our work. "Didn't get round to writing any unit tests? Don't worry! Just add it to the technical debt register!"

However, having read the book, I think I'm coming round to their way of thinking. I can see a number of benefits over using a document for tracking technical debt items:

  • they can be planned and estimated with the same tooling used for defects and features
  • you can associate commits with technical debt items
  • technical debt items can also be linked to features and defects
  • they support discussion (not everyone will agree on the best way to address these issues)
  • they are easily added by anyone on the team (don't need to know where to find the document, or wait to check it out)

Two levels of technical debt items

The authors suggest that there are two levels of technical debt item: (1) simple, localized code-level debt and (2) wide-ranging, structural, architectural debt.

The simple items relate to a specific area of code and might take a day or so to address. A team could allocate a certain percentage of time each iteration to resolving these technical debt items.

Examples of wide-ranging items might be moving from a monolith to microservices, or switching from server-side rendering to a SPA framework. These are not necessarily achievable in one step, and need to be broken down into smaller chunks. In some cases, dedicating an entire iteration to one of these technical debt items might be warranted.

Automating creation of technical debt items

Another interesting idea is whether static code analysis tools could generate technical debt items. Personally, my fear here is that there would simply be too many of these, and many static code analysis tools generate high numbers of "false positives".

So although I find a lot of value in static code analysis tools, I wouldn't want to automatically convert every item discovered into a technical debt item. My preference is to use code analysis tools that give immediate feedback as you are coding (this is one of the great strengths of ReSharper), as the most efficient time to address a problem with code is while you are still working on it.

If a static analysis tool does highlight some particular areas of concern in the codebase, then you could always manually create an entry in the technical debt register that groups related instances together into a single item, rather than creating one entry per offending line of code.

Another good suggestion was to standardize on a way of marking technical debt issues in code. Many developers include comments like "TODO" or "FIXME", as a way to highlight improvements that are desirable in the future, but for whatever reason could not be done at the time. By adopting a standard, it's easy to find those items and generate technical debt items as necessary in the technical debt register.
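
For example, a team might agree on a single, searchable marker - the tag name, issue number and wording here are all hypothetical, the point is just consistency:

public class ReportExporter
{
    // TECHDEBT(#142): this hand-rolled CSV parsing is duplicated in
    // ImportService - consolidate once we adopt a shared CSV library.
    public string[] ParseLine(string line)
    {
        return line.Split(',');
    }
}

With a standard marker in place, a simple search such as git grep TECHDEBT lists every annotated spot in the codebase, ready to be promoted into the register where appropriate.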

Prioritizing technical debt items

Let's suppose we follow this guideline and create a technical debt register. Now we've got hundreds of items, big and small, each detailing a way in which our code should be improved to make our lives easier going forward. But how do we prioritize them?

Well, I would not recommend simply picking off technical debt items at random. Technical debt items are only potential problems - they're not actually causing any harm until you're working on the part of the system they affect.

So it's really important that your technical debt items are associated with a specific area of code. That way, when you are about to embark on new work in that area of code, you can review the technical debt register to see what known issues might get in your way. This means you can strategically address the ones that will most benefit upcoming work.

"Before starting a new feature or story, check the backlog to identify any known debt items that should be considered during implementation because they impact the same area of the code or would otherwise impede its development" (Kruchten, Nord & Ozkaya, Managing Technical Debt)

Numbers don't matter

It's also important to point out that it really doesn't matter if the number of technical debt items grows very large. Remember they are only potential issues, not actual problems, and it's totally fine if many of them are never addressed.

Of course, you might want to eventually prune some that have sat dormant for several years, or that have been obsoleted by other advances in the code. But they're not like defects. With defects, we typically want to get to a count of zero. Technical debt items are more akin to cool ideas for future features: they're not all going to get done - only the ones that bring real value.

Summary

I highly recommend having at least some way of tracking outstanding technical debt items for your projects, and reading the Managing Technical Debt book has convinced me to give tracking them in our standard project management tools a go. So I'm planning to migrate the issues listed in my existing technical debt document into our regular work tracking tool, and see if that helps us plan and prioritize more effectively which technical debt items to address next.

Want to learn more about the problem of technical debt and how you can reduce it? Be sure to check out my Pluralsight course Understanding and Eliminating Technical Debt.


Have you ever been burned by installing a beta or preview version of some developer tools that destabilised your development environment? It's certainly happened to me a few too many times over the years.

But what if you want to try out some of the new cool stuff in the pipeline such as C# 8 and .NET Core 3 (which is still in preview at the time of writing)? Is there any way of trying them out without installing the tools?

(Sidenote: in recent years, preview versions of .NET Core and Visual Studio have been very well behaved, installing side by side without interfering with the non-preview versions. But I still tend to err on the side of caution and avoid installing preview tooling unless I absolutely need it.)

Well, thanks to the awesome "Visual Studio Code Remote - Containers" extension, there's now a low-risk way of trying out the latest tooling for your language of preference, without needing anything more than Docker and VS Code installed.

In this post, we'll see how easy it is to set up a .NET Core 3 development environment in a container, and then develop in it using Visual Studio Code.

1. Pre-requisites

To follow along, you need Docker installed (I'm running Docker Desktop on Windows 10), and Visual Studio Code, with the "Visual Studio Code Remote - Containers" extension enabled. I also needed to go into my Docker Desktop settings dialog and enable "Folder Sharing" for my C drive. This is needed as your source code will be mounted as a volume in your container.

2. Create a project folder

Next create an empty folder (I called it core3container) and open Visual Studio Code in that folder (e.g. with code .).

3. Setup container configuration

Next we need to run the "Remote Containers: Add development container configuration files" command in VS Code. You can find this either by pressing F1 and searching for it, or by clicking the green Remote Window icon in the bottom left of VS Code.

This gives us a whole host of pre-defined container images in a variety of languages. I didn't see one for the .NET Core 3 preview in the list, so I just picked the "C# (.NET Core Latest)" option.

4. Open the folder in a container

This prompted me to reopen Visual Studio Code in a container, which I did. You can also do this on demand with the "Remote-Containers: Reopen folder in container" VS Code command.

The first time we do this, it builds the container image for us, which might take a little while as it may need to download base images you don't already have.

We can use regular Docker commands such as docker image ls and docker ps to see what's going on behind the scenes. On my machine I can see that there is a new container with the name vsc-core3container-dfa84ec1259930dde9355646f1b8c6d2 running.

5. Examine the .devcontainer folder

Enabling remote container support for VS Code essentially means that a new folder called .devcontainer is created for you. This contains two files - devcontainer.json and Dockerfile.

devcontainer.json holds various configuration settings, such as the location of the Dockerfile, but also any VS Code extensions we want to be enabled when we're working in this container. This is an awesome feature. When you are connected to a container, you can have additional VS Code extensions enabled that just apply to development in that container. In our example, the C# VS Code extension is listed.

{
  "name": "C# (.NET Core Latest)",
  "dockerFile": "Dockerfile",
  "extensions": [
    "ms-vscode.csharp"
  ]
}

This configuration file can also be used for things like publishing ports, which is useful if you're doing web development in a container. You can find the full reference documentation for devcontainer.json here.
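
As a sketch, publishing a port for web development meant adding something like the appPort property below (that was the property name at the time of writing - check the reference documentation for the version of the extension you're using):

{
  "name": "C# (.NET Core Latest)",
  "dockerFile": "Dockerfile",
  "appPort": [5000],
  "extensions": [
    "ms-vscode.csharp"
  ]
}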

The Dockerfile that got generated for us began with FROM mcr.microsoft.com/dotnet/core/sdk:latest but then also had some apt-get commands to install a few additional bits of software into the container, such as Git. Here's the Dockerfile that got created:

FROM mcr.microsoft.com/dotnet/core/sdk:latest

# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive

# Or your actual UID, GID on Linux if not the default 1000
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

# Configure apt and install packages
RUN apt-get update \
    && apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
    #
    # Verify git, process tools, lsb-release (common in install instructions for CLIs) installed
    && apt-get -y install git procps lsb-release \
    #
    # Create a non-root user to use if preferred - see https://aka.ms/vscode-remote/containers/non-root-user.
    && groupadd --gid $USER_GID $USERNAME \
    && useradd -s /bin/bash --uid $USER_UID --gid $USER_GID -m $USERNAME \
    # [Optional] Uncomment the next three lines to add sudo support
    # && apt-get install -y sudo \
    # && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
    # && chmod 0440 /etc/sudoers.d/$USERNAME \
    #
    # Clean up
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*

# Switch back to dialog for any ad-hoc use of apt-get
ENV DEBIAN_FRONTEND=

If you want, you can update this Dockerfile to install any additional tooling that your development process needs. We'll see how to update the Dockerfile shortly.
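
For example, to make the Azure CLI available inside the container, you could append something like this to the Dockerfile (a sketch using Microsoft's documented install script for Debian-based images; it assumes curl is present in the base image):

# Install the Azure CLI into the dev container
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash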

The great thing about the .devcontainer folder is that it can be checked into source control, so that everyone who clones your repository can use VS Code to develop against it in a container.

6. Run some commands in the container

With VS Code connected to the container, we can run commands directly against the container in the terminal window (accessible with CTRL + `). Let's see what version of .NET Core we have installed, with dotnet --version:

root@68cec1b9578c:/workspaces/core3container# dotnet --version
2.2.401

That's not actually what I wanted. I want the preview of .NET Core 3, so I need to point at the correctly tagged version of the .NET SDK.

I can fix this quite easily though. I first run the "Remote-Containers: Reopen Folder Locally" command. Then I edit the Dockerfile to point to the 3.0 SDK base image:

FROM mcr.microsoft.com/dotnet/core/sdk:3.0

Then I run the "Remote-Containers: Rebuild Container" command. This rebuilds the container, and VS Code relaunches inside the new one. If we run dotnet --version again, we can see that we are now running the preview version of .NET Core we wanted.

root@<container-id>:/workspaces/core3container# dotnet --version
3.0.100-preview8-013656

7. Try out C# 8 IAsyncEnumerable

Now in the VS Code terminal, we can run dotnet new console to create a new console app, which will generate a .csproj and Program.cs file for us.

And we'll edit Program.cs to make use of the cool new IAsyncEnumerable capabilities of C# 8. You can read more about it in this article by Christian Nagel.

I've updated my Program.cs with a very simple example of how we can use await and yield return to generate an IAsyncEnumerable<string>, and iterate through it with the new await foreach construct.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace core3container
{
    class Program
    {
        static async Task Main(string[] args)
        {
            Console.WriteLine("Starting");
            await foreach(var message in GetMessagesAsync())
            {
                Console.WriteLine(message);
            }
            Console.WriteLine("Finished");
        }

        static async IAsyncEnumerable<string> GetMessagesAsync()
        {
            for (int n = 0; n < 10; n++)
            {
                await Task.Delay(TimeSpan.FromSeconds(1));
                yield return $"Async message #{n+1}";
            }
        }
    }
}

We can easily check this is working with dotnet run in the terminal in VS Code, which will show a new message appearing every second.

root@<container-id>:/workspaces/core3container# dotnet run
Starting
Async message #1
Async message #2
Async message #3
...
Async message #10
Finished

8. Cleaning up

When you exit Visual Studio Code, it will automatically stop the running container, but it does not delete the container (you can still see it with docker ps -a) or any Docker images. This means that it will be faster to start up next time you do some remote container development on this project.

But you will need to manually remove the containers and images when you're done - there doesn't seem to be any built-in support for cleaning up.
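
As a rough sketch, cleaning up looks something like this (the vsc- prefix matches the names the extension generated on my machine - check with docker ps -a before deleting anything):

# list the containers the extension created
docker ps -a --filter "name=vsc-"

# remove a container and its image once you're finished with them
docker rm <container-id>
docker rmi <image-name>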

9. Try other stuff

Of course, this feature isn't only for .NET Core development, or just for when you want to use preview SDKs. You can use it to develop in any language and get an extremely consistent development experience across all members of your team without having to tell everyone to install specific versions of development tools.

There's also a really nice collection of quick starts - GitHub repos you can clone that already have the .devcontainer set up for various languages, including Node, Python, Rust, Go, Java and more. I used one to create my first ever Rust app, which I had up and running in just a couple of minutes, without needing to install any new tools on my development PC, thanks to VS Code remote containers.



It's possible to package an Azure Functions App inside a Docker container, which gives you the flexibility to run it on premises, in another cloud, or anywhere you can run Kubernetes. For instructions on how to get your Azure Function App running in a container, check out my article here.

Introducing KEDA

While it's great that this opens the door to running Azure Functions anywhere, until recently there was one notable drawback to containerized Azure Function Apps: the powerful auto-scaling of the Azure Functions consumption plan was not available, so it was up to you to scale out to the appropriate number of containers.

However, the KEDA project (Kubernetes-based Event Driven Autoscaling component) is designed to solve this problem. With KEDA installed on Kubernetes, you can benefit from auto-scaling, so that additional pods will be created as needed when your Function App is under heavy load, and it can scale right down to zero if your app is idle.

It's still in its early days, and only supports a limited number of triggers, but it's already a great option when you need or want to host your Function Apps on Kubernetes.

Demo scenario

In this post, we're going to take an existing containerized Azure Function App (a very simple one I created as part of my Create Serverless Functions Pluralsight course), and install it onto AKS with KEDA configured.

For this demo I'll be using the Azure CLI from PowerShell, and I've also got the Azure Functions Core Tools installed. I've also got Docker Desktop for Windows installed which includes kubectl.

Step 1 - Create an AKS cluster

First, let's create a new AKS cluster. Of course, you don't have to use AKS - you can use Kubernetes hosted anywhere. (By the way, there seems to be a bug with az aks create at the moment which means it can fail if the service principal it creates isn't available quickly enough. The workaround is to create your own service principal with az ad sp create-for-rbac --skip-assignment and pass the --service-principal and --client-secret arguments to az aks create - see the sketch after the script below.)

# create a resource group
$aksrg = "KedaTest"
$location = "westeurope" 
az group create -n $aksrg -l $location

# create the AKS cluster
$clusterName = "MarkKedaTest"
az aks create -g $aksrg -n $clusterName --node-count 3 --generate-ssh-keys
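
If you do hit that service principal issue, the workaround looks roughly like this (a sketch - az ad sp create-for-rbac returns JSON that includes the appId and password):

# workaround: create the service principal up front
$sp = az ad sp create-for-rbac --skip-assignment -o json | Out-String | ConvertFrom-Json

# then pass it explicitly to az aks create
az aks create -g $aksrg -n $clusterName --node-count 3 --generate-ssh-keys `
    --service-principal $sp.appId --client-secret $sp.password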

The az aks create command will take about five minutes to complete. Once it's done, we fetch the credentials allowing kubectl to talk to the cluster:

# Get credentials for kubectl to use
az aks get-credentials -g $aksrg -n $clusterName --overwrite-existing

# Check we're connected
kubectl get nodes

Step 2 - Install KEDA

Now let's install KEDA onto our AKS cluster. This is done using the Azure Functions Core Tools, and actually installs two things: KEDA itself, which enables scaling everything except HTTP triggered functions down to zero, and Osiris, which enables HTTP triggered functions to scale to zero as well.

# install KEDA on this AKS cluster
func kubernetes install --namespace keda

Step 3 - Deploy an Azure Function App

Now let's deploy our Function App. We're going to use this existing containerized Azure Function app, which I created as part of my Create Serverless Functions Pluralsight course. The code is available here on GitHub, and if you want to containerize your own Azure Function app, it's quite a simple process I walk through here.

The command we're going to use to deploy the app is func kubernetes deploy and there are a few different options for how to use this. You can see some of the options in this article here, but I'm going to take a slightly different approach.

Step 3a - Prepare the secrets

By default, the func kubernetes deploy command is going to look for a local.settings.json file and use that to generate a Kubernetes secret containing the environment variables for your container. That might not be exactly what you want, so you are free to point it at your own Kubernetes secret instead.

For this demo, I'm actually going to auto-generate a local.settings.json file with the exact settings I want. In particular I need to set the AzureWebJobsStorage connection string to a real Azure Storage Account connection string, as my demo app uses Table Storage to store the state of TODO items.

I also need to set up a WEB_HOST environment variable, as my function app uses Azure Functions proxies to pass through HTTP requests to some static web resources hosted in blob storage. I've got these publicly available at https://serverlessfuncsbed6.blob.core.windows.net/website so you can use that if you want to try this for yourself.

So here's my PowerShell script to generate a temporary local.settings.json file:

$connStr = az storage account show-connection-string -g "SharedAssets" -n "mystorageaccount" -o tsv
$staticFiles = "https://serverlessfuncsbed6.blob.core.windows.net/website"
@{
    "IsEncrypted" = $false;
    "Values" = @{
      "AzureWebJobsStorage" = $connStr;
      "FUNCTIONS_WORKER_RUNTIME" = "dotnet";
      "WEB_HOST" = $staticFiles
    };
    "Host" = @{
        "CORS" = "*"
    }
} | ConvertTo-Json | Out-File .\local.settings.json -Encoding utf8

Step 3b - Generate the Kubernetes YAML file

Now I'm going to use func kubernetes deploy to generate a Kubernetes YAML file for our Function App. I'll specify the Docker image we want to use, and the --dry-run flag means that it generates the YAML rather than deploying anything.

$funcDeployment = "keda-demo"
func kubernetes deploy --name $funcDeployment --image-name "markheath/serverlessfuncs:v3" --dry-run > deploy.yml

When this completes, our deploy.yml file will include all the Kubernetes object definitions we need to deploy our Function App to Kubernetes. This includes Base64-encoded versions of the secrets in our local.settings.json file, so make sure you don't check either file into source control.
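
Since both of those files now contain credentials, it's worth making sure Git ignores them, for example:

# .gitignore
local.settings.json
deploy.yml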

Step 3c - Deploy the Function App

Now all we need to do is use kubectl apply to create the necessary resources on our Kubernetes cluster:

kubectl apply -f .\deploy.yml

We should see the following resources get created...

secret/keda-demo created
service/keda-demo-http created
deployment.apps/keda-demo-http created
deployment.apps/keda-demo created
scaledobject.keda.k8s.io/keda-demo created

Step 4 - Testing it out

To test this, we will need the public IP address of our service:

kubectl get service --watch

Eventually the external IP address will appear:

NAME             TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
keda-demo-http   LoadBalancer   10.0.160.79   <pending>     80:32756/TCP   13s
kubernetes       ClusterIP      10.0.0.1      <none>        443/TCP        20m
keda-demo-http   LoadBalancer   10.0.160.79   131.91.133.243   80:32756/TCP   96s

And if we visit that IP address in a browser, we'll see the basic TODO application.

It's not a particularly exciting app, but it does have a timer-triggered function that deletes completed TODO items every five minutes, and if we wait a while we can see that it works successfully.

Step 4b - Testing scaling

When we deploy our app with KEDA, we actually end up with two deployments - one specifically for the HTTP triggered functions, and the other one to handle all other functions. It would be nice to see these scaling up when the function app is busy, and down to zero when it is idle.

PS C:\Code\azure\keda> kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
keda-demo        1/1     1            1           109m
keda-demo-http   1/1     1            1           109m

Unfortunately, this particular demo Function App is not ideal for demoing scaling. The keda-demo pod is simply running a timer-triggered function every five minutes, so it will never need to scale up, and won't scale down to zero.

The other pod (keda-demo-http) ought to scale depending on the HTTP traffic, but I've not been able to get it working yet. It might be that there are some issues with HTTP triggered scaling at the moment as I encountered a few bug reports, and the underlying scaling technology is still in preview.

To properly demo KEDA scaling it would be better to have created a Function App based on queue triggered functions. There's a great short demo of KEDA auto-scaling with queues from Jeff Hollan available here (the demo starts 7 minutes in, and the autoscaling happens about 15 minutes in).
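
For reference, the scaling rules live in the ScaledObject resource that func kubernetes deploy generated for us earlier. For a queue triggered Function App, the trigger section would look something along these lines (a sketch based on early KEDA documentation - the queue name is hypothetical, and field names may differ in later KEDA versions):

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: keda-demo
spec:
  scaleTargetRef:
    deploymentName: keda-demo
  triggers:
  - type: azure-queue
    metadata:
      queueName: todo-items          # hypothetical queue name
      connection: AzureWebJobsStorage
      queueLength: "5"               # target queue length per replica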

Summary

In this post we walked through the basic steps to install KEDA and run a containerized Function App on Azure Kubernetes Service. Although the particular demo app I installed doesn't showcase the benefits of KEDA, it's great that this auto-scaling functionality is now available for anyone hosting their Azure Function Apps in Kubernetes, and I'm looking forward to seeing how it evolves.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions.