
Recently I've been having a lot of discussions with teams wanting to move towards a cloud-based microservices architecture. And inevitably the question arises whether the best choice would be to go with containers, or a serverless "Functions as a Service" (FaaS) approach.

To keep this discussion from becoming too abstract, let's imagine we're planning to host our application in Azure. Should we create an AKS (Azure Kubernetes Service) cluster and implement each microservice as a container? Or should we use Azure Functions, and implement each microservice as a Function App?

And to keep this article from becoming too long, I'm going to restrict myself to making just a few key points in favour of both approaches.

It's not either/or

First, it's important to point out that hybrid architectures are possible. There is no rule preventing you from using both AKS and Azure Functions, playing to the strengths of each platform. And if you're migrating from a monolith, you may well be running alongside some legacy Virtual Machines anyway.

Also, if you like the Azure Functions programming model, it's quite possible to host Azure Functions in a container. And if you like the consumption-based pricing model and elastic scale associated with serverless, then technologies like Azure Container Instances can be combined with AKS to essentially give you serverless containers.
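To make the first point concrete, here's a minimal sketch of a Dockerfile that packages a .NET Function App into a container using Microsoft's official Azure Functions base image (the project name `MyFunctionApp` is a placeholder, and you should check the current base image tags for your runtime version):

```dockerfile
# Build stage: publish the Function App (project name is hypothetical)
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyFunctionApp.csproj -c Release -o /app/publish

# Runtime stage: the official Azure Functions base image hosts the app
FROM mcr.microsoft.com/azure-functions/dotnet:4
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY --from=build /app/publish /home/site/wwwroot
```

The Azure Functions Core Tools can also generate a Dockerfile like this for you if you pass the `--docker` flag to `func init`.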

And while serverless essentially forces you in the direction of PaaS for your databases, event brokers, identity providers etc, you can do exactly the same with containers - there's no reason why they can't reach out to PaaS services for these concerns rather than containerizing everything.

A few strengths of containers

What factors might cause us to favour containers?

Containers are particularly good for migrating legacy services. If you've already implemented a batch process, or web API, then getting that running in a container is much easier than rewriting it for serverless.

Containers make it trivial for us to adopt third party dependencies that aren't easily available (or cost-effective) as PaaS. There's a wealth of open source containerized services you can easily make use of, such as Redis, RabbitMQ, MongoDB, and Elasticsearch. You have the freedom to choose when and if it makes sense to switch to PaaS versions of these services (one nice pattern is to use containerized databases for dev/test environments, but a PaaS database like Azure SQL Database in production).

Containers have a particularly good story for local development. If I have 20 microservices, I can bundle them all into a Docker compose file, and start them all up in an instant. With serverless, you need to come up with your own strategy for how developers can test a microservice in the context of the overall application.
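For example, a hypothetical application with a couple of microservices and their backing services could be described in a single Docker Compose file like this sketch (all service names and images here are illustrative):

```yaml
version: "3.8"
services:
  orders-api:           # hypothetical microservice built from local source
    build: ./src/OrdersApi
    ports:
      - "5001:80"
    depends_on:
      - rabbitmq
      - redis
  catalog-api:          # another hypothetical microservice
    build: ./src/CatalogApi
    ports:
      - "5002:80"
  rabbitmq:             # off-the-shelf open source dependencies
    image: rabbitmq:3-management
  redis:
    image: redis:6
```

A single `docker-compose up` then brings the whole system up on a developer's machine.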

A containerized approach can also simplify the security story. With serverless, you're typically exposing each microservice via a public HTTP endpoint on the internet. That means each service could potentially be attacked, and great care must be taken to ensure only trusted clients can call each service. With a Kubernetes cluster, you don't need to expose all your microservices outside the cluster - only certain services are exposed by an ingress controller.

A few strengths of serverless

What are some key strengths of serverless platforms like Azure Functions?

Serverless promotes rapid development by providing a simplified programming model that integrates easily with a selection of external services. For example, Azure Functions bindings make it trivial to connect to many Azure services such as Azure Service Bus, Cosmos DB and Key Vault.

Serverless encourages an event-driven nanoservice model. Although containers place no constraints on what programming models you use, they make it easy to perpetuate older development paradigms involving large heavyweight services. Serverless platforms strongly push us in the direction of event-driven approaches which are inherently more scalable, and promote light-weight small "nanoservices" that can be easily discarded and rewritten to adapt to changing business requirements (a key driver behind the idea of "microservices").

Serverless can offer extremely low-cost systems by supporting a "scale to zero" approach. This is especially compelling for startups who want to keep their initial costs to a minimum during a proof-of-concept phase, and it also allows lots of dev/test service deployments in the cloud without worrying about cost. By contrast, with containers you would almost always have a core number of nodes in your cluster that are always running (so with containers you might control cost either by running locally, or by sharing a Kubernetes cluster).

Serverless also excels in supporting rapid scale out. Azure Functions very quickly scales from 0 to dozens of servers under heavy load, and you're still only paying for the time your functions are actually running. Achieving this kind of scale out is more work to configure with containerized platforms, but on the flip side, with container orchestrators you will have much more control over the exact rules governing scale out.


Both containers and serverless are excellent approaches to building microservices, and each is constantly borrowing the other's best ideas, so the difference isn't huge (and maybe this question won't even be meaningful in 5-10 years).

Which one would I pick? Well, for a more "startupy" application, where a small team is doing greenfield development to prove out a business idea, I think serverless really shines, whereas for more "enterprisey" applications, with many more components, multiple development teams and perhaps some legacy components involved, containerized approaches are more promising. In fact, most systems I work on are essentially "hybrid", combining aspects of serverless, containers and plain old virtual machines.

Finally, for an amusing take on the topic, make sure you check out this genius serverless vs containers rap battle from the Think FaaS podcast.

Want to learn more about how easy it is to get up and running with Azure Container Instances? Be sure to check out my Pluralsight course Azure Container Instances: Getting Started.


The dotnet new command

One of my favourite things about .NET Core is the dotnet command line tool. With dotnet new, you can quickly scaffold a new project of various types. For example dotnet new webapp creates an ASP.NET Core web app. And if you simply type dotnet new you can see a list of all of the available templates.

Templates               Short Name         Language        Tags
Console Application     console            [C#], F#, VB    Common/Console
Class library           classlib           [C#], F#, VB    Common/Library
Unit Test Project       mstest             [C#], F#, VB    Test/MSTest
NUnit 3 Test Project    nunit              [C#], F#, VB    Test/NUnit
NUnit 3 Test Item       nunit-test         [C#], F#, VB    Test/NUnit
xUnit Test Project      xunit              [C#], F#, VB    Test/xUnit
Razor Component         razorcomponent     [C#]            Web/ASP.NET
Razor Page              page               [C#]            Web/ASP.NET
MVC ViewImports         viewimports        [C#]            Web/ASP.NET

Installing templates

Of course, there may not be templates available out of the box that meet your needs, but it's very easy to install additional templates. For example, if you want to create a Vue.js project, you can install a new template pack with dotnet new --install "Microsoft.AspNetCore.SpaTemplates" and then create a new project with dotnet new vue.


Now as cool as this feature is, it left me with a bunch of questions. Where are all these templates coming from? If I install a template pack, how do I keep it up to date? How do I find out what other template packs are available? If I wanted to make my own template, how would I do that? So I did a bit of digging, and here's what I found.

Templates are stored in NuGet packages

Templates are distributed as NuGet packages (.nupkg), typically hosted on NuGet.org, but you can install them from any NuGet server. The Vue.js template pack I mentioned earlier can be found here. Knowing this is very handy as it enables you to see whether the package is still being actively maintained, and whether there have been recent updates. Looks like this particular template pack hasn't been updated in a while.

How do I know what's available?

How can you find out what template packs are available? There are two main ways I know of.

First, there's this list maintained on GitHub containing many packages.

Second, there's a great searchable website at dotnetnew.azurewebsites.net. So if I'm looking for more up to date Vue.js templates, I can see that there is a very wide choice available.

How do I know what template versions I have?

This one took me a while to find, but I discovered that if you type dotnet new -u (the uninstall command) it gives you a really nice summary of each package installed, in this kind of format.

      NuGetPackageId: Microsoft.AspNetCore.Blazor.Templates
      Version: 0.7.0
      Author: Microsoft
      Blazor (hosted in ASP.NET server) (blazorhosted) C#
      Blazor Library (blazorlib) C#
      Blazor (Server-side in ASP.NET Core) (blazorserverside) C#
      Blazor (standalone) (blazor) C#
    Uninstall Command:
      dotnet new -u Microsoft.AspNetCore.Blazor.Templates

This command also conveniently shows the syntax to uninstall a template package.

How do I know when updates are available?

Of course, you don't want to have to constantly visit NuGet.org to check up on new versions of template packs, so how do you know when something updated is available? Well the good news is that there are a couple of helpful commands here for you.

First, to update a package to its latest version, you can always simply install it again. So if I say dotnet new -i Microsoft.AspNetCore.Blazor.Templates, then I'll either install the Blazor templates, or update to the latest (non-prerelease) version if they are already installed.

There are also a couple of new commands that perform an update check for you. dotnet new --update-check will check to see if there are new versions of any installed templates, and dotnet new --update-apply also updates them for you.

Note: I attempted to use this feature by deliberately installing an older version of a template, and then running the update check, but it reported that no updates were available. I don't know if that was because by explicitly specifying a version I had perhaps "pinned" to that version, or whether it was just a temporary glitch with the tool.

How do I install a specific template version?

Because templates are stored in NuGet packages, you might want to install a specific version (maybe a pre-release). For example, at the moment, to play with the new WebAssembly Blazor features, you need to install a pre-release of the Microsoft.AspNetCore.Blazor.Templates. That can easily be done by appending :: and the package version after the package name:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview9.19465.2

How can I create my own templates?

Finally, you might be wondering what it takes to create your own template. I'm not going to go into detail here, as there's a helpful tutorial on the Microsoft docs site. But it's relatively straightforward. You create the source files for the template, and a template.json that contains template metadata.
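As a flavour of what a template.json looks like, here's a minimal sketch (the author, identity and name values are purely illustrative, and the file lives in a .template.config folder alongside the template source):

```json
{
  "$schema": "http://json.schemastore.org/template",
  "author": "Your Name",
  "classifications": [ "Common", "Console" ],
  "identity": "YourName.MyTemplate.CSharp",
  "name": "My example template",
  "shortName": "mytemplate",
  "sourceName": "MyTemplate",
  "tags": {
    "language": "C#",
    "type": "project"
  }
}
```

The shortName is what users type after dotnet new, and sourceName is the token in your source files that gets replaced with the project name the user chooses.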

A great way to get a feel for what's possible is to use the excellent NuGet Package Explorer utility to take a look inside the contents of existing NuGet package templates.


dotnet new is a great productivity tool and once you know a little bit more about what's going on behind the scenes, you can confidently install and update additional template packs to greatly speed up your daily development tasks.


If you work on a commercially successful project, the chances are you've experienced the pain of technical debt. As a codebase grows larger and larger, inevitably you find that some of the choices made in the past are resulting in a slowdown in productivity.

It's a problem I've given a lot of thought to over the years. I created a Pluralsight course on "Understanding and Eliminating Technical Debt" and often speak about the topic. In fact, I'll be doing so again next month at Technorama Netherlands (it would be great to see you there if you can make it!).

Tracking technical debt

One of my recommendations is that technical debt should be tracked. In my Pluralsight course, I suggested creating a "technical debt document". In this document, you list specific issues that need addressing, explain what problems they are currently causing, and what the proposed solution is. Other useful information includes estimating how long a fix would take, and identifying upcoming features in the roadmap that will become easier to implement once this technical debt item has been resolved.

Technical debt can come in many forms. I often break it down into categories like "Code debt", "Architectural debt", "Technological debt", "Test debt" etc, but the common theme is that there is something less than ideal about the code or tooling that needs to be addressed. By tracking these issues somewhere, you can prioritize and plan to address them.

Book recommendation: "Managing Technical Debt"

I recently discovered a great new book, Managing Technical Debt, written by Philippe Kruchten, Robert Nord and Ipek Ozkaya. The authors have researched technical debt at an academic level (there is even a conference about technical debt now!), so I was very eager to read what they had to say and pick up any new ideas.

Tech debt book

It's a great read, and written at a level that you could share it with both developers and managers alike. It's practical and pragmatic, and it was nice to see that they share a very similar understanding and approach to technical debt to the one I take in my Pluralsight course.

But there were several new insights, and the one I want to highlight in this post is the best way to store "technical debt items" (as they call them).

Technical debt register

In the book, the authors recommend having a "technical debt register". It's similar to my document idea, but they recommend using your regular work-tracking tools to store the technical debt items. In other words, wherever you record your defects and backlog of new features (e.g. GitHub Issues, Jira, Azure DevOps) should also be where you store technical debt items.

"I recommend including debt actions (stories) in the same tracking tool as all other work and as part of a single program or team backlog. Our agile-inspired phrasing is 'all work is work; all work goes on the backlog'" (Kruchten, Nord & Ozkaya, Managing Technical Debt)

The reasoning for this is simple. Technical debt, just like defects and features, represents work that needs to be done. So it needs to be visible and able to be planned in. You could either create a new entity to represent a technical debt item, or just tag items with "TechDebt" to differentiate them.

This idea is one I had initially considered but rejected because I feared that it could be abused. What if the technical debt register filled up with thousands of "leftover" tasks that were in reality never going to get actioned? A kind of way to assuage our guilt that we didn't really finish our work. "Didn't get round to writing any unit tests? Don't worry! Just add it to the technical debt register!"

However, having read the book, I think I'm coming round to their way of thinking. I can see a number of benefits over using a document for tracking technical debt items:

  • they can be planned and estimated with the same tooling used for defects and features
  • you can associate commits to technical debt items
  • technical debt items can also be linked to features and defects
  • they support discussion (not everyone will agree on the best way to address these issues)
  • they are easily added by anyone on the team (don't need to know where to find the document, or wait to check it out)

Two levels of technical debt items

The authors suggest that there are two levels of technical debt item: (1) simple, localized code-level debt and (2) wide-ranging, structural, architectural debt.

The simple items relate to a specific area of code and might take a day or so to address. A team could allocate a certain percentage of time each iteration to resolving these technical debt items.

Examples of wide-ranging items might be wanting to move from a monolith to microservices, or switching from server side rendering to a SPA framework. These are not necessarily achievable in one step, and need to be broken down into smaller chunks. In some cases, dedicating an entire iteration to working on one of these technical debt items might be warranted.

Automating creation of technical debt items

Another interesting idea is whether static code analysis tools could generate technical debt items. Personally, my fear here is that there would simply be too many of these, and many static code analysis tools generate high numbers of "false positives".

So although I find a lot of value in static code analysis tools, I wouldn't want to automatically convert each item discovered into a technical debt item. My preference is to use code analysis tools that give you immediate feedback as you are coding (this is one of the great strengths of ReSharper), as the most efficient time to address problems with code is while you are still working on it.

If a static analysis tool does highlight some particular areas of concern in the codebase, then you could always manually create an entry in the technical debt register that groups related instances together into a single item, rather than creating one entry per offending line of code.

Another good suggestion was to standardize on a way of marking technical debt issues in code. Many developers include comments like "TODO" or "FIXME", as a way to highlight improvements that are desirable in the future, but for whatever reason could not be done at the time. By adopting a standard, it's easy to find those items and generate technical debt items as necessary in the technical debt register.
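For instance, if a team standardized on a marker like "TODO(debt)" (the marker name here is just an illustration, not a recommendation from the book), a simple grep can surface every candidate item for the register while leaving ordinary TODO comments alone:

```shell
# Create a sample source file containing both a standardized debt marker
# and an ordinary TODO comment
cat > Example.cs <<'EOF'
// TODO(debt): replace hand-rolled retry loop with a resilience library
// TODO: tidy up variable names
class Example { }
EOF

# Only the standardized marker is picked up
grep -n "TODO(debt)" Example.cs
```

Running this as part of a CI job or a periodic review makes it easy to decide which marked spots deserve a real entry in the technical debt register.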

Prioritizing technical debt items

Let's suppose we follow this guideline and create a technical debt register. Now we've got hundreds of items, big and small, each detailing a way in which our code should be improved to make our lives easier going forward. But how do we prioritise them?

Well, I would not recommend simply randomly picking off technical debt items to solve. Technical debt items are only potential problems - they're not actually causing any harm unless you are working on a specific part of your system.

So it's really important that your technical debt items are associated with a specific area of code. That way, when you are about to embark on new work in that area of code, you can review the technical debt register to see what known issues might get in your way. This means you can strategically address the ones that will most benefit upcoming work.

"Before starting a new feature or story, check the backlog to identify any known debt items that should be considered during implementation because they impact the same area of the code or would otherwise impede its development" (Kruchten, Nord & Ozkaya, Managing Technical Debt)

Numbers don't matter

It's also important to point out that it really doesn't matter if the number of technical debt items grows very large. Remember they are only potential issues, not actual problems, and it's totally fine if many of them are never addressed.

Of course, you might want to eventually prune some that have sat dormant for several years, or that have been obsoleted by other advances in the code. But they're not like defects. With defects, we typically want to get to a count of zero. Technical debt items are more akin to cool ideas for future features: they're not all going to get done - only the ones that bring real value.


I highly recommend having at least some way to track outstanding technical debt items for your projects, and reading the Managing Technical Debt book has convinced me to give tracking them in the standard project management tools a go. So I'm planning to migrate the issues listed in my existing technical debt document into our regular working tool and see if that helps us more effectively plan and prioritise which technical debt items should be addressed next.

Want to learn more about the problem of technical debt and how you can reduce it? Be sure to check out my Pluralsight course Understanding and Eliminating Technical Debt.