
"Serverless" architecture is one of the most exciting innovations to emerge in the cloud computing space in recent years. It offers several significant benefits including rapid development, automatic scaling and a cost-effective pricing model. Regular readers of my blog will know that I have been (and still am) an enthusiastic proponent of Azure Functions.

But "serverless" does entail some trade-offs. For every benefit of "serverless" there are corresponding limitations, which may be enough to put some people off adopting it altogether. And it can also seem to be at odds with the "containerized" approach to architecture, with Kubernetes having very much established itself as the premier way to host cloud native applications.

I think the next stage of maturity for "serverless" is for the up-front decision of whether or not to use a "serverless" architecture to go away, replaced by a kind of "sliding scale": whether to run "serverless" becomes a deploy-time decision, rather than one baked in up front.

To explain what I mean, let's look at five key benefits of serverless, and how in some circumstances, they introduce limitations that we want to get around. And we'll see that we're already close to a situation where a "sliding scale" allows us to make our application more or less serverless depending on our needs.

Servers abstracted away

The first major selling point of "serverless" is that servers are abstracted away. I don't need to manage them, patch them, or even think about them. I just provide my application code and let the cloud provider worry about where it runs. This is great until, for some reason, I actually do care about the hardware my application is running on. Maybe I need to specify the amount of RAM, or require a GPU or an SSD. Maybe for security reasons I want to be certain that my code is not running on compute shared with other tenants.

Azure Functions is already a great example of the flexibility we can have in this area. Its multiple "hosting plans" allow you to choose between a truly serverless "consumption" plan, where you have minimal control over the hardware your functions are running on, all the way up to the "premium" plan with dedicated servers, or containerizing your Function App and running it on hardware of your choice.

Automatic scale in and scale out

A second major attraction of serverless is that I don't need to worry about scaling in and scaling out. The platform itself detects heavy load and automatically provisions additional compute resource. This is great until I need to eliminate "cold starts" caused by scaling to zero, or need to have more fine-grained control over the maximum number of instances I want to scale out to, or want to throttle the speed of scaling in and out.

Again, we're seeing with serverless platforms an increased level of flexibility over scaling. With Azure Functions, the Premium plan allows you to keep a minimum number of instances on standby, and you can even take complete control over scaling yourself by hosting your Functions on Kubernetes and using KEDA to manage scaling.

Consumption based billing

A third key benefit of serverless is only paying for what you use. This can be particularly attractive to startups or when you have dev/test/demo deployments of your application that sit idle for much of the time. However, the consumption-based pricing model isn't necessarily the best fit for all scenarios. Some companies prefer a predictable monthly spend, and also want to ensure costs are capped (avoiding "denial of wallet" attacks). Also many cloud providers such as Azure can offer significantly reduced "reserved instance" pricing, which can make a lot of sense for a major application that has very high compute requirements.

Once again, Azure Functions sets a good example for how we can have a sliding scale. The "consumption" hosting plan is a fully serverless pricing model, whilst you can also host on a regular ("dedicated") App Service plan to get fixed and predictable monthly costs, with the "premium" plan offering a "best of both worlds" compromise between the two. And of course the fact that you can host on Kubernetes gives you even more options for controlling costs, and benefitting from reserved instance pricing.

Binding-based programming model

Another advantage associated with serverless programming models is the way that they offer very simple integrations to a variety of external systems. In Azure Functions, "bindings and triggers" greatly reduce the boilerplate code required to interact with messaging systems like Azure Service Bus, or reading and writing to Blob Storage or Cosmos DB.
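As an illustrative sketch (the queue name, blob path and connection setting name here are made up), here's how a Service Bus trigger and a Blob output binding can combine in a single in-process C# function, with the runtime handling all of the connection and messaging plumbing:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    // The trigger receives messages from a Service Bus queue, and the output
    // binding writes a blob - no explicit SDK or connection code required.
    [FunctionName("ProcessOrder")]
    public static void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string orderJson,
        [Blob("processed-orders/{rand-guid}.json", FileAccess.Write)] out string processedOrder,
        ILogger log)
    {
        log.LogInformation($"Processing order: {orderJson}");
        processedOrder = orderJson;
    }
}
```

The equivalent code using the raw Service Bus and Storage SDKs would need connection management, a message pump and explicit blob writes; here the "{rand-guid}" binding expression even generates the blob name for us.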

But this raises some questions. Can I benefit from this programming model even if I don't want to use a serverless hosting model? And can I benefit from serverless hosting without needing to adopt a specific programming model like Azure Functions?

The answer to both questions is yes. I can run Azure Functions in a container, allowing me to benefit from its bindings without needing to host it on a serverless platform. And we are increasingly seeing "serverless" ways to host containerized workloads (for example Azure Container Instances or using Virtual Nodes on an AKS cluster). This means that if I prefer to use ASP.NET Core which isn't inherently a serverless coding model, or even if I have a legacy application that I can containerize, I can still host it on a serverless platform.

As a side note, one of the benefits of the relatively new "Dapr" distributed application runtime is the way that it makes Azure Functions-like bindings easily accessible to applications written in any language. This allows you to start buying into some "serverless" benefits from an existing application written in any framework.
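As a rough sketch of what that looks like (assuming a Dapr sidecar listening on its default port 3500, and a configured output binding component named "myqueue" - both names are illustrative), invoking a Dapr output binding from any language is just an HTTP call to the sidecar:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DaprBindingDemo
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Dapr exposes bindings at /v1.0/bindings/{component-name};
        // "create" is the standard operation for output bindings.
        var payload = "{\"data\": {\"message\": \"hello\"}, \"operation\": \"create\"}";
        var response = await http.PostAsync(
            "http://localhost:3500/v1.0/bindings/myqueue",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```

Because the contract is plain HTTP (or gRPC), the same call works from Node, Python, Java or a legacy application, which is what makes this binding style so portable.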

Serverless databases

In serverless architectures, you typically prefer a PaaS database, rather than hosting it yourself. Azure comes with a rich choice of hosted databases including Azure SQL Database and Azure Cosmos DB. What we've also seen in recent years is a "serverless" pricing model coming to these databases, so that rather than a more traditional pricing model of paying a fixed amount for a pre-provisioned amount of database compute resource, you pay for the amount of compute you actually need, with the database capacity automatically scaling up or down as needed.

Of course this comes with many of the same trade-offs we discussed for scaling our compute resources. If your database scales to zero you have a potential cold start problem. And costs could be wildly unpredictable, especially if a bug in your software resulted in a huge query load. Again, the nice thing is that you don't have to choose up front. You could deploy dev/test instances of your application with serverless databases to minimise the costs given that they may be idle much of the time, but for your production deployment you choose to pre-provision sufficient capacity for expected loads, maybe allowing some scaling but within a much more carefully constrained minimum and maximum level.


"Serverless" does not have to be an "all-in" decision. It doesn't even need to be an "up front" decision anymore. Increasingly you can simply write code using the programming models of your choice, and decide at deployment time to what extent you want to take advantage of serverless pricing and scaling capabilities.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions.


One of the questions I frequently get asked by people who watch my Durable Functions Fundamentals Pluralsight course is whether you can use dependency injection with Durable Functions (as my demo app uses static methods). The answer is yes, and it's quite simple although there are a couple of considerations about logging that are worth pointing out.

In this post I'll give a quick overview of the main steps, and you can get more details on the official docs site if you'd like to dive further into the topic of dependency injection in Azure Functions.

UPDATE: I should point out that in this post I am using the in-process Azure Functions programming model on .NET Core 3.1, rather than the out-of-process .NET 5 model, because the out-of-process model does not currently support Durable Functions. It is going to gain Durable Functions support in the future, but since it uses a different approach to setting up dependency injection, this tutorial does not apply to .NET 5 Azure Functions apps.

Step 1 - Add NuGet references

First of all, if you've already created your Azure Function app, you need to add two NuGet references to your csproj file. These are Microsoft.Azure.Functions.Extensions and Microsoft.Extensions.DependencyInjection.

<PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.1.0" />
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.DurableTask" Version="2.5.0" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="5.0.1" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.13" />

Step 2 - Register Services at Startup

The next step is to create a Startup class derived from FunctionsStartup and override the Configure method. In here you can set up whatever dependencies you need with AddSingleton or AddTransient.

The example I show below also calls AddHttpClient to register IHttpClientFactory. And you could even register a custom logger provider here if you need that.

Note that we also need to add an assembly level attribute of FunctionsStartup that points to the custom Startup class.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

[assembly: FunctionsStartup(typeof(DurableFunctionApp.Startup))]

namespace DurableFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddHttpClient(); // registers IHttpClientFactory
            builder.Services.AddSingleton<IGreeter>(_ => new Greeter());
        }
    }
}

Step 3 - Injecting Dependencies

Injecting dependencies is very simple. Instead of defining functions as static methods on a static class, just create a regular class with a constructor that takes the dependencies and stores them as class members. These can then be used in the functions themselves which are now instance methods.

In this example I'm using the injected IGreeter in an activity function. You can use dependencies in orchestrator functions as well, but remember that the strict rules of orchestrator functions must still be adhered to.

public class MyOrchestration
{
    private readonly IGreeter greeter;

    public MyOrchestration(IGreeter greeter)
    {
        this.greeter = greeter;
    }

    [FunctionName("SayHello")]
    public string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return greeter.Greet(name);
    }
}

And that's all there is to it. Although I did say I'd mention a few gotchas with logging.

Gotchas - injecting loggers

When you set up dependency injection in an Azure Functions project, you might be tempted to attempt to inject an ILogger using the class constructor, rather than having an ILogger as a parameter on every function. If you do this you'll run into a few problems.

First, you can't inject a plain ILogger - that doesn't get registered by default. Instead you have to inject the generic ILogger&lt;T&gt; - so ILogger&lt;MyFunctions&gt; for example.

public class MyFunctions
{
    private readonly ILogger<MyFunctions> logger;
    private readonly IGreeter greeter;

    public MyFunctions(ILogger<MyFunctions> logger, IGreeter greeter)
    {
        this.logger = logger;
        this.greeter = greeter;
    }
}

Second, the logs that you write to that ILogger<T> will get filtered out unless you update your host.json file to include logs for your namespace. In this example we're turning logging on for the MyDurableFunctionApp namespace.

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingExcludedTypes": "Request",
      "samplingSettings": {
        "isEnabled": true
      }
    },
    "logLevel": {
      "MyDurableFunctionApp": "Information"
    }
  }
}

And third, when you use ILogger in an orchestrator function, the best practice is to use CreateReplaySafeLogger on the IDurableOrchestrationContext. UPDATE - I initially thought that this doesn't work with an ILogger<T> but I was mistaken. The code snippet below shows how to create a replay safe logger from the injected logger.

private readonly ILogger<MyFunctions> injectedLogger; // set up in the constructor

[FunctionName("RunOrchestrator")]
public async Task<List<string>> RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var outputs = new List<string>();
    var log = context.CreateReplaySafeLogger(injectedLogger);

    log.LogInformation("about to start orchestration...");
    // ... call activity functions and populate outputs ...
    return outputs;
}

It may be that there are some other ways round these issues, so do let me know in the comments.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.


I've been a bit quiet here on my blog for the past few months, partly because I've had plenty of stuff outside of work keeping me busy, and partly because I've been working away at several updates to my Pluralsight courses.

The downside of creating courses about Azure is that it is an extremely fast-moving space. New features and services are constantly being added to the platform. The portal changes very frequently, so I've had plenty of demos that needed re-recording to show the updated UI.

One particularly nice thing about this round of updates is that some of the demos are shorter! It's nice to see the need for workarounds and complex prerequisite setups being removed. I think the next few years of updates to cloud services need to be more focused on simplification and ease of use, rather than adding loads of additional features, so it's pleasing to see that happening with many of them.

Anyway, here's a quick rundown of what's changed in the six Pluralsight courses I've updated recently:

Durable Functions Fundamentals

I recorded the original version of my Durable Functions Fundamentals course just before version 2 of Azure Functions was released, and there were also a couple of small breaking changes to the Durable Functions extension itself. So this update was my largest, re-recording all the demos with the latest versions of Visual Studio, Azure Functions and the Durable Functions extension.

Durable Functions remains one of my favourite capabilities of Azure Functions, and is well worth considering if you are implementing any kind of long-running business workflow. It's great to see Durable Functions continue to improve, and it is now much easier to host in a containerized environment, so you can benefit from its capabilities even if you're not hosting in Azure.

Deploying and Managing Containers

My Microsoft Azure Developer: Deploying and Managing Containers course starts with a basic introduction to Docker, and then surveys the many ways you can run containers in Azure. Many of the demo recordings have been updated to reflect changes in tooling and base container image names.

Probably the biggest change is that Azure Service Fabric Mesh has been retired. In one sense this feels like a real shame, as I thought it was a great idea, providing an easy-to-use serverless platform for hosting containerized microservices. However, I think the idea behind Service Fabric Mesh lives on, and we are seeing the emergence of similar platforms based on Kubernetes instead, which will hopefully soon offer all the benefits and simplicity that Service Fabric Mesh promised.

I also updated the Azure Kubernetes Service demos to reflect the many changes in the portal. One really nice simplification is that now you can view and manage Kubernetes resources directly within the Azure Portal - removing some of the complexities of setting up the old dashboard experience.

Create Serverless Functions

My Microsoft Azure Developer: Create Serverless Functions course also had a fairly substantial update. I focused particularly on updating the Visual Studio and VS Code demos, as well as the places where the portal was shown. I also updated the module on containerizing Azure Functions as the base image names have changed.

Microservices Fundamentals and Building Microservices

I also updated two of my microservices courses, Microservices Fundamentals and Building Microservices. These courses are intended to teach the principles of microservices rather than showing implementation specifics, so there were fewer changes needed.

However, the reference demo application eShopOnContainers has been updated and improved somewhat since I first recorded these courses. So I have updated all the demo recordings to show a newer version of eShopOnContainers (the exact version I use is my fork on GitHub).

It's nice to see that the updated eShopOnContainers is simpler to work with. The docker build command is simpler, running it with WSL 2 on Windows requires less setup effort, and the integration tests now run out of the box in Visual Studio 2019, thanks to the built-in container integration that automatically starts up the dependent containerized services like RabbitMQ. One slight change of note is that when you access the homepage you need to visit it via host.docker.internal instead of localhost, or the identity microservice won't accept the redirect URL when you log in.

Implement Azure Functions (AZ-204)

My Microsoft Azure Developer: Implement Azure Functions course is a very short and focused course intended to provide the background information about Azure Functions necessary for taking the AZ-204 certification (Developing Solutions for Microsoft Azure). A new objective was recently added to the exam which is to implement custom handlers. To be honest, I was a little surprised this is expected knowledge for the exam as custom handlers are something that I think the majority of Azure Functions developers will not need to make use of. But it is useful to at least know they exist and what scenarios they can help with, so I added a short module explaining that.

What's next?

There are a few other Pluralsight courses of mine that would benefit from an update, as well as some ideas I have for new courses, so there may be some more courses released later this year. I have also submitted a few talk ideas for upcoming conferences - it's great to see that these are coming back, and I was particularly pleased to see that the South Coast Summit is being held very close to where I live - it would be great to see you there if you can make it along.