I think it's fair to say that "microservices" has established itself as the leading way to architect a modern distributed cloud-native application. I've discussed many of the advantages of this approach over "monolithic" architectures in my Pluralsight courses, such as Microservices Fundamentals.

But it's also well known that microservices bring a lot of challenges with them. How do you perform service discovery? How do you enable developers to easily work with the services locally? How do you implement upgrades while minimizing downtime? How do you effectively monitor everything in a centralized location?

Fortunately, there are multiple tools and frameworks designed to help overcome these challenges. Kubernetes is an excellent orchestration platform that helps us immensely with challenges like deployments, observability, and service discovery. Frameworks like ASP.NET Core come with a whole host of practical features ready to use out of the box, like configuration, logging, and health endpoints. Cloud providers like Azure provide a wide variety of PaaS services that can easily be plugged into a microservices application.

So, in theory, it ought to be really easy to develop, test, maintain, and deploy microservice applications, right? Well, not so fast. Much progress has been made, but there is still some way to go. And projects like Dapr seek to improve things...

What is Dapr?

At first glance, Dapr might seem fairly unimpressive. It offers a collection of "building blocks" that solve several challenges relating to building microservices. These building blocks include service to service invocation, pub/sub messaging, state management, observability, and secret management.

But don't we already have solutions to all of these? Anyone who's built a microservices application has already had to deal with all those problems, and the tools and frameworks we've already mentioned go a long way to easing the pain.

However, I do think Dapr offers something unique. To illustrate, I'm just going to pick one of the building blocks - service to service invocation - to highlight how Dapr can provide added value on top of what you are already using.

An Example: Service to Service Invocation

When one microservice needs to call another, several things need to happen.

First, we need service discovery - to find the address of the service we're communicating with. Of course, Kubernetes makes this pretty painless with inbuilt DNS. But it's not uncommon for developers to run microservices locally on their development machines, in which case each microservice is at localhost on a specific port number, and you need some alternative mechanism in place to point to the correct service when running locally. With Dapr, you can address the target service by name regardless of whether you're running in "self-hosted" mode (directly on your machine) or on Kubernetes.
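
To make this concrete, here's a minimal sketch of invoking another service through the Dapr sidecar's HTTP API. The "catalog" app id and "products" method name are hypothetical, for illustration only; the sidecar port comes from the DAPR_HTTP_PORT environment variable (3500 is the default in self-hosted mode).

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CatalogClient
{
    private static readonly HttpClient http = new HttpClient();

    public static async Task<string> GetProductsAsync()
    {
        // The Dapr sidecar listens on DAPR_HTTP_PORT (3500 by default when self-hosted)
        var daprPort = Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500";
        // Dapr resolves the "catalog" app id to the right address, whether we're
        // running self-hosted on localhost or inside a Kubernetes cluster
        var response = await http.GetAsync(
            $"http://localhost:{daprPort}/v1.0/invoke/catalog/method/products");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}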

Second, when communicating between microservices it's important to retry if there are transient network issues. Of course, this is possible to implement yourself with libraries like Polly, but that requires everyone to remember to use it - only recently I found a bug in a microservice caused by forgetting to implement retries. With Dapr, this is just a built-in capability.
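
For comparison, here's a sketch of the sort of hand-rolled retry logic (using Polly) that every team would otherwise need to remember to write. The retry count, back-off, and URL are illustrative choices, not recommendations.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.Retry;

public static class ResilientCaller
{
    private static readonly HttpClient http = new HttpClient();

    // Retry up to 3 times on transient HTTP failures,
    // with exponential back-off (2s, 4s, 8s)
    private static readonly AsyncRetryPolicy retryPolicy = Policy
        .Handle<HttpRequestException>()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<HttpResponseMessage> GetOrdersAsync() =>
        retryPolicy.ExecuteAsync(() => http.GetAsync("http://localhost:5001/api/orders"));
}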

Third, it's very important that communication between microservices is secured. Communications should be encrypted, and authentication should be used to validate that the caller is authorized. A widely recognized best practice is mutual TLS (mTLS), but this can be a pain to configure correctly, and often gets in the way when you are running locally in development. With Dapr, all service to service communications are automatically encrypted for you with mTLS, and certificates are automatically rotated. This takes a huge headache away.

Fourth, it's very valuable to have distributed tracing and metrics gathering to give you a way to understand the communications between your microservices. Azure offers this with Application Insights, but again you don't necessarily benefit from that if you are running locally, and I've had problems in the past getting it correctly configured on all services. With Dapr, observability is another built-in part of the runtime. It uses open standards such as OpenTelemetry and W3C Trace Context, making it easy to integrate with existing tools.

Fifth, another aspect of security is governing which microservices are allowed to call each other. For example, microservice A might be allowed to talk to microservice B but not vice versa. It can be a pain to roll your own framework for configuring something like this, and if you're not a security expert it's easy to get it wrong. Service meshes can offer this kind of behaviour for a Kubernetes cluster. Dapr can also provide the same access restrictions by means of access control lists, which are easy to configure, and even work when you're running in "self-hosted" mode rather than Kubernetes.

Finally, we're seeing the rise of gRPC as an alternative to HTTP-based APIs for microservices, due to its higher performance and more formalized contracts. Migrating from HTTP to gRPC in a microservices environment could be tricky, as you'd need to upgrade clients and servers at the same time, or provide a period where both protocols were exposed. Dapr again can help us with this - allowing gRPC or HTTP to be used for service to service invocation, and even allowing an HTTP caller to consume a gRPC service.

So as you can see, there's quite a lot to the "simple" task of service invocation, and Dapr gives you a very comprehensive solution out of the box. It's not perfect - I've run into some issues where VPN settings on a corporate network interfere with Dapr's service to service invocation in self-hosted mode. But it has the potential to greatly simplify this aspect of microservice development.

Dive deeper into Dapr

Of course, we've only scratched the surface of what Dapr offers by focusing on a single "building block". We could do the same for the other building blocks. I'm very interested to keep following Dapr and see how it evolves. It is already a very rich and capable platform, and it's very easy to adopt incrementally if you don't want to embrace everything at once. I recommend checking out the free Dapr for .NET Developers book, which is a great introduction if you're a .NET developer.


"Serverless" architecture is one of the most exciting innovations to emerge in the cloud computing space in recent years. It offers several significant benefits including rapid development, automatic scaling and a cost-effective pricing model. Regular readers of my blog will know that I have been (and still am) an enthusiastic proponent of Azure Functions.

But "serverless" does entail some trade-offs. For every benefit of "serverless" there are some corresponding limitations, which may be enough to put some people off adopting it altogether. And it also can seem to be at odds with "containerized" approach to architecture, with Kubernetes having very much established itself as the premier approach to hosting cloud native applications.

I think the next stage of maturity for "serverless" is for the up-front decision of whether or not to use a "serverless" architecture to go away, replaced by a kind of "sliding scale", where how "serverless" your application is becomes a deploy-time decision rather than being baked in up front.

To explain what I mean, let's look at five key benefits of serverless, and how in some circumstances, they introduce limitations that we want to get around. And we'll see that we're already close to a situation where a "sliding scale" allows us to make our application more or less serverless depending on our needs.

Servers abstracted away

The first major selling point of "serverless" is that servers are abstracted away. I don't need to manage them, patch them, or even think about them. I just provide my application code and let the cloud provider worry about where it runs. This is great until, for some reason, I actually do care about the hardware my application is running on. Maybe I need to specify the amount of RAM, or require a GPU or an SSD. Maybe for security reasons I want to be certain that my code is not running on compute shared with other tenants.

Azure Functions is already a great example of the flexibility we can have in this area. Its multiple "hosting plans" allow you to choose between a truly serverless "consumption" plan, where you have minimal control of the hardware your functions are running on, all the way up to a "premium" plan with dedicated servers, or containerizing your Function App and running it on hardware of your choice.

Automatic scale in and scale out

A second major attraction of serverless is that I don't need to worry about scaling in and scaling out. The platform itself detects heavy load and automatically provisions additional compute resource. This is great until I need to eliminate "cold starts" caused by scaling to zero, or need to have more fine-grained control over the maximum number of instances I want to scale out to, or want to throttle the speed of scaling in and out.

Again, we're seeing with serverless platforms an increased level of flexibility over scaling. With Azure Functions, the Premium plan allows you to keep a minimum number of instances on standby, and you can even take complete control over scaling yourself by hosting your Functions on Kubernetes and using KEDA to manage scaling.

Consumption based billing

A third key benefit of serverless is only paying for what you use. This can be particularly attractive to startups or when you have dev/test/demo deployments of your application that sit idle for much of the time. However, the consumption-based pricing model isn't necessarily the best fit for all scenarios. Some companies prefer a predictable monthly spend, and also want to ensure costs are capped (avoiding "denial of wallet" attacks). Also many cloud providers such as Azure can offer significantly reduced "reserved instance" pricing, which can make a lot of sense for a major application that has very high compute requirements.

Once again, Azure Functions sets a good example for how we can have a sliding scale. The "consumption" hosting plan is a fully serverless pricing model, whilst you can also host on a regular ("dedicated") App Service plan to get fixed and predictable monthly costs, with the "premium" plan offering a "best of both worlds" compromise between the two. And of course the fact that you can host on Kubernetes gives you even more options for controlling costs, and benefitting from reserved instance pricing.

Binding-based programming model

Another advantage associated with serverless programming models is the way that they offer very simple integrations to a variety of external systems. In Azure Functions, "bindings and triggers" greatly reduce the boilerplate code required to interact with messaging systems like Azure Service Bus, or reading and writing to Blob Storage or Cosmos DB.
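
For example, here's a sketch of a queue-triggered function that copies each message out to Blob Storage with almost no plumbing code. The queue name, connection setting name, and blob path are illustrative; the runtime takes care of deserializing the message and opening the output stream.

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        // the trigger hands us the message body as a string
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message,
        // the output binding opens a writable blob stream for us
        [Blob("processed-orders/{rand-guid}.json", FileAccess.Write)] Stream output,
        ILogger log)
    {
        log.LogInformation($"Processing order message: {message}");
        using var writer = new StreamWriter(output);
        writer.Write(message);
    }
}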

But this raises some questions. Can I benefit from this programming model even if I don't want to use a serverless hosting model? And can I benefit from serverless hosting without needing to adopt a specific programming model like Azure Functions?

The answer to both questions is yes. I can run Azure Functions in a container, allowing me to benefit from its bindings without needing to host it on a serverless platform. And we are increasingly seeing "serverless" ways to host containerized workloads (for example, Azure Container Instances, or using Virtual Nodes on an AKS cluster). This means that if I prefer to use ASP.NET Core, which isn't inherently a serverless coding model, or even if I have a legacy application that I can containerize, I can still host it on a serverless platform.

As a side note, one of the benefits of the relatively new "Dapr" distributed application runtime is the way that it makes Azure Functions-like bindings easily accessible to applications written in any language. This allows you to start buying into some "serverless" benefits from an existing application written in any framework.
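
As a sketch of what that looks like, invoking a Dapr output binding is just an HTTP POST to the sidecar, so any language or framework that can make HTTP requests can use it. The "orders-queue" binding name here is hypothetical - it would be whatever name you gave the binding in your Dapr component file.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class OrderPublisher
{
    private static readonly HttpClient http = new HttpClient();

    // orderJson is assumed to already be a valid JSON document
    public static async Task PublishAsync(string orderJson)
    {
        var daprPort = Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500";
        // Dapr bindings take an operation name plus the payload data
        var payload = $"{{\"operation\":\"create\",\"data\":{orderJson}}}";
        var response = await http.PostAsync(
            $"http://localhost:{daprPort}/v1.0/bindings/orders-queue",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}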

Serverless databases

In serverless architectures, you typically prefer a PaaS database, rather than hosting it yourself. Azure comes with a rich choice of hosted databases including Azure SQL Database and Azure Cosmos DB. What we've also seen in recent years is a "serverless" pricing model coming to these databases, so that rather than a more traditional pricing model of paying a fixed amount for a pre-provisioned amount of database compute resource, you pay for the amount of compute you actually need, with the database capacity automatically scaling up or down as needed.

Of course, this comes with many of the same trade-offs we discussed for scaling our compute resources. If your database scales to zero, you have a potential cold start problem. And costs could be wildly unpredictable, especially if a bug in your software results in a huge query load. Again, the nice thing is that you don't have to choose up front. You could deploy dev/test instances of your application with serverless databases to minimise costs, given that they may be idle much of the time, but for your production deployment choose to pre-provision sufficient capacity for expected loads, perhaps allowing some scaling but within carefully constrained minimum and maximum levels.

Summary

"Serverless" does not have to be an "all-in" decision. It doesn't even need to be an "up front" decision anymore. Increasingly you can simply write code using the programming models of your choice, and decide at deployment time to what extent you want to take advantage of serverless pricing and scaling capabilities.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions.

One of the questions I frequently get asked by people who watch my Durable Functions Fundamentals Pluralsight course is whether you can use dependency injection with Durable Functions (as my demo app uses static methods). The answer is yes, and it's quite simple, although there are a couple of considerations about logging that are worth pointing out.

In this post I'll give a quick overview of the main steps, and you can get more details on the official docs site if you'd like to dive further into the topic of dependency injection in Azure Functions.

UPDATE: I should point out that in this post I am using the in-process Azure Functions programming model on .NET Core 3.1, rather than the out-of-process .NET 5 model, because the out-of-process model does not currently support Durable Functions. It is due to gain Durable Functions support in the future, but it uses a different approach for setting up dependency injection, so this tutorial does not apply to .NET 5 Azure Functions apps.

Step 1 - Add NuGet references

First of all, if you've already created your Azure Function app, you need to add two NuGet references to your csproj file. These are Microsoft.Azure.Functions.Extensions and Microsoft.Extensions.DependencyInjection.

<PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.1.0" />
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.DurableTask" Version="2.5.0" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="5.0.1" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.13" />

Step 2 - Register Services at Startup

The next step is to create a Startup class derived from FunctionsStartup and override the Configure method. In here you can set up whatever dependencies you need with AddSingleton or AddTransient.

The example I show below also calls AddHttpClient to register IHttpClientFactory. And you could even register a custom logger provider here if you need that.

Note that we also need to add an assembly level attribute of FunctionsStartup that points to the custom Startup class.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

[assembly: FunctionsStartup(typeof(DurableFunctionApp.Startup))]

namespace DurableFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddHttpClient(); // registers IHttpClientFactory
            builder.Services.AddSingleton<IGreeter>(_ => new Greeter());
        }
    }
}
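
(For completeness, IGreeter here is just a trivial stand-in dependency used for illustration - any interface and implementation of your own would do.)

public interface IGreeter
{
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello {name}!";
}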

Step 3 - Injecting Dependencies

Injecting dependencies is very simple. Instead of defining functions as static methods on a static class, just create a regular class with a constructor that takes the dependencies and stores them as class members. These can then be used in the functions themselves which are now instance methods.

In this example I'm using the injected IGreeter in an activity function. You can use dependencies in orchestrator functions as well, but remember that the strict rules of orchestrator functions must still be adhered to.

public class MyOrchestration
{
    private readonly IGreeter greeter;

    public MyOrchestration(IGreeter greeter)
    {
        this.greeter = greeter;
    }

    [FunctionName("MyOrchestration_Hello")]
    public string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return greeter.Greet(name);
    }
}

And that's all there is to it - although I did say I'd mention a few gotchas with logging.

Gotchas - injecting loggers

When you set up dependency injection in an Azure Functions project, you might be tempted to inject an ILogger using the class constructor, rather than having an ILogger as a parameter on every function. If you do this you'll run into a few problems.

First, you can't inject the plain (non-generic) ILogger - that doesn't get registered by default. Instead you have to inject an ILogger<T> - so ILogger<MyFunctions>, for example.

public class MyFunctions
{
    private readonly ILogger<MyFunctions> logger;
    private readonly IGreeter greeter;

    public MyFunctions(ILogger<MyFunctions> logger, IGreeter greeter)
    {
        this.logger = logger;
        this.greeter = greeter;
    }

    // ... functions defined as instance methods follow
}

Second, the logs that you write to that ILogger<T> will get filtered out unless you update your host.json file to include logs for your namespace. In this example we're turning logging on for the MyDurableFunctionApp namespace.

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingExcludedTypes": "Request",
      "samplingSettings": {
        "isEnabled": true
      }
    },
    "logLevel": {
      "MyDurableFunctionApp": "Information"
    }
  }
}

And third, when you use ILogger in an orchestrator function, the best practice is to use CreateReplaySafeLogger on the IDurableOrchestrationContext. UPDATE - I initially thought that this doesn't work with an ILogger<T>, but I was mistaken. The code snippet below shows how to create a replay-safe logger from the injected logger.

private readonly ILogger<MyFunctions> injectedLogger; // set up in the constructor

[FunctionName("MyOrchestration")]
public async Task<List<string>> RunOrchestrator( // not static, so it can access the injected logger
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var outputs = new List<string>();
    var log = context.CreateReplaySafeLogger(injectedLogger);

    log.LogInformation("about to start orchestration...");
    outputs.Add(await context.CallActivityAsync<string>("MyOrchestration_Hello", "Tokyo"));
    return outputs;
}

It may be that there are some other ways round these issues, so do let me know in the comments.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.