
I am pleased to announce a new Pluralsight course I recently released, entitled "Microsoft Azure Developer: Implement Azure Functions". You may know I have already created several other Pluralsight courses on Azure Functions, including Azure Functions Fundamentals and Azure Durable Functions Fundamentals, but this one is shorter and is explicitly focused on helping you prepare for the AZ-204: Developing Solutions for Microsoft Azure exam.

In particular, this course is designed to give you a quick overview of the essentials you need to know about Azure Functions in order to prepare for the AZ-204 exam.

In about 40 minutes I cover a lot of ground:

  • Creating Azure Function Apps in the portal
  • Using HTTP, timer and blob triggers
  • Using Storage Queue and Cosmos DB bindings
  • Creating C# and JavaScript functions
  • Developing and testing functions in the portal, VS Code and Visual Studio
  • Creating serverless workflows with Durable Functions

And this course is part of a series of courses currently being released that together will give you the broad background knowledge you need in order to pass the AZ-204 exam. It would also be useful to anyone wanting to get a very quick introduction to the capabilities of Azure Functions, before perhaps diving into more detail with one of my other courses.

I recently passed the AZ-204 exam myself, which was a very interesting experience as it was my first ever Azure certification. Despite considering myself an expert in most topics covered by the exam, there certainly were some tricky questions, so it's definitely worth being well prepared. There is a strict NDA around the content of the exam, so I can't tell you anything about what questions to expect, but I would say that the more experience you have actually developing applications in Azure that use the technologies that are covered, the better. Watching a few Pluralsight courses and then building out a few demo apps would be an excellent way to prepare.



Regular readers of this blog will know I am a huge fan of Azure Durable Functions which provide you a way to define serverless workflows in code. My Pluralsight course Azure Durable Functions Fundamentals goes into a lot more detail about them if you'd like to learn more.

But a while back, a new programming model was added to Durable Functions called "Durable Entities" (also called "stateful entities"). Each "entity" manages some state that can be operated on by sending it messages. An entity function receives messages, and can either update the state or perform some other action.
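
To make this concrete, here's roughly what a minimal class-based entity looks like in C# (a trimmed-down version of the counter example from the Durable Functions documentation):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Newtonsoft.Json;

[JsonObject(MemberSerialization.OptIn)]
public class Counter
{
    [JsonProperty("value")]
    public int Value { get; set; }

    // operations are invoked by sending the entity a message
    public void Add(int amount) => Value += amount;
    public int Get() => Value;

    // the entity function dispatches each incoming message
    // to the matching method on this class
    [FunctionName(nameof(Counter))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<Counter>();
}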

It's an intriguing concept, similar in many ways to the actor model (with a few differences worth noting), but initially it wasn't obvious to me what sort of problems this model would be a good fit for.

In this article I want to give some of my thoughts about when Durable Entities are a good choice, and when they don't work so well.

What are they not good for?

Let's start with some problems for which they aren't necessarily the right choice.

1. Workflows

I don't think Durable Entities in any way replace Orchestrator functions as the go-to approach for building serverless workflows. Orchestrations are a great way of expressing a workflow that has a clear pathway through from beginning to end.

An example would be in an eCommerce application. When you receive a new order, there might be a series of steps including billing a credit card, checking stock levels, and emailing the customer. The standard Durable Functions approach of orchestrator and activity functions still works just great in these scenarios.
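
As a reminder, such an orchestrator might look something like this sketch (the Order type and activity names are invented for illustration):

public class Order { public string Id { get; set; } }

[FunctionName("ProcessOrder")]
public static async Task ProcessOrder(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // a clear pathway through from beginning to end
    var order = context.GetInput<Order>();
    await context.CallActivityAsync("BillCreditCard", order);
    await context.CallActivityAsync("CheckStockLevels", order);
    await context.CallActivityAsync("EmailCustomer", order);
}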

2. Databases

One attraction of Durable Entities is that they give us a super easy way to store state without needing a database. However, they are not a replacement for a database.

For one thing, you have minimal querying capabilities - you can look up an entity by id, and you can ask for all entities of a particular type, but that's about it. Anything you need rich and powerful querying capabilities for still belongs in a database.

Another consideration is that the entity state is deserialized and reserialized to JSON every time the entity is messaged. So you probably also want to avoid entities that manage very large amounts of state.

3. Synchronous RPC

One thing you might expect to be able to do is make remote procedure calls to your entities. In other words, you send an entity a message, it processes it, and then returns the new state to you all in a single operation.

However, this is not supported. Durable Entities follow a pattern similar to CQRS, where you can either update state (by sending a message) or query for state (requesting the current state) but not both at the same time.

Note that these rules are actually relaxed if you interact with the entity from an orchestrator function. Then you are allowed to "call" an entity where you can signal it and wait for the response in a single step.
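
To illustrate, here's a sketch using the Counter entity above: from an ordinary function you can signal (a command) or read state (a query) via IDurableEntityClient, while an orchestrator can call the entity and wait for a response:

[FunctionName("CounterClient")]
public static async Task CounterClient(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [DurableClient] IDurableEntityClient client)
{
    var entityId = new EntityId(nameof(Counter), "my-counter");

    // command: a fire-and-forget signal - no return value is available
    await client.SignalEntityAsync(entityId, "Add", 1);

    // query: reads the last persisted state, which may not yet
    // reflect the signal we just sent
    var response = await client.ReadEntityStateAsync<Counter>(entityId);
    var value = response.EntityExists ? response.EntityState.Value : 0;
}

// from an orchestrator, CallEntityAsync signals the entity and
// waits for its response in a single step
[FunctionName("GetCounterValue")]
public static async Task<int> GetCounterValue(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    return await context.CallEntityAsync<int>(
        new EntityId(nameof(Counter), "my-counter"), "Get");
}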

Clearly Durable Entities aren't a one-size-fits-all solution, so what are they good for?

What are they good for?

In this section I want to share a few examples (mostly not thought up by me) of where Durable Entities have been a good fit for a problem.

1. Circuit breaker

A really simple example is a "circuit breaker", a commonly needed pattern in cloud architectures for temporarily pausing an activity that is repeatedly failing. The circuit breaker has two states: "open" and "closed" (it's an electrical circuit analogy, so "closed" actually means the switch is "on"). If you receive too many errors, you "open" the circuit to prevent any further attempts for a while. After a period of time, and maybe after testing to see if the issue is resolved, the circuit can "close" again to allow activity to resume. Jeff Hollan has actually written up a great article explaining how to implement the circuit breaker pattern with Durable Entities.
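
Jeff's article has a complete implementation; as a bare-bones sketch, a circuit breaker entity might look something like this (the failure threshold and reset delay are arbitrary):

public class CircuitBreaker
{
    public bool IsOpen { get; set; }
    public int FailureCount { get; set; }

    public void RecordFailure()
    {
        FailureCount++;
        if (FailureCount >= 5 && !IsOpen) // arbitrary threshold
        {
            IsOpen = true;
            // schedule a message to ourselves to close the circuit later
            Entity.Current.SignalEntity(Entity.Current.EntityId,
                DateTime.UtcNow.AddMinutes(5), nameof(Close));
        }
    }

    public void Close()
    {
        IsOpen = false;
        FailureCount = 0;
    }

    [FunctionName(nameof(CircuitBreaker))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<CircuitBreaker>();
}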

2. State machines

The circuit breaker is actually a simple example of a "state machine", which is a very helpful pattern that can be applied to many problems in software development.

But more complex state machines crop up all over the place in business processes. Here's a simple example - a Git "pull request" (PR) can move to the "completed" state only when all the following conditions have been met:

  • no reviewer has marked it as rejected or waiting for author
  • the required number of reviewers have approved
  • all comments have been marked as resolved
  • the build is passing

But these conditions don't happen in a particular order, which means that every time your Pull Request entity gets a "message" such as "new comment added", or "approved by reviewer", or "build passed", the rules for whether the PR can be "auto-completed" would need to be re-evaluated.
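
As a sketch, each operation on a hypothetical PullRequest entity could update its own piece of state and then re-run the evaluation (the property names and the rule of two required reviewers are made up):

public class PullRequest
{
    public int Approvals { get; set; }
    public int UnresolvedComments { get; set; }
    public bool BuildPassing { get; set; }
    public bool Rejected { get; set; }
    public bool Completed { get; set; }

    public void ReviewerApproved() { Approvals++; Evaluate(); }
    public void ReviewerRejected() { Rejected = true; }
    public void CommentAdded() { UnresolvedComments++; }
    public void CommentResolved() { UnresolvedComments--; Evaluate(); }
    public void BuildPassed() { BuildPassing = true; Evaluate(); }

    // re-evaluate the auto-complete rules whenever relevant state changes
    private void Evaluate()
    {
        if (!Rejected && Approvals >= 2 && UnresolvedComments == 0 && BuildPassing)
        {
            Completed = true;
        }
    }

    [FunctionName(nameof(PullRequest))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<PullRequest>();
}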

3. IoT offline detection

One very creative idea that the Azure Functions team shared was using Durable Entities to detect when an IoT device has gone offline. Imagine having hundreds of IoT devices such as temperature sensors, each emitting data periodically, and you want to detect if a sensor has gone offline. That means responding to the "non-event" of a sensor not reporting back for more than a certain period of time.

The solution was to create a Durable Entity for each IoT device that simply recorded the last time it received a value. Every time it received a value, it also sent itself a future scheduled message; when that message arrived, it could check the entity state to see whether the sensor was still online.
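
As a rough sketch of that idea (the timeout window is arbitrary; the writeup linked below has the real implementation):

public class DeviceTracker
{
    private static readonly TimeSpan Timeout = TimeSpan.FromMinutes(5); // assumed window

    public DateTime LastSeen { get; set; }

    public void MessageReceived()
    {
        LastSeen = DateTime.UtcNow;
        // schedule a future message to ourselves to check for the "non-event"
        Entity.Current.SignalEntity(Entity.Current.EntityId,
            DateTime.UtcNow.Add(Timeout), nameof(CheckOffline));
    }

    public void CheckOffline()
    {
        if (DateTime.UtcNow - LastSeen >= Timeout)
        {
            // no reading received within the window - flag the device as offline here
        }
    }

    [FunctionName(nameof(DeviceTracker))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<DeviceTracker>();
}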

There's a superb writeup of this approach here from Kees Schollaart, which includes a SignalR visualization.

4. Attendee counter

IoT devices and circuit breakers are examples of long-lived entities. But it's also very common to apply this model to relatively short-lived entities.

One simple example I heard (I'm afraid I can't find the link) was using Durable Entities as an attendee counter for a conference. Every time someone steps into the room for a session, a button is pressed on a mobile application, which signals the Durable Entity for that session so it can increment the total by one. At the end of the day, each Durable Entity can be queried for the attendee count for its session.

This is arguably much simpler than maintaining a running count in a database, which would generate lots of unnecessary database load and force you to think about concurrency (Durable Entities process incoming messages serially, one after the other).

I've used Durable Entities for a very similar use case, where each online poll is tracked by a Durable Entity that maintains the current vote count for each option as votes come in. (I found a similar example here from Davide Guida, where each voting option is tracked by a separate entity).
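
A per-poll entity along those lines might look something like this sketch (the names are illustrative):

public class PollCounter
{
    public Dictionary<string, int> Votes { get; set; } = new Dictionary<string, int>();

    // messages are processed one at a time, so no locking is needed
    public void Vote(string option)
    {
        Votes.TryGetValue(option, out var count);
        Votes[option] = count + 1;
    }

    [FunctionName(nameof(PollCounter))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<PollCounter>();
}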

5. HTTP Request bins

My friend Paco came up with another creative use for Durable Entities. He built an "HTTP request bin" application, which simply remembers the most recent requests made to any HTTP endpoint and can be a great debugging tool. This saves the cost and hassle of creating a database to store state that isn't particularly high value and doesn't need to be retained long-term.

6. Game state

Another common use for Durable Entities is to manage game state. Each player or instance of a game could store its state as a Durable Entity, and this works well for games that can be modelled as a series of "actions" which mutate the state of game entities. Jeremy Likness has put together a great sample of this approach for a dungeon game.

7. Online Test

One usage I've been experimenting with is an online test website I created. It has a database of thousands of multiple-choice questions, divided up into subjects. When a student decides to take a test, 20 questions are randomly selected, and the answer choices are shuffled into a random order. They then have 15 minutes to complete the test.

Essentially this means that for every test started I need to track some state: the time it was started, the questions in the test, and which of the shuffled answers is the correct one. Of course this could be done with another table in my database, but it's very much transient data. Many people start tests and never bother finishing them, so it's nice to be able to track in-progress tests without cluttering up my main database.
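
A sketch of what such an entity could look like (the Question type and exact operations are illustrative):

public class TestSession
{
    public DateTime StartedAt { get; set; }
    public List<Question> Questions { get; set; }

    public void Start(List<Question> questions)
    {
        StartedAt = DateTime.UtcNow;
        // each question records its shuffled answers and which one is correct
        Questions = questions;
    }

    public bool IsExpired() => DateTime.UtcNow - StartedAt > TimeSpan.FromMinutes(15);

    [FunctionName(nameof(TestSession))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<TestSession>();
}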

A similar use-case is found in an article by Laur about using Durable Entities to implement a shopping basket. The similarity is that many website users might put items into the shopping basket but never actually bother to complete the purchase.

Summary

Durable Entities is a unique programming model that opens the door to creative solutions for problems you might previously have tackled in other ways. Hopefully by sharing ideas of what other people are doing I've given you some inspiration for what situations you could apply Durable Entities to.

Of course I'd love to hear about other ways people have put Durable Entities to good use, so please let me know in the comments how you've got on with them.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.


In my recent Pluralsight course "Versioning and Evolving Microservices with ASP.NET Core", I built my demos on top of an existing microservices application that had been created as the basis for a Microservices in ASP.NET Core learning path on Pluralsight (which is nearing completion).

For messaging between microservices, this application used Rebus, which is a very simple service bus implementation in .NET that allows you to plug in a few different services as a back-end. For example, in Azure you could use Azure Service Bus, Azure Storage Queues, or Azure SQL Database, and many other "transports" are available.

In this post, I'll go through the basics of setting up Rebus in ASP.NET Core, where we are going to have one microservice send messages and another handle them.

I'm going to use Azure Storage Queues as the transport as they are (1) very simple and cheap, and (2) have an emulator you can use for local development. And of course by using an abstraction, we are free to change later to a different underlying transport.

Sending messages

In the project that will send messages using Rebus, we'll add references to the following NuGet packages in our csproj file:

<PackageReference Include="Rebus" Version="6.4.1" />
<PackageReference Include="Rebus.AzureQueues" Version="1.0.0" />
<PackageReference Include="Rebus.Microsoft.Extensions.Logging" Version="2.0.0" />
<PackageReference Include="Rebus.ServiceProvider" Version="5.0.6" />

Next, in Startup.ConfigureServices we'll use the AddRebus method to set up Rebus. I'm configuring it to integrate with the ASP.NET Core logging. I'm using UseAzureStorageQueuesAsOneWayClient for the transport as this project will only need to send messages, rather than receive them.

I need to pass in a CloudStorageAccount, which in development can use the connection string of the Azure Storage Emulator (UseDevelopmentStorage=true).

And I also need to tell it what queues to route different message types to. In my simple example I have one message called VoteMessage, and I'll send that to a queue name I read out of configuration.
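
For reference, the corresponding entries in appsettings.json might look something like this (the queue name "votes" is just an example):

{
  "ConnectionStrings": {
    "AzureQueues": "UseDevelopmentStorage=true"
  },
  "AzureQueues": {
    "QueueName": "votes"
  }
}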

var storageAccount = CloudStorageAccount.Parse(Configuration.GetConnectionString("AzureQueues"));

// Configure and register Rebus
services.AddRebus((configure, provider) => configure
    .Logging(l => l.MicrosoftExtensionsLogging(provider.GetRequiredService<ILoggerFactory>()))
    .Transport(t => t.UseAzureStorageQueuesAsOneWayClient(storageAccount))
    .Routing(r => r.TypeBased().Map<VoteMessage>(Configuration["AzureQueues:QueueName"]))
    );

Also in my Startup.Configure method I added the following call to UseRebus:

app.ApplicationServices.UseRebus();

Finally, in my controller or Razor page that wants to send the message, I simply take a dependency on Rebus.Bus.IBus, and call the Send method. Rebus knows what queue to send it to based on the type of message I'm sending.

await bus.Send(new VoteMessage()
{
    PollId = Poll.Id,
    Option = option,
    VoterId = voterId
});
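
For completeness, VoteMessage itself is just a plain class that Rebus can serialize; something like this (the property types are assumed):

public class VoteMessage
{
    public Guid PollId { get; set; }
    public string Option { get; set; }
    public string VoterId { get; set; }
}

The sending and handling projects both need this type, so it typically lives in a shared messages assembly.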

Handling messages

In the ASP.NET Core application that handles the messages, the steps are similar. In my very simple example, I'm putting the handlers into an existing ASP.NET Core web API application, although in a production app, I'd want to separate this concern and have a dedicated worker process.

The steps are almost identical. We're referencing the same four NuGet packages. And our Startup.ConfigureServices method looks like this.

services.AddRebus((configure, provider) => configure
    .Logging(l => l.MicrosoftExtensionsLogging(provider.GetRequiredService<ILoggerFactory>()))
    .Transport(t => t.UseAzureStorageQueues(storageAccount, Configuration["AzureQueues:QueueName"])));

Note that we don't need to set up any routing because this project isn't sending any messages. But we do need to tell it which queue to listen for messages on, which is provided as an argument to the UseAzureStorageQueues method.

Rebus also needs to know how to create the appropriate handler for each message type. The easiest way to do this is to tell it to scan an assembly for all handlers with a call to AutoRegisterHandlersFromAssemblyOf. This will register every class in the assembly that implements IHandleMessages<T>. It also means that your handler classes can take dependencies on anything that can be resolved from the ASP.NET Core DI container.

services.AutoRegisterHandlersFromAssemblyOf<VoteMessageHandler>();

Again, in the Startup.Configure method we need to call UseRebus. Note that this will actually attempt to start listening on the queue, so if your Storage Queue is inaccessible or your configuration is wrong, you may get an error here (another reason not to put message handlers in the same project as your APIs).

app.ApplicationServices.UseRebus();

Finally, we need a message handler for our VoteMessage, which we create by implementing IHandleMessages<VoteMessage> and its Handle method. As you can see, it's very straightforward:

public class VoteMessageHandler : IHandleMessages<VoteMessage>
{
    private readonly VoteDbContext dbContext;
    private readonly ILogger<VoteMessageHandler> logger;

    public VoteMessageHandler(VoteDbContext dbContext, ILogger<VoteMessageHandler> logger)
    {
        this.dbContext = dbContext;
        this.logger = logger;
    }
    public async Task Handle(VoteMessage message)
    {
        // save to the database
    }
}

Summary

Obviously Rebus has a lot more capabilities than I'm showing here. But the nice thing about it is that there is a very low barrier to entry, and you can start simple, adding basic messaging with swappable transports to your ASP.NET Core microservices with minimal effort.