
I had the privilege of speaking about Durable Functions at the Developer South Coast user group a week ago, and I took the opportunity to update my Durable Functions e-Commerce sample app to take advantage of some new features that have recently been added to Durable Functions.

Orchestrator History

One of the great things about Durable Functions is that the history of each orchestration is stored using an "event sourcing" technique, meaning that it is possible to get a very detailed log of exactly what happened.

In particular you can discover what the input and output data of the orchestrator was, as well as the input and output data of every single activity function or sub-orchestrator that was called along the way. You can access the status of an orchestrator by calling the Get Instance Status API or using DurableOrchestrationClient.GetStatusAsync.
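As a minimal sketch (assuming a DurableOrchestrationClient named client, a captured instanceId, and an ILogger named log - and going from memory on the overload names, so check the API reference), fetching a single orchestration's status along with its execution history might look like this:

```csharp
// sketch: fetch one orchestration's status, including its event history
// (instanceId is whatever id you captured when starting the orchestration)
var status = await client.GetStatusAsync(instanceId, showHistory: true);

log.LogInformation($"Status: {status.RuntimeStatus}");
log.LogInformation($"Input: {status.Input}, Output: {status.Output}");

// when showHistory is true, status.History contains the orchestration's events
foreach (var historyEvent in status.History)
{
    log.LogInformation(historyEvent.ToString());
}
```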

All this information is brilliant for both troubleshooting and auditing purposes. But it does also raise a few questions.

First, if I'm a heavy user of Durable Functions, will my Task Hub fill up with vast amounts of historical data that I no longer need or want?

Second, how can I search back through history to find any failed orchestrations, or orchestrations that are still running but should have terminated by now?

Enumerating Orchestrations

The recent Durable Functions 1.7.0 release includes features that help with both of those questions. It builds on the existing get all instances API, adding paging capabilities, which are essential when a large number of historical orchestrations are present.

In my Durable Functions e-Commerce sample app, I have a web page that uses the get all instances API to show all orchestrations started in the last two hours (which works well for my talks, as I only want to show orchestrations I create during the talk). I do this with DurableOrchestrationClient.GetStatusAsync, passing in the start time and all the orchestration statuses I'm interested in (which is all of them - this method could probably do with a simpler way of expressing that).

var statuses = await client.GetStatusAsync(
    DateTime.UtcNow.AddHours(-2), // createdTimeFrom: two hours ago
    null,                         // createdTimeTo: no upper bound
    Enum.GetValues(typeof(OrchestrationRuntimeStatus)).Cast<OrchestrationRuntimeStatus>());

Purging Orchestration History

There are several reasons why you might want to purge orchestration history. Maybe you have a strict data retention policy where you don't want to store data older than a certain age. Or maybe you just don't like the idea of your Task Hub filling up with millions of old orchestration history records that you no longer have any use for.

With Durable Functions 1.7, history can easily be purged using the new Purge Instance History API, which allows you to delete either the history for a specific orchestration, or the history of all orchestrations that finished before a specific time. Obviously, you should take care not to purge the history of in-progress orchestrations, or you will get errors when that orchestration attempts to progress to the next step.
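Going by that description, a time-based purge might be sketched like this (the overload shape - a created-time window plus the set of runtime statuses to purge - is my assumption, and the 30-day cutoff is purely illustrative):

```csharp
// sketch: purge history for finished orchestrations created more than 30 days ago
await client.PurgeInstanceHistoryAsync(
    DateTime.MinValue,            // createdTimeFrom: no lower bound
    DateTime.UtcNow.AddDays(-30), // createdTimeTo: older than 30 days
    new List<OrchestrationStatus>
    {
        OrchestrationStatus.Completed,
        OrchestrationStatus.Failed,
        OrchestrationStatus.Terminated
    });
```

Note that only terminal statuses are listed here, for the reason above: purging the history of in-progress orchestrations will break them.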

In my Durable Functions e-Commerce sample app, I use DurableOrchestrationClient.PurgeInstanceHistoryAsync to allow individual orchestrations to be deleted from my order management page. It's great for when I do a quick practice run before I give the talk and want to hide the resulting history from the UI.

await client.PurgeInstanceHistoryAsync(order.OrchestrationId);

Summary

It's great to see that Durable Functions continues to improve. There are loads more new features I've not mentioned, so do check out the release notes for a full run-down of what's new.

But I'm particularly pleased with these new orchestration history managing APIs. I specifically asked for them and so it's great to see the open source community jump on this and implement my suggestions. These APIs were the one missing feature that I had been waiting for before feeling ready to introduce Durable Functions into one of the products I work on, so many thanks to @gled4er, @k-miyake, @TsuyoshiUshio and everyone else who helped bring these improvements to Durable Functions.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.


I'm a huge fan of the Azure CLI - I've blogged about it and created a Pluralsight course on getting started with it.

I often use the Azure CLI to quickly try out various Azure resources like Web Apps or Cosmos DB databases. After playing with them for a while, I then delete the resource group I've put them in, to clean up and stop paying.

Deleting is especially important when you experiment with expensive resources like a multi-node Service Fabric or AKS cluster. Forgetting to clean up after yourself could be an expensive mistake.

Enter "Noel's grab bag of Azure CLI goodies", an awesome extension to the Azure CLI created by Noel Bundick which adds a "self-destruct" mode along with a bunch of other handy functions.

Installing the extension

To install the extension, simply follow the instructions on GitHub, using the az extension add command and pointing at the latest version (0.0.12 at the time of writing). You can then see it in the list of installed extensions with az extension list.

# to install v0.0.12:
az extension add --source https://github.com/noelbundick/azure-cli-extension-noelbundick/releases/download/v0.0.12/noelbundick-0.0.12-py2.py3-none-any.whl

# to see the list of installed extensions
az extension list -o table

There is a one-time setup action needed for self-destruct, which creates a service principal with Contributor rights; this is used by the Logic App that implements the self-destruct action.

az self-destruct configure
# OUTPUT (no, these are not my real credentials!):
# Creating a service principal with `Contributor` rights over the entire subscription
# Retrying role assignment creation: 1/36
# {
#   "client-id": "c9e0fb8e-18d2-44bd-b0bc-52056965a362",
#   "client-secret": "0dbcece7-34c5-49fe-ac2e-dbab9cb310e1",
#   "tenant-id": "fc3d0620-79f6-4b16-80b4-3b486a33514e"
# }

Using self-destruct mode

To use self-destruct mode, you simply specify the --self-destruct flag on any resource you create with az <whatever> create. A good level to set this at is the resource group, so that multiple resources created inside it all get deleted together.

In this example, I'm creating a resource group called experiment that will self-destruct in 30 minutes, and then putting an App Service Plan inside it so there is something to be deleted inside the group.

$resGroup = "experiment"
# can use 1d, 6h, 2h30m etc
az group create -n $resGroup -l westeurope --self-destruct 30m

# create something to get deleted
az appservice plan create -g $resGroup -n TempPlan --sku B1

Note that the extension will tag the resources you create with a self-destruct-date tag.

If we look inside our resource group, we'll see that not only is there the App Service Plan we created, but also a Logic App. This Logic App exists solely to implement the self-destruct, and is even able to delete itself when it's done, which is convenient.

# see what's in the resource group (there will also be a Logic App)
az resource list -g $resGroup -o table

# Name                                                ResourceGroup    Location    Type                       Status
# --------------------------------------------------  ---------------  ----------  -------------------------  --------
# self-destruct-resourceGroups-experiment-experiment  experiment       westeurope  Microsoft.Logic/workflows
# TempPlan                                            experiment       westeurope  Microsoft.Web/serverFarms

If you want to, you can explore the Logic App in the Azure portal to see how it works.

See it in action

To see what resources are scheduled for self-destruct, you can use the az self-destruct list command:

az self-destruct list -o table
# Date                        Name        ResourceGroup
# --------------------------  ----------  ---------------
# 2018-11-30 13:12:42.750344  experiment  experiment

If you've changed your mind, you can disarm self-destruct mode with az self-destruct disarm, or re-enable it later with a different duration using az self-destruct arm.
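For example (the -g and --timer flags here are my assumption - check az self-destruct --help for the actual syntax):

```
# disarm the timer on the experiment resource group (flags assumed)
az self-destruct disarm -g experiment

# later, re-arm with a longer duration (flags assumed)
az self-destruct arm -g experiment --timer 2h
```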

Finally, once the timer has expired, you can check whether it worked by searching for resources in the group. If all went well, there'll be nothing to see:

az resource list -g $resGroup -o table
# Resource group 'experiment' could not be found.

Summary

The self-destruct mode extension is a great way of protecting yourself against expensive mistakes, and worth considering for all short-lived experiments. It's a superb idea, and nicely executed. The idea could be developed further: for example, it could email you asking whether you are still using a resource group, and delete it if you don't respond within a set period of time - a sort of "dead man's switch" for Azure.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


Just a quick update of a few things I'll be doing this month that you might also be interested in...

Dec 1 - Advent of Code is starting again

Regular readers of this blog will know that I'm a big fan of the Advent of Code site, which gives you daily programming puzzles that will stretch you and improve your coding skills. For each of the last three years I've blogged my answers, and I've used it as a chance to improve my LINQ, F# and JavaScript skills. I've not picked a theme for this year's solutions yet, but hopefully I'll have time to attempt them, and if possible I'll blog about how I get on.

Dec 4 - Microsoft Connect(); Online Conference

Connect(); 2018 is a developer-focused event which will keep you up to date with the latest news in the world of Azure and Microsoft developer tools. I'm particularly looking forward to the "Kubernetes for the klueless" session from Brendan Burns, and to learning more about .NET Core 3. This year there will also be some live coding sessions streamed on Twitch as part of the event.

Bonus Extra - A few sessions from the Docker EU Conference will also be streamed live on Dec 4th and 5th. A great opportunity to catch up on the latest innovations in the world of containers.

Dec 6 - Azure Durable Functions at Developer South Coast

If you're based anywhere near the South Coast of England, then you'd be more than welcome to join me at Developer South Coast where I'll be talking about how to create serverless workflows using the superb Azure Durable Functions extension.

Dec 11 - Containers on Azure at Azure Thames Valley

I'm also very privileged to have been invited to speak at the inaugural Azure Thames Valley event in Maidenhead, talking about the various options available to you for running containers in Azure (spoiler: there are lots). I'll be joined by Richard Conway, who'll be talking about Azure Cost Efficiency. Again, I'd love to meet you at this event if you're in the London area.