
I was really pleased to see Durable Functions went GA yesterday, and it continues to pick up great new features, such as the ability to write orchestrator functions in JavaScript (still in preview). If you've not tried Durable Functions yet, it really is a game-changer, giving you a much better way to manage multiple functions that form a workflow, and greatly simplifying the implementation of complex patterns such as fan-out fan-in (map-reduce) and waiting for human interaction.

Sub-Orchestrations

In this post I want to highlight an interesting feature of Durable Functions called "sub-orchestrations". In Durable Functions an "orchestrator" function describes the order of the steps in your workflow, and "activity" functions are used to implement each of those steps.

With sub-orchestrations, an orchestrator function calls into another orchestrator function, allowing you to make workflows that are themselves built up of other workflows.

Why sub-orchestrations?

But why would you want to do this? When I first read about sub-orchestrations, I didn't think they would be a particularly important feature, but the more workflows I have built, the more benefits I can see to using them.

So here's a quick run-through of some of the reasons why I think you should consider using them once an orchestrator function grows to call more than about four or five activities.

1. Clean code

Real-world workflows consist of multiple steps and tend to grow in complexity over time. If you're trying to ensure that each of your activity functions has a "single responsibility" (which you should be), you're likely to end up with a lot of them, resulting in a long and complex orchestrator function.

Also, the strict "orchestrator function constraints" in Durable Functions, which stipulate that orchestrator functions must be deterministic, tend to increase the number of activity functions in use, as you need to create an activity function whenever you perform a non-deterministic task, such as fetching a value from a database or configuration.
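
For example, reading a setting is non-deterministic, so it belongs in an activity function. Here's a minimal sketch of what that might look like (the "GetConfigValue" activity and the setting name are purely illustrative):

[FunctionName("GetConfigValue")]
public static string GetConfigValue([ActivityTrigger] string settingName)
{
    // activity functions are free to do non-deterministic work
    return Environment.GetEnvironmentVariable(settingName);
}

The orchestrator can then fetch the value with await ctx.CallActivityAsync<string>("GetConfigValue", "MySetting"), keeping the orchestrator code itself deterministic.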

Using sub-orchestrations allows you to logically group together smaller sections of your workflow, which makes for much easier to read and understand code than one giant function consisting of numerous activities.

Here's a very simple code example showing how an orchestrator function might run three sub-orchestrations in sequence:

[FunctionName("MultiStageOrchestrator")]
public static async Task MultiStageOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext ctx)
{
    string input = ctx.GetInput<string>();

    // run three stages in sequence, passing each output on to the next
    var output1 = await ctx.CallSubOrchestratorAsync<string>("Stage1", input);
    var output2 = await ctx.CallSubOrchestratorAsync<string>("Stage2", output1);
    await ctx.CallSubOrchestratorAsync("Stage3", output2);
}
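
Here "Stage1", "Stage2" and "Stage3" are themselves orchestrator functions, each declared with its own [FunctionName] attribute and free to call activities (or further sub-orchestrations) of its own.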

2. Error-handling and retrying

I wrote recently about how great Durable Functions is for handling errors. It allows you to handle errors for the workflow as a whole, or for individual functions. But if a large and complex workflow is made up of a few smaller workflows implemented as sub-orchestrations, each sub-orchestration can manage its own exception handling, including any clean-up that is appropriate just for that part of the overall workflow.
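
For example, a sub-orchestrator might catch a failure from one of its activities, run a compensating clean-up step, and then rethrow so the parent can react. A sketch, with hypothetical "Transform" and "UndoTransform" activities:

[FunctionName("Stage2")]
public static async Task<string> Stage2(
    [OrchestrationTrigger] DurableOrchestrationContext ctx)
{
    var input = ctx.GetInput<string>();
    try
    {
        return await ctx.CallActivityAsync<string>("Transform", input);
    }
    catch (FunctionFailedException)
    {
        // clean up in a way that only this stage needs to know about...
        await ctx.CallActivityAsync("UndoTransform", input);
        throw; // ...then let the parent orchestration decide what happens next
    }
}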

Sub-orchestrations can be retried with back-offs, in exactly the same way that activity functions can. This is very powerful, as retrying a series of activities without the use of sub-orchestrations would be complex to implement.

So here we can see how the second stage in our example above could be configured to make up to four attempts in total, waiting five seconds between attempts:

var output2 = await ctx.CallSubOrchestratorWithRetryAsync<string>("Stage2",
    new RetryOptions(TimeSpan.FromSeconds(5), maxNumberOfAttempts: 4), output1);
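
By default the interval between attempts is fixed, but RetryOptions also has properties such as BackoffCoefficient, MaxRetryInterval and RetryTimeout if you'd rather have an exponential back-off or an overall time limit.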

3. Run sub-orchestrations in parallel

Another thing you might notice if you take the trouble to break a long and complex workflow up into sub-orchestrations is that some of them could run in parallel, as they don't depend on each other's output. Just as you can run activities in parallel to implement a fan-out fan-in pattern, you can do exactly the same with sub-orchestrations: kick off several different sub-orchestrations (or several instances of the same sub-orchestration), then wait for them all to complete with Task.WhenAll.

Running orchestrations in parallel opens the door for performance boosts that would be too much of a pain to implement without the benefit of sub-orchestrations.

In this example, two sub-orchestrations ("Stage1" and "Stage2") are started in parallel; we then wait for both to complete and use their outputs in a call to a third stage.

[FunctionName("ParallelSubOrchestrations")]
public static async Task ParallelSubOrchestrations(
    [OrchestrationTrigger] DurableOrchestrationContext ctx)
{
    string input = ctx.GetInput<string>();

    // start both stages without awaiting, so they run in parallel
    var stage1Task = ctx.CallSubOrchestratorAsync<string>("Stage1", input);
    var stage2Task = ctx.CallSubOrchestratorAsync<string>("Stage2", input);

    await Task.WhenAll(stage1Task, stage2Task);

    // fan back in, passing both outputs on to the final stage
    await ctx.CallSubOrchestratorAsync("Stage3",
        Tuple.Create(stage1Task.Result, stage2Task.Result));
}
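
The same approach works for fanning out over a collection by running several instances of the same sub-orchestration. A sketch, assuming items is a list of inputs you've already fetched:

// hypothetical fan-out: one "Stage1" sub-orchestration per input item
var tasks = items
    .Select(item => ctx.CallSubOrchestratorAsync<string>("Stage1", item))
    .ToList();
var results = await Task.WhenAll(tasks);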

4. Reuse across workflows

If you break complex workflows up into sub-orchestrations, you may find that the same sub-orchestration can be reused by multiple different orchestrators. This eliminates duplication, and avoids the need for orchestrators containing complex branching code. If there is shared logic used by two different workflows, put it into a sub-orchestration that they can both make use of.

In this simple example, "Workflow1" and "Workflow2" share a common "SharedStage" orchestrator, but each performs different tasks before and after it.

[FunctionName("Workflow1")]
public static async Task Workflow1(
    [OrchestrationTrigger] DurableOrchestrationContext ctx)
{
    string input = ctx.GetInput<string>();
    var output1 = await ctx.CallSubOrchestratorAsync<string>("StageA", input);
    var output2 = await ctx.CallSubOrchestratorAsync<string>("SharedStage", output1);
    await ctx.CallSubOrchestratorAsync("StageB", output2);
}

[FunctionName("Workflow2")]
public static async Task Workflow2(
    [OrchestrationTrigger] DurableOrchestrationContext ctx)
{
    string input = ctx.GetInput<string>();
    var output1 = await ctx.CallSubOrchestratorAsync<string>("StageD", input);
    var output2 = await ctx.CallSubOrchestratorAsync<string>("SharedStage", output1);
    await ctx.CallSubOrchestratorAsync("StageE", output2);
}

5. Simplified event sourcing history

Behind the scenes, Durable Functions uses an "event sourcing" approach to storing the history of orchestrations. Every time an activity completes, the orchestrator wakes up and must "replay" through all the prior events that have happened in this orchestration to reconstruct the current state of the workflow and work out what to do next.

The longer and more complex an orchestrator is, the more event sourcing steps must be stored and replayed, making debugging a pain (if you are using breakpoints in your orchestrators), and possibly impacting performance.
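
Incidentally, this replay behaviour is why orchestrator code often guards its logging with the context's IsReplaying flag, so that side effects like log messages aren't repeated on every replay (assuming a log parameter such as the TraceWriter passed into the function):

// only log the first time through, not on every subsequent replay
if (!ctx.IsReplaying)
    log.Info("About to start Stage1");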

However, if sub-orchestrations are used, the event sourcing history for the parent orchestration can replace the entire call to a sub-orchestration with its serialized JSON output, greatly reducing the number of overall events stored against the parent orchestration. This leads to the final benefit I want to mention.

6. Safer upgrades

One possible gotcha with Durable Functions is what happens when you upgrade your code while orchestrations are in progress. You need to be very careful here, as any breaking change to your orchestrator (such as re-ordering functions, or changing the input or output format of activities) will cause things to go wrong when event sourcing history generated by a previous version of the orchestrator function is replayed against the new one.

Sub-orchestrations can't eliminate this problem, but they can provide some protection: if breaking changes can be isolated to a single sub-orchestration, a larger workflow may be able to recover even when one of its sub-orchestration steps fails.

Summary

Durable Functions sub-orchestrations allow you to break large and complex workflows into more granular pieces, opening the door to retries, better error handling, reuse, and parallel execution. They also make for easier-to-read code, and could help make upgrades more reliable.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.