
One of the really nice new features released last year in Durable Functions v2 is support for "Durable HTTP APIs".

This feature simplifies calling HTTP APIs from your orchestrator functions. As you may know, in an orchestrator function, you're not allowed to perform any non-deterministic operations, so to call a HTTP API, you'd need to call an activity function, and make the HTTP request in there. The Durable HTTP feature removes the need to create an additional activity function.
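For comparison, here's a rough sketch of what that activity-based approach might look like (the MakeHttpRequest name and the per-call HttpClient are purely illustrative):

// hypothetical activity function that performs the non-deterministic HTTP call
[FunctionName("MakeHttpRequest")]
public static async Task<string> MakeHttpRequest([ActivityTrigger] string url)
{
    // in a real app you'd reuse a single shared HttpClient rather than creating one per call
    using var client = new HttpClient();
    return await client.GetStringAsync(url);
}

// the orchestrator then delegates the request to the activity:
// var body = await context.CallActivityAsync<string>("MakeHttpRequest", "https://mysite.com/");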

Here's an example of an orchestrator that simply makes a GET request to another site, using IDurableOrchestrationContext.CallHttpAsync without needing an activity function. The returned result object gives us access to the status code, body and headers of the HTTP response.

[FunctionName(nameof(DurableHttpOrchestrator))]
public static async Task DurableHttpOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context,
        ILogger log)
{
    var result = await context.CallHttpAsync(HttpMethod.Get,
        new System.Uri("https://mysite.com/"));
    log.LogInformation($"Completed: {result.StatusCode} {result.Content}");
}

This works really well, but what if the endpoint you are calling takes a long time to return? In my tests, Durable Functions gives up waiting and throws an exception after about 90 seconds.

Fortunately, there is another really cool capability built into durable HTTP calls, and that is support for async operation tracking with automatic 202 polling.

With this approach, if the API you're calling is a long-running operation, it simply returns a 202 Accepted status code, with a Location header indicating where to go to poll for progress. You can also return a Retry-After header, which specifies how many seconds to wait before polling.

When Durable Functions makes a CallHttpAsync request and gets a 202 in response, it starts polling the location in the Location header. If the operation has not yet completed, it should again receive a 202 response with the Location and optional Retry-After headers. Once the operation has completed, the polling endpoint can simply return any other status code.

All of this behaviour is completely transparent in the orchestrator code. You simply call CallHttpAsync and it will keep polling until it gets a response other than 202.

To try this out you can also use Azure Functions to implement an API that uses the 202 pattern. Obviously you wouldn't actually need to do this if your API was in the same function app as the orchestrator, but here's how to implement the pattern anyway.

The first function we need is the one that starts the long-running operation. This is called BeginOperation, and it simply takes a query string parameter called duration, which specifies how long in seconds the overall operation will take.

I then construct the polling location, which assumes another endpoint on the same function app called HttpFuncProgress, and I am also explicitly adding the Retry-After header with a value of 5 seconds. In theory this is optional, but I found when testing with the local runtime that I got a NullReferenceException inside the Durable Functions extension if I left this out. It's possibly a bug, as from the look of the code, it's supposed to fall back to a default polling interval.

[FunctionName(nameof(BeginOperation))]
public static IActionResult BeginOperation(
    [HttpTrigger(AuthorizationLevel.Function, "post", "get", Route = null)] 
    HttpRequest req, ILogger log)
{
    // work out when this long-running operation will complete
    string duration = req.Query["duration"];
    if (!Int32.TryParse(duration, out int durationSeconds)) durationSeconds = 30;
    var expire = DateTime.UtcNow.AddSeconds(durationSeconds).ToString("o");

    // construct the Uri for polling
    var location = $"{req.Scheme}://{req.Host}/api/HttpFuncProgress?expire={HttpUtility.UrlEncode(expire)}";
    log.LogInformation($"Begun operation, due to end in {durationSeconds}s, poll at {location}");

    // optional hint for how long in seconds to wait before polling again
    req.HttpContext.Response.Headers.Add("Retry-After", "5");

    // return a 202 accepted
    return new AcceptedResult(location, "Begun operation");
}

The other function we need is the one that reports progress. For this demo I have simply included the operation expiry time in the polling Uri, so that it can work out whether the long-running operation has finished or not. Obviously a real implementation would likely check in a database of some sort to see whether the operation has completed.

If the operation hasn't completed we need to return the 202 response, and again we include the Location and Retry-After headers. If the operation has completed, we return a 200 OK with OkObjectResult which can include a payload indicating the output of the operation.

[FunctionName(nameof(HttpFuncProgress))]
public static IActionResult HttpFuncProgress(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] 
    HttpRequest req, ILogger log)
{
    // determine how much longer this operation requires
    string expire = req.Query["expire"];
    // RoundtripKind keeps the parsed value in UTC, so the comparison with DateTime.UtcNow below is correct
    if (!DateTime.TryParse(expire, null, System.Globalization.DateTimeStyles.RoundtripKind, out DateTime expireAt))
        return new BadRequestResult();

    // return a 200 if it's finished
    if (DateTime.UtcNow > expireAt)
        return new OkObjectResult($"finished! (it's after {expireAt})");

    // we need to provide the polling Uri again, which is simply the Uri
    // on which we were called
    var location = $"{req.Scheme}://{req.Host}{req.Path}{req.QueryString}";
    var remaining = (int) (expireAt - DateTime.UtcNow).TotalSeconds;
    log.LogInformation($"{remaining} seconds to go, poll at {location}");

    // return the 202 to indicate that the caller should keep polling
    var res = new AcceptedResult(location, $"{remaining} seconds to go");
    req.HttpContext.Response.Headers.Add("Retry-After", "5");
    return res;
}
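To exercise the pattern end to end, the orchestrator from the start of this post just needs to point CallHttpAsync at the BeginOperation endpoint. This is only a sketch - the host name and duration are example values, and if your functions use function-level authorization you'd also need to supply the function key:

// CallHttpAsync handles the 202 polling for us; this call won't complete
// until HttpFuncProgress stops returning 202
var result = await context.CallHttpAsync(HttpMethod.Get,
    new Uri("https://myfuncapp.azurewebsites.net/api/BeginOperation?duration=120"));
log.LogInformation($"Long-running operation finished: {result.StatusCode} {result.Content}");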

As you can see, it's relatively straightforward to implement an API that supports the polling pattern, so this is a great way to keep your orchestrations very simple even while calling potentially long-running operations.

Another nice feature of the Durable HTTP API is that you can use the Managed Identity of your Function App to automatically retrieve a bearer token to use in the requests. There's a nice example of that in this sample that calls an ARM endpoint to restart a VM in Azure.
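I won't reproduce the whole sample here, but the basic shape is something like this (the ARM URI is abbreviated, and you should check the exact resource string against the linked sample):

// sketch based on the linked sample: supplying a ManagedIdentityTokenSource asks
// Durable Functions to acquire a bearer token for the target resource automatically
var request = new DurableHttpRequest(
    HttpMethod.Post,
    new Uri("https://management.azure.com/subscriptions/<subscription-id>/..."),
    tokenSource: new ManagedIdentityTokenSource("https://management.core.windows.net"));
var response = await context.CallHttpAsync(request);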

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.


I'm pleased to announce that NAudio 1.10 is available on NuGet. This version doesn't contain too many features, but the most notable change is that it now adds support for .NET Core 3 (thanks to a contribution from jwosty). This allows access to ASIO, WaveIn and WaveOut in .NET Core 3.0 applications that are running on Windows.
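If you haven't used NAudio's WaveOut support before, a minimal playback loop looks something like this (the file name is just an example), and it now works in a .NET Core 3.0 console app on Windows:

// minimal playback sketch using WaveOutEvent
using var reader = new AudioFileReader("example.mp3");
using var outputDevice = new WaveOutEvent();
outputDevice.Init(reader);
outputDevice.Play();
// keep the app alive until playback finishes
while (outputDevice.PlaybackState == PlaybackState.Playing)
{
    Thread.Sleep(500);
}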

Here are the key changes in this version:

  • #574 Support ASIO and WaveIn/WaveOut in .NET Core 3.0
  • #441 recognise MP3 Xing headers with 'Info' magic
  • #584 Fixes to WasapiOut.GetPosition
  • #515 Switched from Fake to Cake build script

Part of the impetus for releasing this version of NAudio was that I heard that a well-known speaker recently complained at a conference that this particular PR hadn't been accepted yet. I can understand his frustration, but let me take a moment to share a few reasons why I sometimes take a long time to accept pull requests (and sometimes don't accept them at all):

  • NAudio is just a spare time project, and I have many other commitments on my time. In practice this means that I only occasionally have time to work through the PR backlog.
  • It's surprisingly common for PRs to contain bugs and often not even build. One thing that has really helped on this front is hooking up an Azure DevOps Pipeline to automatically build every PR. This gives me immediate visibility of whether a PR is ready for inspection or not. Unfortunately in this particular case, the PR didn't build on the CI server or on my local machine, and it required a few hours of experimenting to work out why.
  • The nature of NAudio means that lots of functionality cannot meaningfully be tested with unit tests. I need to run on different operating systems, bit depths, and with different sound cards, and actually listen to the audio being produced to be confident that things are still working. That means I need time for manual testing before a new release.
  • Whenever I accept a new feature into NAudio, end users expect me to support it. It's very rare for the original PR author to hang around providing help, and usually they don't create any documentation either. So I need to understand the code well enough to document and support the feature going forwards before completing the PR.

With that said, I am very grateful to everyone who has submitted PRs, bug reports, and answered questions on GitHub and Stack Overflow, and please accept my apologies if I have been slow to respond to your contribution or question.

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals, and Audio Programming with NAudio.


As you hopefully all know, it really is important that your website uses HTTPS. This used to involve a rather cumbersome process of buying and installing a custom SSL certificate. But for an Azure App Service hosted website with a custom domain, there are now a few free options available.

You can automate the process of getting a free certificate from Let's Encrypt, and there is also a very nice new preview feature of App Service that offers free auto-renewed SSL certificates (although unfortunately it doesn't support naked domains yet).

My favourite option at the moment, however, is to use Cloudflare, which is remarkably easy to migrate to, assuming you've already set up your domain name to point at your Azure Website. Instructions on how to do that are available here if you need them.

Cloudflare not only gives you a free SSL certificate, but keeps it auto-renewed, and also offers some added-value free services such as caching content to reduce the amount of traffic hitting your website.

I recently decided not to renew a paid-for SSL certificate for one of my Azure hosted sites and to switch it over to the Cloudflare free offering instead. I ran into a couple of minor gotchas along the way, so I thought I'd take the opportunity to write up what I learned.

The first step was straightforward - I onboarded the domain name at Cloudflare in the usual way, and it read my existing DNS entries, and gave me the new Cloudflare nameservers to configure for my custom domain.

However, things didn't go as smoothly as normal, with a nasty ERR_SSL_VERSION_OR_CIPHER_MISMATCH showing whenever I tried to access my site over HTTPS.

Gotcha 1 - verifying the domain name

My first attempt at fixing the issue was to go to my Azure Website and remove the binding to my old paid certificate. I then unbound the custom domain and attempted to rebind it to the website. However, I then ran into another issue - I couldn't verify the custom domain. The Azure Portal asks you to create a DNS entry that it uses to verify ownership of the domain. For example, it might ask you to create a CNAME with the value of mywebapp.azurewebsites.net.

It's quite simple to create the CNAME record in the Cloudflare control panel, but Azure App Service won't see it while the "proxy status" is set to "Proxied" (the default). You need to set it to "DNS only", and then your ownership can be verified. You can set it back to "Proxied" immediately afterwards if you want.

image

I was able to resolve this issue thanks to this helpful answer at Stack Overflow.

Gotcha 2 - re-issue Cloudflare universal certificate

I still hadn't fixed the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error, and that turned out to be because my Cloudflare universal certificate had not been issued properly for some reason. Normally in Cloudflare you will see something like this - a universal certificate listed for your domain name (skypevoicechanger.net in this example):

image

However, if you don't have any universal certificates showing here, then you need to go to this part of the Cloudflare control panel and select "Disable Universal SSL". Then after 10 minutes, re-enable it, and your universal certificate should get generated.

image

That got me up and running again and resolved the ERR_SSL_VERSION_OR_CIPHER_MISMATCH problem.

Getting to full (strict) encryption mode

The final step I wanted to take was to use the "Full (Strict)" encryption mode, rather than "Full".

image

Essentially what happens in "Full" mode is that traffic from Cloudflare to your host server is encrypted, but Cloudflare isn't too fussy about the certificate it receives from the host server. This works fine with App Service, because it does have an SSL certificate for mywebapp.azurewebsites.net, although obviously it doesn't have one for your custom domain.

To get the best possible security, you can ask Cloudflare to create a certificate for you (which has a long lifetime), and then install that manually on your Azure App Service Web App.

Creating the Cloudflare "Origin certificate" can be done from their control panel here:

image

By default this will create a certificate with a 15 year lifetime:

image

The certificate consists of two parts: the certificate itself (the "PEM") and the private key. You should copy these and save each one to a text file.

image

In theory we should now simply be able to install this certificate onto Azure App Service, but there is one final step we need to do, and that is to convert it to pfx format, because that's the format Azure App Service expects the certificate in.

Gotcha 3 - creating a pfx

You can convert a PEM certificate to pfx using openssl, although if you are on Windows, that's a bit of a pain, because it's not available out of the box. However, the Windows Subsystem for Linux is ideal for this.

I opened up a Windows Terminal WSL tab, and changed directories to the folder I'd saved my PEM and key files in:

cd /mnt/c/Users/markh/Documents/

Next, I issued the following openssl command to convert from PEM to pfx. openssl is available out of the box in WSL.

openssl pkcs12 -inkey mykey.key -in mycert.pem -export -out mycert.pfx

This will prompt you for an "export password" - enter something secure that you'll remember.
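Optionally, if you want to sanity-check the pfx before uploading it, openssl can read it back and summarise what's inside (it will prompt for the password you just set):

openssl pkcs12 -info -in mycert.pfx -noout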

Now we can install this certificate on our Azure Website. We go into "TLS/SSL certificates" and choose "Upload Certificate":

image

This lets us upload the pfx we just created. You'll need to enter the export password you chose when creating it.

image

Then in the "bindings" tab, we can add a binding for each domain name (I added both the naked domain and the www subdomain):

image

With these bindings in place, we can now go back to Cloudflare and switch over to Full (strict) mode:

image

And so now we have free SSL/TLS for our domain name with nothing additional to be done for the next 15 years (by which time hopefully the Azure App Service free certificates will be upgraded to support naked domains)!

By the way, while I was getting all this working, I found this very helpful tutorial that helped me through a few difficulties along the way. In it the author walks through many of the same steps, only using Win32OpenSSL instead of WSL.