
In this post we'll explore how we can use the Azure CLI to deploy an Azure Function App running on the "consumption plan" along with all the associated resources such as a Storage Account and an Application Insights instance.

I'll be using PowerShell as my shell, but most of these commands translate very straightforwardly to Bash if you prefer.

Step 1 - Create a Resource Group

As always with the Azure CLI, once we've logged in (with az login) and chosen the correct subscription (with az account set -s "MySub"), we should create a resource group to hold the various resources we're going to create.

$resourceGroup = "AzureFunctionsDemo"
$location = "westeurope"
az group create -n $resourceGroup -l $location

Step 2 - Create a Storage Account

A number of Azure Functions features rely on a Storage Account, so it's a good idea to create a dedicated Storage Account to partner with a Function App. Storage Account names must be globally unique, as the name forms part of the account's domain name, so I'm using a random number to help pick a suitable name, before creating the Storage Account with the Standard_LRS SKU.

$rand = Get-Random -Minimum 10000 -Maximum 99999
$storageAccountName = "funcsdemo$rand"

az storage account create `
  -n $storageAccountName `
  -l $location `
  -g $resourceGroup `
  --sku Standard_LRS

Step 3 - Create a Function App

Normally at this point we'd need to create an App Service Plan, but when we're using the consumption pricing tier there's a shortcut: if we set the --consumption-plan-location parameter when we create the Function App, a consumption App Service Plan (with a name like "WestEuropePlan") is automatically created for us in our resource group.

We're going to be using V2 of the Azure Functions Runtime, and so I'll specify that I'm using the dotnet runtime, but you can also set this to node or java.

$functionAppName = "funcs-demo-$rand"

az functionapp create `
  -n $functionAppName `
  --storage-account $storageAccountName `
  --consumption-plan-location $location `
  --runtime dotnet `
  -g $resourceGroup

Step 4 - Deploy our Function App Code

Obviously I'm assuming that we have some functions to deploy to the Function App. If we've created a C# Azure Functions project, we can package it up for release by running a dotnet publish, zipping up the resulting folder, and using az functionapp deployment source config-zip to deploy it.

# publish the code
dotnet publish -c Release
$publishFolder = "FunctionsDemo/bin/Release/netcoreapp2.1/publish"

# create the zip
$publishZip = "publish.zip"
if(Test-path $publishZip) {Remove-item $publishZip}
Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::CreateFromDirectory($publishFolder, $publishZip)

# deploy the zipped package
az functionapp deployment source config-zip `
 -g $resourceGroup -n $functionAppName --src $publishZip

Step 5 - Configure Application Insights

Azure Functions offers excellent monitoring via Application Insights, so it makes sense to turn this on for all deployments. Unfortunately, the Azure CLI currently does not support creating Application Insights directly, so we have to jump through a few hoops.

We'll use the az resource create command to create an App Insights instance, and since it's rather tricky to successfully pass correctly escaped JSON as a parameter in PowerShell, I'm creating a temporary JSON file:

$propsFile = "props.json"
'{"Application_Type":"web"}' | Out-File $propsFile
$appInsightsName = "funcsmsi$rand"
az resource create `
    -g $resourceGroup -n $appInsightsName `
    --resource-type "Microsoft.Insights/components" `
    --properties "@$propsFile"
Remove-Item $propsFile

Now that we've created the Application Insights instance, we need to get hold of its instrumentation key, which we can do with this command:

$appInsightsKey = az resource show -g $resourceGroup -n $appInsightsName `
    --resource-type "Microsoft.Insights/components" `
    --query "properties.InstrumentationKey" -o tsv

And finally, we set the instrumentation key as an application setting on our Function App with the az functionapp config appsettings set command:

az functionapp config appsettings set -n $functionAppName -g $resourceGroup `
    --settings "APPINSIGHTS_INSTRUMENTATIONKEY=$appInsightsKey"

Step 6 - Configure Application Settings

Optionally at this point, we may wish to configure some application settings, such as connection strings to other services. These can be configured with the same az functionapp config appsettings set command we just used (although watch out for some nasty escaping gotchas if your setting values contain certain characters).

az functionapp config appsettings set -n $functionAppName -g $resourceGroup `
    --settings "MySetting1=Hello" "MySetting2=World"

Step 7 - Configure a Daily Use Quota

Another optional feature you might want to consider is a daily usage quota. One of the great things about the serverless Azure Functions consumption plan is that it offers near-infinite scale to handle huge spikes in load. But that also leaves you open to a "denial of wallet" attack, where an external DoS attack or a coding mistake lands you with a huge bill because your function app scaled out to hundreds of instances. The daily quota lets you set a limit in terms of "gigabyte seconds" (GB-s), which you might want to do just to be on the safe side while you're experimenting. For a production system, I'd probably leave this quota off (or set it very high) and configure alerts instead to tell me when my usage is much higher than normal.

Here's the command that sets the daily usage quota to 50000 GB-s:

az functionapp update -g $resourceGroup -n $functionAppName `
    --set dailyMemoryTimeQuota=50000

Summary

The Azure CLI provides us with an easy way to deploy and manage our Azure Function apps. Of course, you can also create an ARM template that contains the same resources, and deploy that with the CLI. Personally I find the CLI great when I'm experimenting and prototyping, and when I've got an application that's a bit more stable and ready for production, I might create an ARM template to allow deploying the whole thing in one go.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions.


A year ago I rewrote this blog in ASP.NET Core. One of my goals was to transition to writing all my posts in Markdown, as I wanted to get away from relying on the obsolete Windows Live Writer and simply use VS Code for editing posts.

However, I needed to be able to store each blog post as a Markdown file, and for that I decided to use "YAML front matter" as a way to store metadata such as the post title and categories.

So the contents of a typical blog post file look something like this:

---
title: Welcome!
categories: [ASP.NET Core, C#]
---
Welcome to my new blog! I built it with:

- C#
- ASP.NET Core
- StackOverflow

Parsing YAML Front Matter with YamlDotNet

First of all, to parse the YAML front matter, I used the YamlDotNet NuGet package. It's a little bit fiddly, but you can use the Parser to find the front matter (it comes after a StreamStart and a DocumentStart), and then use an IDeserializer to deserialize the YAML into a suitable class with properties matching the YAML. In my case, the Post class supports many properties, including the post title, categories, publication date, and even a list of comments. For my blog I keep things simple and usually only set the title and categories (I use a file name convention to indicate the publication date).

using System.IO;
using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;
using YamlDotNet.Core;
using YamlDotNet.Core.Events;

// ...
var yamlDeserializer = new DeserializerBuilder()
                    .WithNamingConvention(new CamelCaseNamingConvention())
                    .Build();

var text = File.ReadAllText(blogPostMarkdownFile); // path to the post's .md file
using (var input = new StringReader(text))
{
    var parser = new Parser(input);
    // the front matter sits in its own YAML document at the start of the stream
    parser.Expect<StreamStart>();
    parser.Expect<DocumentStart>();
    var post = yamlDeserializer.Deserialize<Post>(parser);
    parser.Expect<DocumentEnd>();
    // post now contains the metadata (title, categories, etc.)
}
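
For reference, here's a minimal sketch of the kind of Post class the deserializer could populate. The two properties shown are just what the front matter example above needs; the real class has more on it (publication date, comments and so on):

using System.Collections.Generic;

// a minimal Post class matching the front matter shown earlier
// (illustrative only - the camel case naming convention maps
// "title" and "categories" onto these properties)
public class Post
{
    public string Title { get; set; }
    public List<string> Categories { get; set; }
}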

Rendering HTML with MarkDig

To convert the Markdown into HTML, I used the superb MarkDig library. This not only makes it super easy to convert basic Markdown to HTML, but also supports several useful extensions. The library author, Alexandre Mutel, is very responsive to pull requests, so I was able to contribute a couple of minor improvements myself to add some features I wanted.

I created a basic MarkdownRenderer class that renders Markdown using the settings I want for my blog. A couple of things are worth noting. First, you'll notice in CreateMarkdownPipeline that I've enabled a bunch of helpful extensions that are available out of the box. These gave me pretty much all the support I needed for things like syntax highlighting, tables and embedded YouTube videos. I'm telling it to expect YAML front matter, so I don't need to strip off the YAML before passing it to the renderer. I also needed to add a missing MIME type, so I've shown how that can be done, even though it's included in the latest version now. And the most hacky thing I had to do was ensure that generated tables have a specific class that my CSS styling relies on (there may well be an easier way to achieve this now).

Once the MarkdownPipeline has been constructed, we use a MarkdownParser in conjunction with an HtmlRenderer to parse the Markdown and then render it as HTML. One of the features I contributed to MarkDig was the ability to turn relative links into absolute ones. This is needed for my RSS feed, which has to use absolute links, while my posts just use relative ones.

Here's the code for my MarkdownRenderer which you can adapt for your own needs:

using System;
using System.IO;
using System.Linq;
using Markdig;
using Markdig.Syntax;
using Markdig.Renderers.Html;
using Markdig.Extensions.MediaLinks;
using Markdig.Parsers;
using Markdig.Renderers;

// ...

public class MarkdownRenderer
{
    private readonly MarkdownPipeline pipeline;
    public MarkdownRenderer()
    {
        pipeline = CreateMarkdownPipeline();
    }

    public string Render(string markdown, bool absolute)
    {
        var writer = new StringWriter();
        var renderer = new HtmlRenderer(writer);
        if(absolute) renderer.BaseUrl = new Uri("https://markheath.net");
        pipeline.Setup(renderer);

        var document = MarkdownParser.Parse(markdown, pipeline);
        renderer.Render(document);
        writer.Flush();

        return writer.ToString();
    }

    private static MarkdownPipeline CreateMarkdownPipeline()
    {
        var builder = new MarkdownPipelineBuilder()
            .UseYamlFrontMatter()
            .UseCustomContainers()
            .UseEmphasisExtras()
            .UseGridTables()
            .UseMediaLinks()
            .UsePipeTables()
            .UseGenericAttributes(); // Must be last as it is one parser that is modifying other parsers

        var me = builder.Extensions.OfType<MediaLinkExtension>().Single();
        me.Options.ExtensionToMimeType[".mp3"] = "audio/mpeg"; // was missing (should be in the latest version now though)
        builder.DocumentProcessed += document => {
            // make sure generated tables carry the class my CSS styling expects
            foreach (var node in document.Descendants())
            {
                if (node is Markdig.Extensions.Tables.Table)
                {
                    node.GetAttributes().AddClass("md-table");
                }
            }
        };
        return builder.Build();
    }
}
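
As a rough usage example (the file path here is purely illustrative), rendering a post body once for the site and again for the RSS feed might look like this:

var markdown = File.ReadAllText("posts/2018/welcome.md"); // hypothetical post file
var renderer = new MarkdownRenderer();

// relative links are fine for pages served from the site itself
var html = renderer.Render(markdown, absolute: false);

// the RSS feed needs absolute links back to markheath.net
var rssHtml = renderer.Render(markdown, absolute: true);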


Just over a year ago, a new .NET SDK for Azure Service Bus was released. This replaces the old WindowsAzure.ServiceBus NuGet package with the Microsoft.Azure.ServiceBus NuGet package.

You're not forced to change over to the new SDK if you don't want to. The old one still works just fine, and even continues to get updates. However, there are some benefits to switching over, so in this post I'll highlight the key differences and some potential gotchas to take into account if you do want to make the switch.

Benefits of the new SDK

First of all, why did we even need a new SDK? Well, the old one supported .NET 4.6 only, while the new one is .NET Standard 2.0 compatible, making it usable cross-platform in .NET Core applications. It's also open source, available at https://github.com/Azure/azure-service-bus-dotnet, meaning you can easily examine the code, submit issues and raise pull requests.

It has a plugin architecture, supporting custom plugins for things like message compression or attachments. There are a few useful plugins already available. We encrypt all our messages with Azure Key Vault before sending them to Service Bus, so I'm looking forward to using the plugin architecture to simplify that code.

On top of that, the API has generally been cleaned up and improved, and it's very much the future of the Azure Service Bus SDK.

Default transport type

One of the first gotchas I ran into was that there is a new default "transport type". The old SDK by default used what it called "NetMessaging", a proprietary Azure Service Bus protocol, even though the recommended option was the industry standard AMQP.

The new SDK, however, defaults to AMQP over port 5671. This was blocked by my work firewall, so I had to switch to the other option, AMQP over WebSockets, which uses port 443. If you need to configure this option, append ;TransportType=AmqpWebSockets to the end of your connection string.
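
Alternatively, if you'd rather not edit the connection string itself, the transport can be selected in code. Here's a sketch using ServiceBusConnectionStringBuilder from the new SDK (adapt the names to your own setup):

using Microsoft.Azure.ServiceBus;

// ...

// select AMQP over WebSockets (port 443) without touching the stored connection string
var builder = new ServiceBusConnectionStringBuilder(connectionString)
{
    TransportType = TransportType.AmqpWebSockets
};
var queueClient = new QueueClient(builder);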

One unfortunate side-effect of this switch from the NetMessaging protocol to AMQP is the performance of batching. I blogged a while back about the dramatic speed improvements available by sending and receiving messages in batches. Whilst sending batches of messages with AMQP seems to offer similar performance, when you receive batches with AMQP you may get back batches significantly smaller than the batch size you requested, which slows things down considerably. The explanation for this is here, and the issue can be mitigated somewhat by setting the MessageReceiver.PrefetchCount property to a suitably large value.

Here's some simple code you can use to check out the performance of batch sending and receiving with the new SDK. It also shows off the basic operation of the QueueClient and MessageReceiver classes, along with the ManagementClient which allows us to create and delete queues.

using System;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;
using Microsoft.Azure.ServiceBus.Management;

// ...

string connectionString = "<your connection string>"; // remember to add ;TransportType=AmqpWebSockets if port 5671 is blocked
const string queueName = "MarkHeathTestQueue";

// PART 1 - CREATE THE QUEUE
var managementClient = new ManagementClient(connectionString);

if (await managementClient.QueueExistsAsync(queueName))
{
    // ensure we start the test with an empty queue
    await managementClient.DeleteQueueAsync(queueName);
}
await managementClient.CreateQueueAsync(queueName);

// PART 2 - SEND A BATCH OF MESSAGES
const int messages = 1000;
var stopwatch = new Stopwatch();

var client = new QueueClient(connectionString, queueName);

stopwatch.Start();

await client.SendAsync(Enumerable.Range(0, messages).Select(n =>
{
    var body = $"Hello World, this is message {n}";
    var message = new Message(Encoding.UTF8.GetBytes(body));
    message.UserProperties["From"] = "Mark Heath";
    return message;
}).ToList());

Console.WriteLine($"{stopwatch.ElapsedMilliseconds}ms to send {messages} messages");
stopwatch.Reset();

// PART 3 - RECEIVE MESSAGES
stopwatch.Start();
int received = 0;
var receiver = new MessageReceiver(connectionString, queueName);
receiver.PrefetchCount = 1000; // https://github.com/Azure/azure-service-bus-dotnet/issues/441
while (received < messages)
{
    // unlike the old SDK which picked up the whole thing in 1 batch, this will typically pick up batches in the size range 50-200
    var rx = (await receiver.ReceiveAsync(messages, TimeSpan.FromSeconds(5)))?.ToList();
    if (rx?.Count > 0)
    {
        Console.WriteLine($"Received a batch of {rx.Count}");
        // complete a batch of messages using their lock tokens
        await receiver.CompleteAsync(rx.Select(m => m.SystemProperties.LockToken));
        received += rx.Count;
    }
}

Console.WriteLine($"{stopwatch.ElapsedMilliseconds}ms to receive {received} messages");

Management Client

Another change in the new SDK is that instead of the old NamespaceManager, we have ManagementClient. Many of the method names are the same or very similar, so it isn't too hard to port code over.

One gotcha I ran into is that DeleteQueueAsync (and the equivalent topic and subscription methods) now throw MessagingEntityNotFoundException if you try to delete something that doesn't exist.
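
So if you're porting code that deleted entities unconditionally, you'll either want to check for existence first or catch the exception. Something along these lines (a sketch, reusing the managementClient and queueName from the sample above):

// option 1: guard the delete
if (await managementClient.QueueExistsAsync(queueName))
{
    await managementClient.DeleteQueueAsync(queueName);
}

// option 2: swallow the not-found exception
try
{
    await managementClient.DeleteQueueAsync(queueName);
}
catch (MessagingEntityNotFoundException)
{
    // the queue wasn't there, so there's nothing to delete
}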

BrokeredMessage replaced by Message

The old SDK used a class called BrokeredMessage to represent a message, whereas now it's just Message.

There's been a bit of reorganization: things like DeliveryCount and LockToken are now found in Message.SystemProperties, and custom message metadata is stored in UserProperties instead of Properties. Also, instead of providing the message body as a Stream, it is now a byte[], which makes more sense.
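
To make that concrete, here's a small sketch showing the new shape of things (it reuses the client and receiver from the batching sample above; the metadata key is just an example):

// sending: the body is a byte[], and custom metadata goes in UserProperties
var message = new Message(Encoding.UTF8.GetBytes("Hello World"));
message.UserProperties["From"] = "Mark Heath";
await client.SendAsync(message);

// receiving: broker-managed values have moved to SystemProperties
var incoming = await receiver.ReceiveAsync();
Console.WriteLine(incoming.SystemProperties.DeliveryCount);
Console.WriteLine(incoming.SystemProperties.LockToken);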

Another significant change is that BrokeredMessage used to have convenience methods like CompleteAsync, AbandonAsync, RenewLockAsync and DeadLetterAsync. You now need to use the ClientEntity (such as a QueueClient or MessageReceiver) to perform these actions (with the exception of RenewLockAsync, which is discussed shortly).

ClientEntity changes

The new SDK retains the concept of a base ClientEntity which has derived classes such as QueueClient, TopicClient, SubscriptionClient etc. It's here that you'll find the CompleteAsync, AbandonAsync, and DeadLetterAsync methods, but one conspicuous by its absence is RenewLockAsync.

This means that if you're using QueueClient.RegisterMessageHandler (previously called QueueClient.OnMessage) or similar to handle messages, you don't have a way of renewing the lock for longer than the MaxAutoRenewDuration specified in MessageHandlerOptions (which used to be called OnMessageOptions.AutoRenewTimeout). I know that's a little bit of an edge case, but we were relying on being able to call BrokeredMessage.RenewLockAsync in a few places to extend the timeout further. With the new SDK, the ability to renew a lock is only available if you are using MessageReceiver, which has a RenewLockAsync method.

A few other minor changes required a bit of code reorganization. The old Close methods are now CloseAsync, which makes it trickier to use the Dispose pattern. There is no longer a ClientEntity.Abort method - presumably you now just call CloseAsync to shut down the message handling pump. And when you create MessageHandlerOptions you are required to provide an exception handler.
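
To illustrate those last few points, here's a rough sketch of registering a message handler with the new SDK (the settings and handler body are just examples, reusing the connectionString and queueName from earlier):

var queueClient = new QueueClient(connectionString, queueName);

var options = new MessageHandlerOptions(args =>
{
    // an exception handler is now mandatory
    Console.WriteLine($"Message handler error: {args.Exception.Message}");
    return Task.CompletedTask;
})
{
    MaxConcurrentCalls = 4,
    AutoComplete = false,
    MaxAutoRenewDuration = TimeSpan.FromMinutes(5) // replaces OnMessageOptions.AutoRenewTimeout
};

queueClient.RegisterMessageHandler(async (message, cancellationToken) =>
{
    Console.WriteLine(Encoding.UTF8.GetString(message.Body));
    // complete via the client, using the lock token from SystemProperties
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);
}, options);

// when shutting down, close the client (there's no Abort any more)
await queueClient.CloseAsync();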

Summary

The new Azure Service Bus SDK offers lots of improvements over the old one, and the transition isn't too difficult, but there are a few gotchas to be aware of and I've highlighted some of the ones that I ran into. Hopefully this will be of use to you if you're planning to upgrade.