
Serverless architectures offer many benefits, including not having to manage and maintain servers, the ability to move quickly and rapidly create prototypes, automatic scaling to meet demand, and a pricing model where you pay only for what you use.

But serverless doesn’t dictate what sort of database you should choose. So in this post, I want to consider the database offerings available in Azure, and how well (or not) they might fit into a serverless architecture.

Azure offers us SQL Database (which is essentially SQL Server in the cloud), CosmosDb (which is their NoSQL offering), and Table Storage, which is a very simplistic key-value data store. Azure also now offers managed PostgreSQL and MySQL databases, but I’ll mostly focus on the first three choices.

No More Server Management

The first thing to point out is that all of these are “serverless” databases in the sense that you are not maintaining the servers yourself. You have no access to the VMs they run on, and you don’t need to install the OS or database. You don’t need to manage disks to make sure they don’t run out of space. All of that low-level stuff is hidden from you, and these databases are effectively Platform as a Service offerings.

Automatic Scaling

Another classic serverless benefit, automatic scaling, also comes effectively baked into these offerings. Yes, you may have to pay more for higher numbers of “DTUs” (for SQL Database) or “RUs” (for CosmosDb), but you don’t need to worry about how this is achieved under the hood. Whether Azure is using one super powerful server or many smaller servers operating in a cluster is completely transparent to the end user.

Consumption Based Pricing

One serverless selling point that most of these databases don’t offer is consumption-based pricing. Both SQL Database and CosmosDb expect you to reserve a certain number of “units” of processing power, which you pay for whether or not you use them. SQL Database starts at a very reasonable £4 a month for the smallest database with 5 DTUs and 2GB, whereas CosmosDb requires us to spend at least £17 a month for the minimum of 400 RUs. So although we’re not talking huge amounts, it is something you’d need to factor in, especially if you wanted to make use of many databases for different staging and testing environments.

The exception to this rule is Azure Table Storage. Table Storage charges us for how much data we store (around £0.05 per GB) and for how many transactions we make (£0.0003 per 10,000 transactions). So for an experimental prototype where we have a few thousand rows and a few thousand transactions per month, we are talking a very small amount. Of course, the downside of Table Storage is that it is extremely primitive – don’t expect any powerful searching capabilities.
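As a rough back-of-the-envelope illustration using the prices above (the usage figures here are purely hypothetical: say 1GB of stored data and 10,000 transactions in a month):

$$1\ \text{GB} \times £0.05/\text{GB} \;+\; 10{,}000\ \text{transactions} \times £0.0003/10{,}000 \;\approx\; £0.05\ \text{per month}$$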

Rapid Prototyping

Another reason why serverless is popular is that it promotes rapid prototyping. With the aid of a Functions as a Service offering like Azure Functions, you can get something up and running very quickly. This means that schemaless databases have the upper hand over traditional relational databases, as they give you the flexibility and freedom to rapidly evolve the schema of your data without the need to run “migrations”.

So CosmosDb and Table Storage might be a more natural fit than SQL when prototyping with serverless. It is also easier to integrate them with Azure Functions, as there are built-in bindings (although I suspect SQL Database bindings aren’t too far behind).
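For example, here’s a minimal sketch of what a Table Storage output binding might look like in a precompiled C# Azure Function. The entity type, table name and route are hypothetical, not taken from a real project:

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.WindowsAzure.Storage.Table;

public static class SaveNote
{
    // hypothetical entity - Table Storage rows just need a PartitionKey and RowKey
    public class NoteEntity : TableEntity
    {
        public string Text { get; set; }
    }

    [FunctionName("SaveNote")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "notes/{id}")] HttpRequestMessage req,
        string id,
        [Table("Notes")] out NoteEntity note)
    {
        // the output binding writes a single row to the "Notes" table for us
        note = new NoteEntity
        {
            PartitionKey = "notes",
            RowKey = id,
            Text = req.Content.ReadAsStringAsync().Result
        };
        return req.CreateResponse(HttpStatusCode.Created);
    }
}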

Direct Client Access

Finally, many serverless advocates talk about allowing the client application to talk directly to the database. This is quite scary from a security perspective – we’d want to be very sure that the security model is tightly locked down so a user can only see and modify data that they own.

Ideally, in a serverless architecture the database should also be able to send push notifications directly to the client, alerting them to new rows in tables or modifications of data from other users. I believe this is something that Google Firebase can offer.

However, as far as I’m aware, none of these Azure database offerings are particularly designed to be used in this way. So for now, you’d probably use something like Azure Functions as the gateway to the data, instead of letting your clients directly reach into the database.

Which Should I Use?

So we haven’t really got much closer to deciding which of the Azure database offerings you should use in a serverless application. And that’s because, as so often in software development, the answer is “it depends”. It depends on what your priorities are: do you need powerful querying? Do you need to easily evolve your schema? Is cost the all-important consideration? Do you need advanced features like encryption at rest or point-in-time restore?

Here’s a simple table I put together for a serverless app I was working on recently, to help me decide between three database options. As you can see, there was no obvious winner. I opted to start with Table Storage, with a plan to migrate to one of the others if the project proved to be a success and needed more powerful querying capabilities.

|                             | SQL Database        | CosmosDb        | Table Storage     |
|-----------------------------|---------------------|-----------------|-------------------|
| Querying                    | *** (very powerful) | ** (good)       | * (poor)          |
| Schema Changes              | * (difficult)       | *** (trivial)   | ** (easy)         |
| Cost                        | ** (reasonable)     | ** (reasonable) | *** (super cheap) |
| Azure Functions Integration | * (not built in)    | ** (included)   | ** (included)     |
| Tooling                     | *** (excellent)     | ** (OK)         | ** (OK)           |

Let me know in the comments what database you’re using for your serverless applications, and whether it was a good choice.


First off, full disclosure – I create courses for Pluralsight, so I stand to benefit if you become a customer. But since they are currently running a special promotion, I thought it might be a good time to explain why I am very glad to recommend them to you.

 

1) Seeing is a great way to learn

There are lots of ways to increase your programming skills, including books, blogs, attending conferences and hackathons, but one of the great things with Pluralsight courses is that you get to actually watch while someone uses the technology they are teaching. Sometimes with programming, seeing the finished code isn’t enough – we need to see the route taken to get there, so watching someone working step by step through a demo, explaining what they are doing as they go along is invaluable. Subscribers also get to download the source code for all demos, so you can always try to code it yourself while you watch, which I can recommend as a very effective way to cement what you are learning.

2) The catalogue is huge and diverse

Pluralsight has been around for a long time now, and multiple new courses are released most days. It means that almost whatever technology you want to learn, there will be something there for you. Whether you’re interested in the very latest SPA framework, or even if you’re stuck in legacy land working with something like WinForms, you’ll find there’s a choice of courses available.

3) Learn from recognized industry experts

One of the things that attracted me to Pluralsight in the first place was the quality of authors they had on board: renowned conference speakers like Scott Allen, John Papa, and Dan Wahlin, as well as experts in their specialist domains like Troy Hunt on security and Julie Lerman on Entity Framework. It’s also a platform on which you discover some fantastic teachers you may not have come across before – a few whose courses I’ve particularly enjoyed recently are Elton Stoneman, Barry Luijbregts, Kevin Dockx, and Wes Higbee.

4) Paying will motivate you

In these days when so much technical content is available for free, it can be very tempting to decide that you can get by without paying anything. And to a certain extent that is possible. But paying for a subscription not only means you get a certain quality bar (which you don’t get when randomly searching for YouTube or blog tutorials); the very fact that you are paying motivates you to learn in a way that free content doesn’t. It’s a bit like paying for piano lessons or gym membership – it adds that little bit of extra incentive to actually do something and get some return on your investment.

5) Learning is fun, and good for your career

Finally, who wouldn’t want to improve their skills and learn new stuff? We have the privilege of working in an extremely interesting and fast-paced industry. By watching even just one course a month, you’ll pick up all kinds of helpful new techniques, which will improve you as a developer, making you more effective in your current role as well as more hireable if you are looking to move on.

So that’s my five reasons. Why not sign up and give it a try while the special offer is still on?


Like many .NET developers I’ve been keeping an eye on .NET Standard, but so far haven’t had much cause to use it for my own projects. My NAudio open source library is heavily dependent on lots of Windows desktop APIs, so there isn’t much incentive to port it to .NET Standard. However, another of my open source audio projects, NLayer, a fully managed MP3 decoder, is an ideal candidate. If I could create a .NET Standard version of it, it would allow it to be used on .NET Core, UWP, Xamarin and Mono platforms.

The first step was to move to VS 2017 and replace the NLayer csproj file with one that would build as a .NET Standard package. The new csproj file format is delightfully simple, as it no longer requires us to specify each source file individually. I went for .NET Standard 1.3 and told it to auto-create NuGet packages for me, another nice capability of VS 2017:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.3</TargetFramework>
    <GeneratePackageOnBuild>True</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>

Then I attempted to compile. There were a few minor errors to be fixed. Thread.Sleep wasn’t available – I switched to Task.Delay instead. And Stream.Close needed to be replaced with Stream.Dispose. But those changes aside, it was relatively painless.
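For illustration, here’s a hedged sketch of the kind of substitution involved (the class, method and variable names here are hypothetical, not taken from the NLayer source):

using System.IO;
using System.Threading.Tasks;

static class NetStandardSubstitutions
{
    // hypothetical example only: .NET Standard 1.3 has neither Stream.Close nor Thread.Sleep
    static void CloseAndWait(Stream mp3Stream)
    {
        mp3Stream.Dispose();        // was: mp3Stream.Close();
        Task.Delay(500).Wait();     // was: Thread.Sleep(500);
    }
}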

Next I wanted to use the new .NET Standard version of NLayer within my NLayer.NAudioSupport project. This project references both NLayer and NAudio, and is itself a .NET 3.5 library. Unfortunately, when I tried to build I was told that a .NET 3.5 project cannot reference a .NET Standard 1.3 library. Now because NAudio is .NET 3.5, it wasn’t an option to convert NLayer.NAudioSupport to .NET Standard, so I needed another solution.

I consulted the compatibility matrix which made it clear that I needed to be on at least .NET 4.6 to be able to reference a .NET Standard 1.3 project. So I changed NLayer.NAudioSupport to target .NET 4.6 and sure enough, everything compiled and worked.

However, it seemed a shame that now I was forcing a major .NET version upgrade on all users of NLayer.NAudioSupport. NAudio is used a lot by companies who lag a long way behind the latest versions of .NET. So is there any way to keep support for .NET 3.5 for those who want it, in addition to supporting .NET Standard 1.3?

Well, we can multi-target .NET frameworks. This is very easily done in the new csproj syntax. Instead of a TargetFramework node, we use a TargetFrameworks node, with a semi-colon separated list of frameworks. So I just added net35.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard1.3;net35</TargetFrameworks>
    <GeneratePackageOnBuild>True</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>

Now when we build, it creates two assemblies – a .NET Standard library and a .NET 3.5 one. And the auto-generated NuGet package contains them both. But what happens if the same code won’t compile for both targets? In our case, for the .NET 3.5 build we needed to revert to Thread.Sleep.

We can do this by taking advantage of conditional compilation symbols, which will be NETSTANDARD1_3 or NET35 in our case. This allows me to use the API available on the target platform:

#if NET35 
    System.Threading.Thread.Sleep(500);
#else
    System.Threading.Tasks.Task.Delay(500).Wait();
#endif

And with that, I now have a NuGet package containing versions of NLayer that can be used on a huge range of .NET platforms. If you’re the maintainer of an open source library, and you’ve been ignoring .NET Standard so far, maybe it’s time for another look.

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals, and Audio Programming with NAudio.

In serverless architectures, it’s quite common to use a file storage service like Azure Blob Storage or Amazon S3 instead of a traditional web server. One of the main attractions of this approach is that it can work out a lot cheaper, as you pay only for how much data you store and transfer, and there are no fixed monthly fees to pay.

To get started with this approach, we need to create a storage account, copy our static web content into a container, and make sure that container is marked as public.

In this post, I’ll show how that can be done with a mixture of PowerShell and the AzCopy utility.

The first task is to create ourselves a storage account:

# Step 1 - get connected and pick the subscription we are working with
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "MySubscription"

# Step 2 - create a resource group in our preferred location
$resourceGroupName = "MyResourceGroup"
$location = "northeurope"

New-AzureRmResourceGroup -Name $resourceGroupName -Location $location

# Step 3 - create a storage account and put it into our resource group
$storageAccountName = "mytempstorage" # has to be unique
New-AzureRmStorageAccount -ResourceGroupName $resourceGroupName -AccountName $storageAccountName -Location $location -Type "Standard_ZRS"

# Step 4 - get hold of the storage key, we'll need that to call AzCopy
$storageKeys = Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName
$key = $storageKeys.value[0]

Now, we want to copy our static web content into a container in our storage account. There are PowerShell commands that will let us do this file by file, but a super easy way is to use the AzCopy utility, which you need to download and install first.

Next we need to specify the source folder containing our static web content, the destination address in blob storage, and the access key for writing to that container. We need some flags as well – /S to recurse through folders, /Y to confirm we do want to overwrite, and /SetContentType to make sure the MIME types of our HTML, JavaScript and CSS are set to sensible values instead of just application/octet-stream.

$azCopy = "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe"
$websiteFolder = "D:\Code\MyApp\wwwroot\"
$containerName = "web"

. $azCopy /Source:$websiteFolder /Dest:https://$storageAccountName.blob.core.windows.net/$containerName/ /DestKey:$key /S /Y /SetContentType

You might think we’re done, but we do need to ensure our container is set to “blob” mode so that its blobs are publicly accessible without the need for SAS tokens. We can do this with Set-AzureStorageContainerAcl, but that command works on the “current” storage account, so first we need to call Set-AzureRmCurrentStorageAccount to specify what the current storage account is.

Set-AzureRmCurrentStorageAccount -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName
Set-AzureStorageContainerAcl -Name $containerName -Permission Blob

Now we launch our website, and we should see it running in the browser, downloading its assets directly from our blob storage container:

Start-Process -FilePath "https://$storageAccountName.blob.core.windows.net/$containerName/index.html"

The next step you’d probably want to take is to configure a custom domain to point to this container. Unfortunately, Azure Blob Storage doesn’t directly support us doing this (at least not if we want to use HTTPS), but there are a couple of workarounds. One is to use Azure Functions Proxies, and the other is to use Azure CDN. Both will add a small additional cost, but it’s still a serverless “pay only for what you use” pricing model, so it should still work out more cost-effective than hosting with a traditional web server.

Hopefully this tutorial gives you a way to get started automating the upload of your SPA to blob storage. There are plenty of alternative ways of achieving the same thing, but you may find this to be a quick and easy way to get started with blob storage hosting of your static web content.

When you use queues, messages are read off in the order they are placed into the queue. This means that if there are 1000 messages in your queue, and now you want to send another message that is top priority, there’s no easy way to force it to the front of the queue.

The solution to this problem is to use “priority queues”. This allows high priority messages to get serviced immediately, irrespective of how many low priority messages are waiting.

There are a few different options for how to implement priority queues in Azure Service Bus. We can choose how we partition the messages into priorities, either by using multiple queues, or by using multiple subscriptions on a topic. We can also choose how we read from the queues – either with multiple concurrent listeners, or with a round-robin technique.

Sending Technique 1: Multiple Queues

A very simple way to achieve priority queues is to have two (or more) queues. One queue is for high priority messages, and the other for low priority. Whenever you send a message, you pick which queue to send it to. So this technique assumes that the code sending the message knows whether it should be high priority or not, and also knows how many priority queues there are.

In this simple code sample, we send three messages to the low priority queue and two to the high. We need two queue clients and to know the names of both queues to achieve this:

var clientHigh = QueueClient.CreateFromConnectionString(connectionString, "HighPriorityQueue");
var clientLow = QueueClient.CreateFromConnectionString(connectionString, "LowPriorityQueue");
clientLow.Send(new BrokeredMessage("Low 1"));
clientLow.Send(new BrokeredMessage("Low 2"));
clientLow.Send(new BrokeredMessage("Low 3"));
clientHigh.Send(new BrokeredMessage("High 1"));
clientHigh.Send(new BrokeredMessage("High 2"));

Sending Technique 2: One Topic with Multiple Subscriptions

An alternative approach is to make use of Azure Service Bus topics and subscriptions. With this approach, the messages are all sent to the same topic. But a piece of metadata is included with the message that can be used to partition the messages into high and low priorities.

So in this case we need a bit more setup. We’ll need a method to send messages with a Priority property attached:

void SendMessage(string body, TopicClient client, int priority)
{
    var message = new BrokeredMessage(body);
    message.Properties["Priority"] = priority;    
    client.Send(message);
}

And this allows us to send messages with priorities attached. We’ll send a few with priority 1, and a couple with priority 10:

var topicClient = TopicClient.CreateFromConnectionString(connectionString, "MyTopic");

SendMessage("Low 1", topicClient, 1);
SendMessage("Low 2", topicClient, 1);
SendMessage("Low 3", topicClient, 1);
SendMessage("High 1", topicClient, 10);
SendMessage("High 2", topicClient, 10);

But for this to work, we also need to have pre-created some subscriptions that are set up to filter based on the Priority property. Here’s a helper method to ensure a subscription exists and has a single rule set on it:

SubscriptionClient CreateFilteredSub(string topicName, string subscriptionName, RuleDescription rule)
{
    if (!namespaceManager.SubscriptionExists(topicName, subscriptionName))
    {
        namespaceManager.CreateSubscription(topicName, subscriptionName);
    }
    var rules = namespaceManager.GetRules(topicName, subscriptionName);
    var subClient = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName);
    foreach (var ruleName in rules.Select(r => r.Name))
    {
        subClient.RemoveRule(ruleName);
    }
    subClient.AddRule(rule);
    return subClient;
}

Now we can use this method to create our two filtered subscriptions, one for messages whose priority is >= 5, and one for those whose priority is < 5:

var subHigh = CreateFilteredSub("MyTopic", "HighPrioritySub", new RuleDescription("High", new SqlFilter("Priority >= 5")));
var subLow = CreateFilteredSub("MyTopic", "LowPrioritySub", new RuleDescription("Low", new SqlFilter("Priority < 5")));

Note that you must take care that your filters result in every message going to one or the other of the subscriptions. If you weren’t careful with your filter clauses, it would be possible to lose messages or to process them twice.

So this technique is more work to set up, but it removes from the sending code any knowledge of how many priority queues there are. You could partition the priorities into more subscriptions, or base the rules on different message metadata, without necessarily having to change the code that sends the messages.

Receiving Technique 1: Simultaneous Listeners

We’ve seen how to partition our messages into high and low priority queues or subscriptions, but how do we go about receiving those messages and processing them?

Well, the easiest approach by far is simply to listen on both queues (or subscriptions) simultaneously. For example, one thread listens on the high priority queue and works through that, while another thread listens on the low priority queue. You could assign more threads, or even separate machines, to service each queue, using the “competing consumers” pattern.

The advantage of this approach is that conceptually it’s very simple. The disadvantage is that if both the high and low priority queues are full, we’ll be doing some high and some low priority work at the same time. That might be fine, but if the low priority work could introduce database contention, you might prefer all high priority messages to be handled before any low priority work is done.
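Here’s a minimal sketch (not from the original post) of what simultaneous listeners could look like, using the same Microsoft.ServiceBus.Messaging types as the other samples; HandleMessage is assumed to be the same processing method used in the round-robin example below:

void ListenOnBothQueues(QueueClient clientHigh, QueueClient clientLow)
{
    // each OnMessage call starts its own message pump, so high priority messages keep
    // being picked up even while low priority messages are still being processed
    var options = new OnMessageOptions { AutoComplete = true, MaxConcurrentCalls = 4 };
    clientHigh.OnMessage(message => HandleMessage(message), options);
    clientLow.OnMessage(message => HandleMessage(message), options);
}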

Receiving Technique 2: Round Robin Listening

So the second technique is simply to check the high priority queue for messages, and if there are any, process them. Once the high priority queue is empty, check the low priority queue for a message and process it. Then go back and check the high priority queue again.

Here’s a very simplistic implementation for two QueueClients (but it would be exactly the same for two SubscriptionClients if you were using a topic):

void PollForMessages(QueueClient clientHigh, QueueClient clientLow)
{
    bool gotAMessage = false;
    do
    {
        var priorityMessage = clientHigh.Receive(gotAMessage ? TimeSpan.FromSeconds(1) : TimeSpan.FromSeconds(30));
        if (priorityMessage != null)
        {
            gotAMessage = true;
            HandleMessage(priorityMessage);
        }
        else
        {
            var secondaryMessage = clientLow.Receive(TimeSpan.FromSeconds(1));
            if (secondaryMessage == null)
            {
                gotAMessage = false;
            }
            else
            {
                HandleMessage(secondaryMessage);
                gotAMessage = true;
            }
        }
    } while (true);
}

The only complex thing here is that I’m trying to change the timeouts on calls to Receive to avoid spending too much money if both queues are empty for prolonged periods. With Azure Service Bus you pay (a very small amount) for every call you make, so checking both queues every second might get expensive.

No doubt this algorithm could be improved on, and it would need to be fine-tuned for the specific needs of your application, but it does show that it’s not too hard to set up listeners that guarantee to process all available high priority messages before they begin working on the low priority messages.