
Following on from my post about what Bicep is, I wanted to provide an example of a Bicep template, and an Azure Function app seems like a good choice, as it requires us to create several resources. There's the Function App itself, but also we need a Storage Account, an App Service Plan, and an Application Insights instance.

There's a helpful example here in the official docs, but I wanted to build my own to learn more about the syntax, and to try to produce a minimal template that was customized for my needs.

Parameters and Variables

A Bicep file can have parameters, and those parameters can be given default values. For my template I wanted the user to supply an "app name" which would be the name used for the Function App. The location is also a parameter, but it defaults to the location of the resource group.

But you can also create variables based on the values of the parameters. In this example, I'm using the appName variable to generate names for the Storage Account, Hosting Plan and App Insights instance. You can see that I've used the uniqueString method to get unique names for various resources and also the substring method to keep the Storage Account name within the valid limits.

param appName string
param location string = resourceGroup().location

// storage accounts must be between 3 and 24 characters in length and use numbers and lower-case letters only
var storageAccountName = '${substring(appName,0,10)}${uniqueString(resourceGroup().id)}' 
var hostingPlanName = '${appName}${uniqueString(resourceGroup().id)}'
var appInsightsName = '${appName}${uniqueString(resourceGroup().id)}'
var functionAppName = appName
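As a rough illustration of the naming logic above (not Bicep's actual hash algorithm), here's a Python sketch showing why the `substring` call keeps the Storage Account name within its limits: `uniqueString` returns a deterministic 13-character string, so 10 characters of app name plus the suffix always fits inside the 24-character maximum.

```python
import hashlib

def unique_string(seed: str, length: int = 13) -> str:
    """Stand-in for Bicep's uniqueString(): deterministic for a given seed.
    (Bicep uses a different hash internally; this is just for illustration.)"""
    return hashlib.sha256(seed.encode()).hexdigest()[:length]

app_name = "myfunctionapp"
resource_group_id = "/subscriptions/xxx/resourceGroups/my-rg"

# mirrors: var storageAccountName = '${substring(appName,0,10)}${uniqueString(resourceGroup().id)}'
storage_account_name = app_name[:10] + unique_string(resource_group_id)

print(storage_account_name, len(storage_account_name))
assert 3 <= len(storage_account_name) <= 24  # storage account name limits
```

The suffix is deterministic for a given resource group, so redeploying the same template to the same resource group always produces the same names.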

Storage Account

Next, we specify the Storage Account resource. We can give it an identifier (storageAccount) to refer to it later in the template, and the Visual Studio Code extension for Bicep gives us autocomplete to greatly simplify defining this resource. Beyond name and location, which are set from our variables, the only other things to set up are the Storage Account kind and the pricing tier (known as "sku" in these templates), and again the VS Code autocomplete points us in the right direction for these.

resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
    tier: 'Standard'
  }
}

App Insights

Application Insights has always been a bit of a pain to automate the creation of, and it seems that although Bicep offers some help, this was the bit I struggled with the most.

We need to say that the kind is web for Function Apps, and I discovered after some failed attempts that you do need to provide the properties section for this to work. I simply copied the contents of properties from another template I found, but wasn't sure what to put for WorkspaceResourceId so I just left that out. That's exactly the kind of guesswork I was talking about in my previous post that I'd like to see eliminated. It should be straightforward to discover which properties are and aren't needed for each Azure resource type, what their values mean, and what the defaults are if you leave them out.

Another annoyance is that the Azure Portal likes there to be a special tag on the App Insights resource which points to the Function App it is linked to. This is a bit of a pain to set up and hopefully the Portal can be improved in the future to deduce these connections without needing you to set them up explicitly.

resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: appInsightsName
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
  }
  tags: {
    // circular dependency means we can't reference functionApp directly
    // tag format: 'hidden-link:/subscriptions/<subscriptionId>/resourceGroups/<rg-name>/providers/Microsoft.Web/sites/<appName>'
    'hidden-link:/subscriptions/${subscription().subscriptionId}/resourceGroups/${resourceGroup().name}/providers/Microsoft.Web/sites/${functionAppName}': 'Resource'
  }
}

App Service Plan

For the App Service Plan, we want to use the consumption plan, and this highlights the mismatch between the names that ARM gives things, and the names they are called in the official documentation. An App Service Plan is a "Server Farm" in ARM template speak, and the consumption plan is called "Dynamic" with the name "Y1".

resource hostingPlan 'Microsoft.Web/serverfarms@2020-06-01' = {
  name: hostingPlanName
  location: location
  sku: {
    name: 'Y1'
    tier: 'Dynamic'
  }
}

Function App

Finally we have the Function App itself, and this one is perhaps the most complex. Again I borrowed some of this from another sample I found, and this means that I have included a few settings (e.g. httpsOnly and clientAffinityEnabled) that probably aren't strictly necessary for a minimal template. The serverFarmId shows a good example of how easy it is to refer to another Bicep resource (the hostingPlan).

Another thing we need for a Function App is a few "app settings". We should provide the FUNCTIONS_EXTENSION_VERSION and the FUNCTIONS_WORKER_RUNTIME to match the code for our Function App. We also need to link the Function App to the App Insights instance with APPINSIGHTS_INSTRUMENTATIONKEY, which Bicep made really easy - I loved the fact that there was intellisense helping me to pick out this property.

There are two settings (AzureWebJobsStorage and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING) that should contain the connection string of the Storage Account. The fact that we have to go to some lengths to construct this out of various pieces is a bit of an annoyance that I complained about in my previous post. Hopefully this will be simplified in future.

However, I did want to highlight a setting I am not including in the template. Function Apps usually have a WEBSITE_CONTENTSHARE app setting, but the documentation explicitly tells you not to include this in an ARM template, because it will get auto-generated. However, several other examples I saw for Bicep or ARM Azure Function App templates did include this setting, and I presume that is because those templates were generated based on an existing Azure resource. This tends to result in verbose templates that specify more than they need to (or should, in this case). For that reason I think it would be nice if VS Code offered autocomplete of minimal Bicep templates for each resource type, making it easier to see what you need to provide.

I also haven't set WEBSITE_RUN_FROM_PACKAGE to 1 which is what you would do if you want to use the "run from package" mode of publishing Function Apps (which is a good choice) and were planning to deploy the zips with az functionapp deployment source config-zip. However, if you are going to use func azure functionapp publish that will take care of this app setting for you.
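For reference, if you did plan to deploy zips with az functionapp deployment source config-zip, the extra entry you would add to the appSettings array in the template below is just this fragment:

```bicep
{
  name: 'WEBSITE_RUN_FROM_PACKAGE'
  value: '1'
}
```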

The final thing I want to point out is the ability to configure dependencies with dependsOn. This is an ARM capability that allows us to ensure things get created in the right order (and also supports concurrent creation of resources that don't have dependencies on each other).

resource functionApp 'Microsoft.Web/sites@2020-06-01' = {
  name: functionAppName
  location: location
  kind: 'functionapp'
  properties: {
    httpsOnly: true
    serverFarmId: hostingPlan.id
    clientAffinityEnabled: true
    siteConfig: {
      appSettings: [
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: appInsights.properties.InstrumentationKey
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccount.id, storageAccount.apiVersion).keys[0].value}'
        }
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~3'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'dotnet'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccount.id, storageAccount.apiVersion).keys[0].value}'
        }
        // WEBSITE_CONTENTSHARE will also be auto-generated - https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings#website_contentshare
        // WEBSITE_RUN_FROM_PACKAGE will be set to 1 by func azure functionapp publish
      ]
    }
  }
  dependsOn: [
    appInsights
    hostingPlan
    storageAccount
  ]
}

Deploying it with the Azure CLI

Deploying this Bicep template couldn't be easier if you already know how to deploy a regular ARM template with the Azure CLI. The latest version of the Azure CLI supports Bicep out of the box with the same command used for ARM templates (az deployment group create).

In this example I show first creating a resource group with az group create and then deploying our template to it with az deployment group create. I also show how we can provide a parameter to the deployment (appName in our example).

$RESOURCE_GROUP = "MyResourceGroup"
$APP_NAME = "myfunctionapp"
$BICEP_FILE = "deploy.bicep"
$LOCATION = "westeurope"

# create a resource group
az group create -n $RESOURCE_GROUP -l $LOCATION

# deploy the bicep file directly
az deployment group create `
  --name mybicepdeployment `
  --resource-group $RESOURCE_GROUP `
  --template-file $BICEP_FILE `
  --parameters "appName=$APP_NAME"


A final thing to mention here is that the nice thing about Bicep is that you should only need to do this once. Now I've worked out how to create a Function App in Bicep, I can use this as a "module" in any other application that needs a Function App. This means that I can have even simpler top-level Bicep templates that express what I want in very generic terms (e.g. I need a Function App and a Cosmos DB database), and the lower-level templates which specify the details of how those are configured are not something I need to concern myself with if I'm happy to stick with the defaults I've used elsewhere.
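As a sketch of what such a top-level template might look like (assuming this Function App template is saved as functionapp.bicep - the file name and module identifier here are just for illustration), consuming it as a module is as simple as:

```bicep
// consume the Function App template as a module, supplying only the app name
module functionApp 'functionapp.bicep' = {
  name: 'functionAppDeployment' // name of the nested deployment
  params: {
    appName: 'myotherapp'
  }
}
```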

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


There is a huge variety of options in Azure for creating and configuring resources (e.g. creating a Function App or a Virtual Machine).

  1. Create resources manually in the Azure Portal (great for experimentation and learning)
  2. Create an ARM template (for declarative Infrastructure as Code, great for predictable deployment of production apps)
  3. Script creation of resources using the Azure CLI or Azure PowerShell (great for spinning up/tearing down dev/test resources)
  4. Use the Azure Resource Manager SDKs to create resources in your language of preference such as C# (great when your application itself needs to dynamically create Azure resources such as an Azure Container Instance)
  5. Use third party offerings such as Terraform or Pulumi (great when you need multi-cloud support or want to combine the best of Infrastructure as Code with custom logic)

So when Azure recently announced Bicep, you might be forgiven for wondering why we need yet another way to deploy resources in Azure.

What is Bicep?

Bicep is a "Domain Specific Language for deploying Azure resources declaratively". Probably the easiest way to think of it is that it's simply ARM templates with a much nicer syntax. Bicep code is transpiled to ARM templates. In other words if (like me) you avoid ARM templates in many situations because of their verbose and complex syntax, then Bicep offers a way to give you the strengths of ARM templates without their pain.

I'll do a followup post soon showing an example Bicep template for deploying an Azure Function app, but in this post I want to summarize what I like about Bicep, and also give some of my first impressions of how it can be improved going forwards.


First of all, it certainly achieves the (not difficult) goal of being less verbose and easier for a human to read and write than an ARM template. If you've not seen a Bicep template before, here's the definition for a storage account:

param storageAccountName string
param location string = resourceGroup().location

resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: storageAccountName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
    tier: 'Standard'
  }
}

Even better than just a nicer syntax, there is an excellent Visual Studio Code extension for Bicep that gives you syntax highlighting and auto-completion. This makes it very easy to discover the names of the required properties and assign the correct values.

I also really like the fact that Bicep is supported out of the box with the latest Azure CLI (and Azure PowerShell). This means you can directly deploy a Bicep template without the need to transpile it into an ARM template. Not only can it deploy, but it can also decompile an existing ARM template into a Bicep file (with az bicep decompile). So this gives you a great way to migrate existing templates into the newer syntax.

Another feature I like is that Bicep embraces modularity. You can create a Bicep file that defines your preferred way of configuring a storage account, or virtual machine. And then another Bicep template can include that definition, only needing to override parameters it wants to change. This means that over time, as you use Bicep more, you should find you can reuse pre-existing templates rather than needing to build everything up from scratch every time.

It's also nice that in the Bicep GitHub repo, you can find a generous collection of examples which should serve as a decent starting point for most of your needs.


I was able to get an Azure Function App deployed with Bicep fairly easily (despite not knowing about the sample templates which would have saved me some time). However, the getting started experience did highlight some ways in which I think Bicep can still be improved.

First, I'd love to see the Azure Portal able to generate Bicep files instead of ARM templates. And for these to be as terse as possible, rather than the verbose overspecified monstrosities of ARM templates that the Portal currently generates. Even better, I'd love to see a mode for the Azure CLI where instead of actually creating a resource with (say az functionapp create), it emits a simple Bicep template to perform that operation, with the settings you specified parameterized. Since the Azure CLI calls the ARM APIs under the hood, I don't see why that would be too difficult.

Second, most Azure resource types have hundreds of configurable parameters, and if you look at auto-generated ARM templates you'll see values supplied for everything. This can be frustrating when you are working out which properties you do and don't need to include when defining a resource. My general preference is to specify properties only where I am deviating from the default, or where I want to explicitly call out that a certain setting is important. I don't want my Bicep files cluttered with dozens of settings that I don't really understand but are in there because some other example template had them in. Really what is needed is a single place I can go to for excellent documentation of every property of an Azure resource type, including what its default value is if you don't specify it, and what the permitted values are. That doesn't seem to exist yet (or only in a very limited form).

Third, and this is a bugbear of mine with ARM templates in general, but the names things are called in the templates are often different to the names used in the official documentation. For example, the "consumption" "App Service Plan" is a "dynamic/Y1" "Server Farm" in ARM template-speak. This is unnecessarily confusing for people who don't know the historical names of Azure resources. I don't see why Bicep (or ARM) couldn't be extended to support aliases so that things can be referred to by their most current names.

Fourth, and this is also really a limitation of ARM templates that brings some unnecessary complexity into Bicep: some common operations seem to lack convenience functions. For example, to get the connection string for a storage account, rather than there being a method or property I can directly reference, the whole thing has to be constructed with a bunch of string concatenation calling lower-level helpers. I don't see why Bicep (or ARM) couldn't make tasks like this much simpler.

value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(storageAccount.id, storageAccount.apiVersion).keys[0].value}'


Bicep is an excellent replacement for ARM templates. Anywhere you are using ARM templates, you should consider switching to Bicep. And although it is a huge improvement over ARM templates, I think there is still plenty of scope for Bicep to be made even easier to work with, so I look forward to seeing how the product evolves over the coming years.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


With Azure Blob Storage it's possible to generate a Shared Access Signature (SAS) with which you can allow a third party time limited access to read (or write) a specific file in blob storage. You can also grant access to an entire container.

I blogged several years back about how to create a SAS token to allow upload of a blob, but things have moved on since then. Not only is there a brand new Blob Storage SDK, but there is also a new way to generate SAS tokens without the need to have the storage account key.

User Delegation SAS

The "standard" way to generate a SAS token is to use the storage account key. However, this assumes that you have the storage account key. If you want to use "managed identities", which is something I recommend wherever possible as a security best practice, then your application does not have the storage key. This means we need another way to generate shared access signatures.

This technique is called a "user delegation" SAS, and it allows you to sign the signature with Azure AD credentials instead of with the storage account key.

In this post I'll show the code to generate a user delegation SAS URI with the .NET Storage SDK. And I also want to cover a few gotchas, around the lifetime of those tokens, and concerning how you can test this code running locally.

Generating a User Delegation SAS

The first step is connecting to storage using Azure AD credentials. The new Azure SDK makes this very easy with DefaultAzureCredential. This helper class basically tries a variety of techniques in order to source the credentials to access the storage account.

It first checks for environment variables, and if they are not present, it tries to use a managed identity (this is what you'd typically want to use in production if possible). But then it has a bunch of additional fallback options that are great for local development. It's able to use the credentials you logged into Visual Studio, Visual Studio Code or the Azure CLI with. So in most development environments, this should just work.

Here's how we use DefaultAzureCredential to create a BlobServiceClient

var accountName = "mystorageaccount";
var blobEndpoint = $"https://{accountName}.blob.core.windows.net";
var credential = new DefaultAzureCredential();
var blobServiceClient = new BlobServiceClient(new Uri(blobEndpoint), credential);

Now, let's create a simple test file we can grant access to:

var containerClient = blobServiceClient.GetBlobContainerClient("mycontainer");
var blobClient = containerClient.GetBlobClient("secret/secret1.txt");
if (!await blobClient.ExistsAsync())
{
    using var ms = new MemoryStream(Encoding.UTF8.GetBytes("This is my secret blob"));
    await blobClient.UploadAsync(ms);
}

Now we need to generate the shared access signature. The first step is to create a user delegation key.

Note that the key can be valid for a maximum of 7 days. You get an error if you request a longer duration.

var userDelegationKey = (await blobServiceClient
    .GetUserDelegationKeyAsync(DateTimeOffset.UtcNow,
                               DateTimeOffset.UtcNow.AddDays(7))).Value;

Now we have the user delegation key, we can use the BlobSasBuilder and BlobUriBuilder helpers to generate a uri that can be used to access the file. Here I'm asking for 7 days access to this file. The lifetime of the SAS does not have to be the same as that of the user delegation key, but it cannot be longer. If you create a SAS URI with a longer lifetime than the user delegation key then you'll get a 403 error back.

var sasBuilder = new BlobSasBuilder()
{
    BlobContainerName = blobClient.BlobContainerName,
    BlobName = blobClient.Name,
    Resource = "b", // b for blob, c for container
    StartsOn = DateTimeOffset.UtcNow,
    ExpiresOn = DateTimeOffset.UtcNow.AddDays(7),
};

sasBuilder.SetPermissions(BlobSasPermissions.Read);

var blobUriBuilder = new BlobUriBuilder(blobClient.Uri)
{
    Sas = sasBuilder.ToSasQueryParameters(userDelegationKey,
                                          blobServiceClient.AccountName)
};

var sasUri = blobUriBuilder.ToUri();

The SAS URI can then be used to download the file until either the SAS expires, or the user delegation key expires (whichever happens first).

Here's some simple code you can use to check the SAS URI you generated actually works.

var h = new HttpClient();
try
{
    var contentSas = await h.GetStringAsync(sasUri);
    Console.WriteLine("Downloaded blob contents: " + contentSas);
}
catch (HttpRequestException hrx)
{
    Console.WriteLine("FAILED TO DOWNLOAD FROM SAS: " + hrx.Message);
}

Testing locally

If you tried to follow along with the steps above, running locally and using DefaultAzureCredential, you may have found it doesn't work.

The first reason for this is that there is an additional step you need to do, which is to grant yourself either the "Storage Blob Data Contributor" or "Storage Blob Data Reader" role for the container you want to access. This might take you by surprise as being the "owner" of the storage account is actually not sufficient.

You can test easily enough whether you have the required role with the following Azure CLI command, which checks if a blob exists. If you don't have the role, the command will fail:

$ACCOUNT_NAME = "mystorageaccount"
$CONTAINER_NAME = "mycontainer"

# use this to test if you have the correct permissions
az storage blob exists --account-name $ACCOUNT_NAME `
                        --container-name $CONTAINER_NAME `
                        --name blob1.txt --auth-mode login

Granting ourselves the role can be automated with the Azure CLI and is a useful thing to know how to do, as you'd need to do this to grant your managed identity this role as well.

First we need to get the Azure AD object ID for ourselves. I did this by looking myself up by email address:

$EMAIL_ADDRESS = '[email protected]'
$OBJECT_ID = az ad user list --query "[?mail=='$EMAIL_ADDRESS'].objectId" -o tsv

Next we'll need the identifier of our storage account which we can get like this

$STORAGE_ID = az storage account show -n $ACCOUNT_NAME --query id -o tsv

This returns a string containing the subscription id, resource group name and storage account name. For example: /subscriptions/110417df-78bc-4d9d-96cc-f115bf626cae/resourceGroups/myresgroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount

Now we can use this to add ourselves to the "Storage Blob Data Contributor" role, scoped to this container only like this:

az role assignment create `
    --role "Storage Blob Data Contributor" `
    --assignee $OBJECT_ID `
    --scope "$STORAGE_ID/blobServices/default/containers/$CONTAINER_NAME"

There was one final gotcha that I ran into, which meant my C# code was still not working: the DefaultAzureCredential was not selecting the correct Azure AD tenant id. Fortunately, it's possible to customize the Visual Studio tenant id, which finally allowed me to generate the user delegation SAS locally.

var azureCredentialOptions = new DefaultAzureCredentialOptions();
azureCredentialOptions.VisualStudioTenantId = "2300dcff-6371-45b0-a289-3a960041603a";
var credential = new DefaultAzureCredential(azureCredentialOptions);


Managed identities are a much more secure way for your cloud resources to access Storage Accounts but they do make some tasks like generating a SAS a bit more complex. However, I've shown here how we can assign the necessary role to our local user (or managed identity), and write C# code to generate a user delegation key allowing us to generate SAS tokens without ever needing to see the Storage Account Key. You are limited to only generating SAS tokens with a maximum lifetime of 7 days with this technique, but it's not really a good security practice to generate very long-lived SAS tokens, so this limitation is forcing you in the direction of more secure coding practices.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.