
New .csproj format

One really nice thing about the .NET Core tooling is the new csproj file format, which is a lot less verbose.

For example, if you create a brand new C# library project, targeting .NET Standard 2.0, then here's the entirety of the .csproj file:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>

It doesn't even require you to specify all the C# files to be compiled - it just assumes that all of them are wanted, which cuts down on a lot of noise.

Creating a NuGet package on build

If you want to create a NuGet package for your library, there's no need for a separate .nuspec file - you can just enable GeneratePackageOnBuild and define any metadata. The Visual Studio project properties dialog simplifies setting this up.

    <PropertyGroup>
        <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
        <Authors>Mark Heath</Authors>
    </PropertyGroup>


Another thing you can do is target more than one framework, which can be useful when you're creating a NuGet package intended to be used on several different versions of .NET. Here's how a project can target both .NET Standard 2.0 and .NET 4.6.2:
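The trick is to swap the singular TargetFramework element for the plural TargetFrameworks, with the target framework monikers separated by semicolons:

```xml
<PropertyGroup>
  <TargetFrameworks>netstandard2.0;net462</TargetFrameworks>
</PropertyGroup>
```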


If we build this now, we'll get a NuGet package containing separately built DLLs for both targets. By the way, I'm using the excellent NuGet Package Explorer utility here to look inside my .nupkg file.

NuGet Multi-Target

Conditional References

One issue you might run into is that you need to reference different assemblies or NuGet packages depending on which target you're building for.

For example, if we add the following method to our library, it will compile successfully for the .NET Standard 2.0 target, but the .NET 4.6.2 target will fail to build because it can't find the definition of HttpUtility.

public string JavaScriptEncode(string input)
{
    return HttpUtility.JavaScriptStringEncode(input);
}

To fix this we need to add a reference to the System.Web assembly, but it's only needed for the .NET 4.6.2 target. To do that, we simply add a conditional reference to our .csproj file using the following syntax:

<ItemGroup Condition=" '$(TargetFramework)' == 'net462' ">
    <Reference Include="System.Web" />
</ItemGroup>

Unfortunately, there doesn't seem to be any way of getting Visual Studio to auto-generate this for you, and the conditional syntax can be hard to remember - which is why I've documented it here, so I can find it again next time I forget!

If it was a NuGet package you wanted to conditionally reference, that's done in the same way, just using PackageReference instead. I'm also showing here how you can use multiple conditions, so we'll reference this whether we're targeting .NET 4.6.2 or 3.5:

<ItemGroup Condition=" '$(TargetFramework)' == 'net462' or '$(TargetFramework)' == 'net35'">
    <PackageReference Include="NAudio" Version="1.8.5" />
</ItemGroup>

Conditional Compile

Sometimes you might want to exclude some C# files from being compiled for a certain target. For example, here's how in NAudio for the .NET 3.5 target, I'm referencing the System.Windows.Forms assembly, and excluding three specific files from being compiled:

<ItemGroup Condition=" '$(TargetFramework)' == 'net35' ">
    <Reference Include="System.Windows.Forms" />
    <Compile Remove="Wave\WaveOutputs\WasapiOutRT.cs" />
    <Compile Remove="Wave\WaveInputs\WasapiCaptureRT.cs" />
    <Compile Remove="Wave\WaveOutputs\WaveFileWriterRT.cs" />
</ItemGroup>


The new .csproj file format makes it really easy to develop NuGet packages that multi-target several frameworks, but you will likely occasionally need to use the conditional syntax to make this work.


Recently, I've been posting tutorials about how to deploy Azure Function Apps with the Azure CLI and create a managed identity to enable your Function App to access Key Vault. I love how easy the Azure CLI makes it to quickly deploy and configure infrastructure in Azure.

But is the Azure CLI the right tool for the job? After all, aren't we supposed to be using ARM templates? If you've not used them before, ARM templates are simply JSON files describing your infrastructure, which can be deployed with a single command.

My general recommendation is that while the Azure CLI is great for experimenting and prototyping, once you're ready to push to production, it would be a good idea to create ARM templates and use them instead.

However, in November, an interesting tweet caught my eye. Pascal Naber wrote a blog post making the case that ARM is unnecessarily complex compared to just using the Azure CLI. And I have to admit, I have some sympathy with this point of view. In the article he shows a 200+ line ARM template and contrasts it with about 10 lines of Azure CLI to achieve the same result.

So in this article I want to give my thoughts on the merits of the two different approaches: ARM templates which are a very declarative way of expressing your infrastructure (i.e. what should be deployed), versus Azure CLI scripts which represent a more imperative approach (i.e. how it should be deployed).

Infrastructure as Code

The term "infrastructure as code" is used to express the idea that your infrastructure deployment should be automated and repeatable, and the "code" that defines your infrastructure should be stored in version control. This makes a lot of sense: you don't want error-prone manual processes to be involved in the deployment of your application, and you want to be sure that if all your infrastructure was torn down, you could easily recreate exactly the same environment.

But "infrastructure as code" doesn't dictate what file format or DSL our infrastructure should be defined in. The most common approaches are JSON (used by ARM templates) and YAML (used by Kubernetes). Interestingly, Service Fabric Mesh has introduced a YAML format that gets converted behind the scenes into an ARM template, presumably because the YAML allows a simpler way of expressing the makeup of the application (we'll come back to this idea later).

However, there's no obvious reason why a PowerShell or Bash script couldn't also count as "infrastructure as code", or even an application written in JavaScript or C#. And thanks to the Azure CLI, Azure PowerShell, Azure SDK for .NET and Azure Node SDK, you can easily use any of those options to automate deployments.

The key question is not whether both approaches count as "infrastructure as code", but whether declarative ways of defining the infrastructure are better than imperative ones. A JSON document contains no logic - it simply expresses all the "resources" that form the infrastructure and their configuration. Whereas if we choose to write a script using the Azure CLI, it is inherently imperative - it describes the steps required to provision the infrastructure.
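For example, here's roughly what a storage account "resource" looks like in an ARM template - a sketch along the lines of the Azure Quickstart templates, containing no logic, just the desired configuration:

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2018-07-01",
  "name": "[variables('storageAccountName')]",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_LRS" },
  "kind": "StorageV2"
}
```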

So which is best?


Well, the received wisdom is definitely that declarative is best. Azure strongly encourages you to use JSON-based ARM templates, Service Fabric Mesh and Docker use YAML, and other popular infrastructure as code services like Terraform have their own file format designed to be a more readable alternative to JSON.

In most cases, you are simply defining the "resources" that form your infrastructure - e.g. I want a SQL Server, a Storage Account, an App Service Plan and a Function App. You also get to specify all the properties: what location should the resources be in, what pricing tier/sizing do I want, what special configuration settings do I need to enable? Most of these formats also allow you to include the application code itself as a configuration property: you can specify what version of a Docker image your Web App should run, or what GitHub repository the source code for your Function App can be found in, allowing a fully-working application to be deployed with a single command.

There are several key benefits to the declarative approach. First of all, it uses a desired state approach, which allows for incremental and idempotent deployments. In other words, your template defines what resources you want to be present, and so the act of deploying that template will only take effect if those resources are not already present, or are not in the state you requested. This means that deploying an ARM template is idempotent - there is no danger in deploying it twice - you won't end up with double of everything, or errors on the second run-through.

There are some other nice benefits to declarative template files. They can be validated in advance of running, greatly reducing the chance that you could end up with a half-complete deployment. The underlying deployment engine can intelligently optimize by identifying which resources are needed first and what steps can be performed in parallel. Any logic to retry actions in the case of transient failures is also built into the template deployment engine. And templates can be parameterized, allowing you to use the same template to deploy to staging as well as production. Parameters also enable you to avoid storing secrets in templates.
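Deploying a parameterized template is then a single CLI command, with validation available as a separate step. This is a sketch - it assumes a template file called azuredeploy.json that takes an environment parameter, and of course requires you to be logged in to Azure:

```shell
# validate the template first, then deploy; both commands are safe to re-run
az group deployment validate -g MyResourceGroup --template-file azuredeploy.json --parameters environment=staging
az group deployment create -g MyResourceGroup --template-file azuredeploy.json --parameters environment=staging
```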

But it's not all great. Declarative template formats like ARM tend to suffer from a number of weaknesses. The templates themselves are often very verbose, especially if you get a tool to auto-generate them, and if you prefer to hand-roll them, the documentation is often sparse, making it a cumbersome and error-prone process. When I build ARM templates I usually start by copying one of the Azure Quickstart templates and adapting it to my needs. But often that requires me to also visit resources.azure.com to attempt to deduce what template setting is needed to enable a feature I only know how to turn on via the portal. It can be a painfully slow process.

Another issue is that although YAML and JSON files are touted as being "human readable", the fact is that they quickly lose their readability once they go beyond a screen-full of text, as Pascal's example clearly demonstrated.

And there are some practical annoyances. For example, a while ago I deployed a resource group that used some secrets. I parameterized them in the template (as is the best practice), and so when I initially deployed the ARM template, I provided those secret values. But the trouble was, now every time I wanted to redeploy the template because of some other unrelated change, I needed to source those secret values again even though they weren't modified. There didn't seem to be an obvious way of asking it to simply leave those secrets with the values they had on a previous deployment.

And this brings me onto the final issue that you inevitably run into with these templates: they end up requiring their own pseudo-programming language. In ARM templates, there are often dependencies between items. I need the Storage Account to be created before the Function App, because the Function App has an App Setting pointing at the connection string for the Storage Account. In the case of a web app that talks to a database it might be even more complex, with the database needing the web app's IP address in order to set up firewall rules, and the web app needing the database's connection string, resulting in a circular dependency.

The ARM template syntax has the concept of 'variables' which can be calculated from parameters, and can be manipulated using various helper functions such as 'concat' and 'listkeys' as you can see in the following example:

    "name": "AzureWebJobsStorage",
    "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountId'),'2015-05-01-preview').key1)]"

And this seems to be an inevitable pattern in any declarative template format that attempts to define something moderately complex - you end up wanting regular programming constructs, such as conditional expressions, string manipulations, and loops. Here's a snippet from an API Management policy defined in XML I saw recently that you can see has also introduced a level of scripting.

<set-header name="X-User-Groups" exists-action="override">
    @(string.Join(";", (from item in context.User.Groups select item.Name)))
</set-header>

The frustration I have with these DSLs within templates is that they are very limiting, lack support for intellisense and syntax highlighting, and tend to make our templates more indecipherable and fragile. Escaping values correctly can become a real headache as you can find yourself encoding JSON strings within JSON strings.


So why not just write our deployment scripts in a regular scripting or programming language? There are some obvious benefits. The language already has familiar syntax, supporting conditional steps, storing and manipulating variables for later use, generating unique names according to a custom naming convention, and much more. Our editors can help us with intellisense, syntax highlighting and refactoring shortcuts.

Also, we can follow the principles of "clean code" and extract blocks of logic into reusable methods. So I might write a method that knows how to create an Azure Function App configured just the way I like it, with specific features enabled, and specific resource tags that I always apply. This allows the top-level deployment script/code to read very naturally whilst hiding the less interesting or repetitive details at a lower level.

For example, the fluent Azure C# SDK syntax gives an idea of what this could look like. Here's creating a web app:

// a sketch of the fluent syntax - appName and rgName are defined elsewhere
var app1 = azure.WebApps.Define(appName)
    .WithRegion(Region.EuropeWest)
    .WithNewResourceGroup(rgName)
    .WithNewWindowsPlan(PricingTier.StandardS1)
    .Create();

And you could easily build upon this approach by defining your own custom extension methods.

Just like ARM templates, imperative deployment scripts can easily be parameterized, ensuring you keep secrets out of source control, and can reuse the same script for deploying to different environments.
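For example, a bash deployment script might take the target environment as a parameter and derive resource names from it (a hypothetical sketch - the resource names are made up):

```shell
#!/bin/bash
# usage: ./deploy.sh <environment>   e.g. ./deploy.sh staging
ENVIRONMENT=${1:?"please specify an environment"}
RESOURCE_GROUP="mydemoapp-$ENVIRONMENT"

az group create -n "$RESOURCE_GROUP" -l westeurope
az appservice plan create -n "mydemoapp-plan-$ENVIRONMENT" -g "$RESOURCE_GROUP" --sku B1
```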

But imperative deployment scripts like this do potentially have some serious drawbacks. The first is: what about idempotency? If I run my script twice, will it fail the second time because things are already there? Can it work out what's missing and only create that? We don't want to bloat our scripts with lots of conditional logic, checking whether each resource exists and only creating it if it's missing - but it turns out that idempotency isn't all that hard to achieve. In fact, Pascal Naber recently posted a gist showing an idempotent bash script using the Azure CLI to deploy a Function App configured to access Key Vault. You can safely run it multiple times.

For example if I run the following Azure CLI commands multiple times, I won't get any errors:

az group create -n "IdempotentTest" -l "west europe"
az appservice plan create -n "IdempotentTest" -g "IdempotentTest" --sku B1

But what about the desired state capabilities of a declarative framework like ARM templates? What if we wanted a Standard rather than Basic tier app service plan? Let's try:

az appservice plan create -n "IdempotentTest" -g "IdempotentTest" --sku S1

And this works - our app service plan gets upgraded to the standard tier! Let's make it harder. What if we decide it should be a Linux app service plan:

az appservice plan create -n "IdempotentTest" -g "IdempotentTest" `
                --sku S1 --is-linux

And now we get an error - "You cannot change the OS hosting your app at this time. Please recreate your app with the desired OS." Although, to be fair, I'm not sure an ARM template deployment would fare any better attempting to make this change. Not all modifications to desired state can be straightforwardly implemented.

To be honest, I was a little surprised by this. I hadn't realised the Azure CLI had this capability, and it makes it a much more competitive alternative to ARM templates. I haven't tried the same thing with the Azure SDK for .NET - that would be an interesting experiment for the future.

This leaves me thinking that ARM templates actually offer very few tangible benefits over using a scripting approach with the Azure CLI. Perhaps one weakness of the scripting approach is that idempotency is not automatic. You'd have to think carefully about what the conditional steps and other logic in your scripts are doing. For example, if you generate a random suffix for a resource name, like I do in many of my PowerShell scripts, then straight away you've lost idempotency - you'd need custom code to check whether the resource already exists and find out what random suffix you used last time.
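Having said that, the CLI does make that kind of existence check easy to script. Here's a bash sketch that only creates a resource group when it isn't already there:

```shell
# 'az group exists' prints true or false, so we only create when needed
if [ "$(az group exists -n IdempotentTest)" = "false" ]; then
    az group create -n IdempotentTest -l westeurope
fi
```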

But it's interesting that we are starting to see this approach to infrastructure as code gaining momentum elsewhere. I've not had a chance to play with Pulumi yet, but it seems to be taking a very similar philosophy - define your infrastructure in JavaScript, taking advantage of the expressiveness, familiarity, reusability and abstractions that a regular programming language can offer.

The Verdict

There are good reasons why ARM templates are still the recommended way to deploy resources to Azure. They help you avoid a lot of pitfalls, and still have a few benefits that are hard to replicate with a scripting or regular programming language. But they come at a cost of complexity and are generally unfriendly for developers to understand and tweak. It feels to me like we're not too far away from code-based approaches being able to offer the same benefits but with a much simpler and more developer-friendly syntax. The Azure CLI already seems very close so long as you take a sensible approach to what additional actions your script performs.

Maybe what's needed is simply a much easier way to generate the templates in the first place - if I can write a very simple script that produces an ARM template, then I don't need to worry about how verbose the resulting template is. It seems to me that's what the Service Fabric Mesh team decided by choosing to create a YAML resource definition that gets compiled into ARM. (Although I'm sure that before long the YAML will start adding DSL-like constructs for things like string manipulation.)

Anyway, thanks for sticking with this rather long and rambling post. I'm sure there's a lot more that could be said on the strengths and weaknesses of both approaches, so I welcome your feedback in the comments!

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


Back in Feb 2017 I wrote about how you can deploy an Azure Web App by zipping it up and pushing it to App Service with the Kudu REST API. But later that year, a much better new "zip deploy API" was announced, and I wrote another article explaining how to use that. However, more recently, an even newer approach, known as "run from package" has been announced, and is arguably now the best way to deploy your web apps and function apps.

So in this post, I'll show some examples of using "Run from Package" to deploy a simple website. As usual, I'll be using the Azure CLI, from PowerShell.

The way "Run from Package" works is that you simply set up a special App Setting called WEBSITE_RUN_FROM_PACKAGE and its value tells App Service where to find the zip containing your application. There are actually two options available to us. The zip file can be stored at any publicly available URI, so you can just point at a zip file in Azure Blob Storage. Or you can just upload the zip file directly to App Service and update a text file that points at it. We'll see both options in action.

Step 1 - Create an empty web app

We'll start off by creating a resource group, an app service plan and then putting an empty web app in.

$location = "West Europe"
$resGroupName = "RunFromPackageDemo"
az group create -n $resGroupName -l $location

$appServicePlanName = "RunFromPackageDemo"
az appservice plan create -n $appServicePlanName -g $resGroupName --sku B1

$webAppName = "runfrompackagedemo1"
az webapp create -n $webAppName -g $resGroupName --plan $appServicePlanName

It is at this point that we could have used the existing zip deploy API to upload a zip of our application directly to this web app. Behind the scenes, the API would unzip the contents of the uploaded zip into the wwwroot folder. It's very easy to automate this with the Azure CLI:

az webapp deployment source config-zip -n $webAppName `
                -g $resGroupName --src myApp.zip

But let's not do that for now. Instead, we'll see how to use Run from Package.

Step 2 - Upload the zip to blob storage and generate a SAS token

If we are opting for the approach where our WEBSITE_RUN_FROM_PACKAGE setting points at a URI, we need somewhere to store the zip, and an Azure blob storage container is a good choice. The recommendation is to use a private container, and generate a SAS token to secure access to the zip.

Here's how we could use the Azure CLI to automate creating a new storage account with a private container to store our zip files:

$storageAccountName = "runfrompackagedemo1"
az storage account create -n $storageAccountName -g $resGroupName `
    --sku "Standard_LRS"

# get the connection string and save it as an environment variable
# (the subsequent az storage commands pick it up automatically)
$env:AZURE_STORAGE_CONNECTION_STRING = az storage account show-connection-string `
                    -n $storageAccountName -g $resGroupName `
                    --query "connectionString" -o tsv

$containerName = "assets"
az storage container create -n $containerName --public-access off

And let's make a really simple example website to deploy - just a single index.html - zipped up as version1.zip:

Write-Output "<h1>Version 1</h1>" > "index.html"
$zipName = "version1.zip"
Compress-Archive -Path "index.html" -DestinationPath $zipName

And finally, let's again use the Azure CLI to upload version1.zip to blob storage and generate a SAS URL for it. I'm giving mine a five-year lifetime in this example. It would appear that the URL needs to remain valid for as long as you want the site to work, so you should bear that in mind if you choose this technique. Remember that the SAS will be invalidated if you cycle the keys for your storage account. Normally I consider long-lived SAS tokens to be an anti-pattern, but in this case I'm less concerned, since application binaries rarely contain very sensitive information.

# upload the zip to blob storage
$blobName = "version1.zip"
az storage blob upload -c $containerName -f $zipName -n $blobName

# generate a read-only SAS token that expires in 5 years
$expiry = (Get-Date).ToUniversalTime().AddYears(5).ToString("yyyy-MM-dd\THH\:mm\Z")
$sas = az storage blob generate-sas -c $containerName -n $blobName `
    --permissions r -o tsv `
    --expiry $expiry

# construct a SAS URL out of the blob's URL plus the SAS token
$blobUrl = az storage blob url -c $containerName -n $blobName -o tsv
$sasUrl = "$($blobUrl)?$($sas)"

Step 3 - Point the Web App at the zip in blob storage

Setting the app setting ought to be straightforward. The setting name is WEBSITE_RUN_FROM_PACKAGE and the value is the SAS URL we just generated. But due to a nasty escaping issue in the Azure CLI (see this issue and this issue) we need an ugly workaround (note that this is only necessary when setting the value via the Azure CLI):

# escape the & characters in the SAS URL so they survive the Azure CLI
$escapedUrl = $sasUrl.Replace("&","^^^&")

# set the app setting
az webapp config appsettings set -n $webAppName -g $resGroupName `
    --settings "WEBSITE_RUN_FROM_PACKAGE=$escapedUrl"

And that's it. With that one app setting, we've deployed our application. And if we visit our website, we'll see "Version 1".

If we were to repeat the process, creating a version2.zip file, uploading it to blob storage, generating a SAS URL, and updating the WEBSITE_RUN_FROM_PACKAGE application setting, then we'd very soon see the new version in place.

Why bother?

Now you might be thinking - why go to all this trouble? What was wrong with the previous zip deployment API? And of course, the existing zip API still works just fine and you can keep using it if it meets your needs. But there are some benefits to taking the "Run from Package" approach, which you can read about in more detail here, but I'll briefly summarise them:

  • Ability to rapidly switch back to a previous version without needing to re-upload anything. Your blob storage container functions a bit like a Docker container registry, containing versioned artefacts of your web applications.
  • A much more atomic switchover. Previously your new zip got unzipped over the top of the previous one, meaning that there was a small period during upgrade where your app was taken offline to avoid inconsistency. This approach does still do a site restart, but overall the whole upgrade is much faster.
  • Much faster cold start performance for Azure Functions running on the consumption plan, especially when the zip contains a large number of files (e.g. a Node.js application)
  • The wwwroot folder is now read-only. This could be interpreted as a disadvantage, as there are some applications that write into their own wwwroot folder - e.g. storing user data in App_Data - but this is no longer considered good practice for scalable cloud applications, so being denied this ability is a good thing, and it improves predictability: you know exactly what code you're running.

What if I don't want to use blob storage?

Now not everyone will like the idea of needing to point the web app at a blob container, with the inherent possibility that at some point in the future the app could break because someone inadvertently deleted the storage account or cycled the keys.

And "Run from Package" offers a second alternative. With this model, you just set the WEBSITE_RUN_FROM_PACKAGE app setting to the value 1. So let's first use the Azure CLI to update our app setting to use this technique:

az webapp config appsettings set -n $webAppName -g $resGroupName `
    --settings "WEBSITE_RUN_FROM_PACKAGE=1"

Next you need to get your zip file into the D:\home\data\SitePackages folder of your web app and update a packagename.txt file in the same folder to hold the name of the zip file you want to be live. Uploading the zip and editing packagename.txt are both possible with the Kudu REST API, but there's an easier way. When WEBSITE_RUN_FROM_PACKAGE has the value 1, whenever you upload a zip file with the zip deployment API, instead of unzipping its contents to wwwroot, it will save the zip into SitePackages and update packagename.txt for you.

Suppose we do two deployments of our application using this technique:

az webapp deployment source config-zip -n $webAppName `
                    -g $resGroupName --src version2.zip
az webapp deployment source config-zip -n $webAppName `
                    -g $resGroupName --src version3.zip

We'll see that version3.zip is now live, but our SitePackages folder will actually contain both zip files, allowing us to easily switch back if we need to. If we use the Kudu debug console (accessible at https://mywebapp.scm.azurewebsites.net/DebugConsole) to explore what's in SitePackages, here's what we see:

 Volume in drive D is Windows
 Volume Serial Number is E859-323E

 Directory of D:\home\data\SitePackages

01/14/2019  02:49 PM    <DIR>          .
01/14/2019  02:49 PM    <DIR>          ..
01/14/2019  02:47 PM               157 20190114144716.zip
01/14/2019  02:49 PM               157 20190114144929.zip
01/14/2019  02:49 PM                18 packagename.txt
               3 File(s)            332 bytes
               2 Dir(s)  10,737,258,496 bytes free

D:\home\data\SitePackages>type packagename.txt
20190114144929.zip

As you can see, the two uploaded zips have been named with timestamps, and packagename.txt has been updated for us. I like the simplicity of being able to just use the zip deployment API to automate this, but if you wanted to be able to automate rolling back to the previous version, there would be a bit more work involved (see my previous post for some tips on calling the Kudu REST APIs you'd need to use to automate this).


The new "Run from Package" deployment option offers several benefits over previous techniques for deploying Web Apps and Function Apps, and gives you the choice between two places to store your zip files. You can access my full PowerShell & Azure CLI demo script to try this out for yourself in this GitHub Gist. Although I only showed deployment of a very simple static website here, you can use exactly the same technique to deploy any Web App or Function App.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions