
Thanks to a tweet from Bob Martin, I stumbled across a fascinating talk by Sarah Mei, entitled “Is Your Code Too SOLID?”. In the talk she distinguishes between the concepts of “strategy” and “tactics”, saying that although the “SOLID principles” are a good “strategy” for making our codebases more maintainable (i.e. if our code is SOLID then it is easy to change), they don’t provide us with concrete “tactics” for how to actually implement that strategy.

In other words, what practical advice can we give developers to enable them to write SOLID code, or (more to the point) move existing code in the right direction?

In response to this question, Sarah offers an acronym of her own, “STABLE” – giving six practical tactics for helping developers implement the strategy.

I must confess that I tend to have low expectations of acronyms. They tend to suffer from awkwardly named points, redundancy or key omissions. But as she worked through each of the six points, I was really impressed with how well it hangs together, and to be honest, I was left wondering why this acronym hasn’t caught on (at least in the circles I move in). It certainly deserves to be more widely known as it provides a very practical and concrete set of talking points for teams looking for ways to improve their codebase.

So let’s look at each of the six points, and I’ll give my own take on them.

S = Smell your code

The first tactic is simply to learn to identify “code smells” in your code. Your team needs to be able to identify what’s wrong with a class or method, and have a common vocabulary for expressing it. This might require a regular “lunch and learn” session where different code smells are discussed, with explanations of why such code causes problems.

What I love about this tactic is how modest its goals are. It doesn’t ask us to fix anything (yet); it simply asks us to learn to see problems. If a developer gets a better sense of (code) smell, not only are they able to spot problems with existing code, but they will also hopefully stop in their tracks when they realise they are introducing the same smells into new code they write.

T = Tiny problems first

The second tactic is to tackle “tiny” problems first. Once you’ve identified a bunch of code smells, some of them may require wholesale, wide-ranging rework of multiple classes to resolve. Whilst there is a time and a place for doing that, it can often result in inadvertently breaking working code, and give a bad name to any future attempts at “refactoring” the code.

The “tiny problems first” tactic encourages us to start with the simplest changes that move the codebase in the right direction. That might just be giving a variable a meaningful name, or extracting a block of code into a well-named function. Again, I love how modest this tactic is: it gives every developer, no matter how junior they are, a realistic path towards improving the quality of the overall codebase.

Obviously at some point we will need to address some of the more deep-rooted problems in our system. Sarah points out that you can “see the large problems better by clearing away the small problems obscuring them”.  But there’s usually something else that needs to be done before we can tackle the larger problems, and that’s where the third tactic comes in …

A = Augment your tests

The idea behind “refactoring” is that you improve the structure or design of existing code without modifying its behaviour. A good suite of unit tests gives you the freedom to do this with confidence, knowing that if all the tests pass after the refactoring, you’ve not broken anything.

The reality in many software projects is unfortunately a long way from this ideal. If your automated tests (unit or integration) only cover a small portion of the functionality, then any kind of restructuring of the code is inherently risky. It means you need to perform costly manual testing every time you change anything.

So this tactic is about adding in additional tests that will provide a safety net for the changes you need to perform. Sarah suggests focusing on adding integration tests at one level higher than the class you’re working on. She says “test behaviour, not implementation. That’s why you go one level up. You need tests that describe the behaviour you want to keep”. Often in a legacy codebase there are far too many fragile tests that are tightly coupled to implementation details.

Again, I like this tactic because it is realistic and achievable – we all ought to be able to find time to add at least one test to the code we’re working on, and if we can keep the focus of those tests on behaviour rather than implementation, they will provide a fantastic safety net for us to address the larger problems in our code.

B = Back up (when it’s useful)

This tactic states that “when the code has an abstraction in it that is no longer serving you well, sometimes the most useful thing to do is to ‘rewind’ the code into a more procedural state, put all the duplication back, and start again”.

This is the boldest of the tactics so far, and may feel like a backwards step, but I think it’s very helpful to at least put this on the table as one of the options at our disposal. As Sarah points out we can easily get caught by the sunk cost fallacy. “Don’t forge ahead with a set of objects that don’t even fit now, let alone in the future”.

By clearing these poorly conceived abstractions from our codebase, we leave ourselves space to view the problem from a fresh perspective and come up with new abstractions that better fit the needs of our business requirements. Remember, we tend to make a lot of our architectural decisions at the start of a project, which is actually when we have the least understanding of what the system needs to do. So it shouldn’t surprise us if we took some wrong steps along the way, and we shouldn’t be afraid to say “we got this wrong, let’s undo it”.

L = Leave it better than you found it

The fifth tactic is often known as the “boy scout rule”, with the idea that you leave the campsite in a better state than you found it. Applied to code, it means that whenever I work on a method, I’ll attempt to make minor improvements to it, often through small refactorings like renaming things.

Now, this tactic at first seemed to me to be a restatement of tactic two (“Tiny Problems First”). Perhaps like many acronyms, STABLE suffers from a bit of redundancy to create a contrived word out of the points.

But on reflection, I think there are two separate questions being answered here.

Tactic two asks “what order should I tackle problems in?”, and answers, “solve the tiny problems first”.

Tactic five asks “when should I tackle these problems?” and answers, “do it when you’re already working on that area of code”.

I often tell my teams that the best time to make improvements to a class or method is when you’re actively working on that code, maybe fixing a bug or adding a new feature. You’ve probably spent a lot of time reading and understanding the code. You have a good grasp on how it currently works and what it does. You probably also have some opinions on how the code could be improved (in other words you’ve already done tactic 1 – you’ve smelt the code and not appreciated its fragrance).

The temptation at this point is to simply write an email with a “rant” about how bad this code is, and say that “we should plan to rewrite it in the future”. Now of course you likely don’t have time to fix everything, but even a small investment of additional time before you move onto the next thing would allow you to fix some tiny problems (tactic 2), augment the tests (tactic 3) and leave the code better than you found it (tactic 5).

E = Expect good reasons

The video leaves us on a bit of a cliff-hanger here. The audio breaks off at this point, and so although we can see the slide for the final point, we are left guessing what “expect good reasons” might mean! Thankfully, Sarah’s slide deck is available online and contains a full transcript.

Tactic six asks us to “assume past developers had good reasons to write the code they did”. This complements tactic one (just as 2 and 5 do, giving these tactics a neat chiastic structure). Often when we smell problems in the existing code, our initial reaction is to criticise the original developer. “What incompetence!” “What were they thinking?”

But as I argued in my technical debt course on Pluralsight, “the blame game” is counter-productive and can result in a toxic atmosphere. As a team we should be taking collective responsibility for the quality of our code and focusing on how we can move in the right direction, rather than recriminating about how we got in this mess.

We need to start from the assumption that all the developers on the team are genuinely trying their best, and if the code they produce is falling short, it highlights the need for training, code reviews, mentoring and pair programming to help the team move towards a shared understanding of the sort of code we want to write going forwards.

No need to stop the world

Sarah finishes her talk by pointing out that these tactics allow you to make real progress over time without having to perform a “stop the world” refactoring, where feature development has to stop in order to sort out the mess in the code. This is important, as the business can very quickly lose patience with being asked to put feature development on hold so you can repay “technical debt”.

So thank you Sarah for this insightful talk and very helpful set of tactics. I’m actually planning to present an updated version of my “technical debt” talk to some user groups over the next few months (let me know if you’re in the South of England and would like me to visit your group). In the talk I place a strong emphasis on “practical techniques for repaying technical debt”, and the STABLE tactics provide a fresh perspective which I look forward to sharing both with user groups and the developers I work with.


Microsoft recently announced a new zip API to allow publishing of Azure web apps, functions and webjobs. This may seem like overkill given the huge variety of existing deployment options (Local Git, GitHub, MSDeploy, FTP, Kudu REST API, VSTS, Bitbucket, Dropbox, Visual Studio, …).

However, many of the existing options assume you want to use Kudu’s CI build system. This is great for small demo apps, but in many commercial scenarios you may prefer to build the code yourself and just publish the compiled assets.

The new zip API simplifies things by letting you simply upload a zip file containing everything that should end up in the site\wwwroot folder.

I thought I’d give it a try and wrote a PowerShell script to create a web app (using the Azure CLI of course!) and then push a zip file to it.

First let’s create an app service plan and web app:

# variables
$resourceGroup = "ZipDeployTest"
$location = "westeurope"
$appName = "zipdeploytest1"
$planName = "zipdeploytestplan"

# create resource group
az group create -n $resourceGroup -l $location

# create the app service plan
az appservice plan create -n $planName -g $resourceGroup -l $location --sku B1

# create the webapp
az webapp create -n $appName -g $resourceGroup --plan $planName

Now we need to get the user name and password for deploying. We can use Azure CLI queries to get those:

# get the deployment credentials
$user = az webapp deployment list-publishing-profiles -n $appName -g $resourceGroup `
    --query "[?publishMethod=='MSDeploy'].userName" -o tsv

$pass = az webapp deployment list-publishing-profiles -n $appName -g $resourceGroup `
    --query "[?publishMethod=='MSDeploy'].userPWD" -o tsv

And now it’s just a case of calling the api/zipDeploy endpoint for our site, with the credentials as a basic auth header (a little bit fiddly to set up in PowerShell) and the zip file passed as an -InFile:

# basic auth with Invoke-WebRequest: https://stackoverflow.com/a/27951845/7532
$pair = "$($user):$($pass)"
$encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))
$basicAuthValue = "Basic $encodedCreds"

$Headers = @{
    Authorization = $basicAuthValue
}

$sourceFilePath = "publish.zip" # this is what you want to go into wwwroot

# use kudu deploy from zip file
Invoke-WebRequest -Uri https://$appName.scm.azurewebsites.net/api/zipdeploy -Headers $Headers `
    -InFile $sourceFilePath -ContentType "multipart/form-data" -Method Post

And that’s all there is to it. You can use this technique with web apps, function apps and webjobs.

One thing you might be wondering is – what happens to anything already in my wwwroot folder? Suppose my app saves some files in the app_data folder – will they get deleted?

The answer is that zip deploy will delete everything that was uploaded by a previous zip deploy, but leave any files that got there by any other means. This is a nice approach that keeps any app created files safe, but allows dead application code to get deleted.
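
If you want to check for yourself what is currently sitting in wwwroot, the Kudu REST API also has a VFS endpoint that lists the contents of a folder. Here's a quick sketch (not from the original deploy script), reusing the $appName and $Headers variables from above:

# list the current contents of wwwroot via the Kudu VFS API
Invoke-RestMethod -Uri "https://$appName.scm.azurewebsites.net/api/vfs/site/wwwroot/" -Headers $Headers |
    Select-Object name, mtime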

As an example of creating your zip package, here’s a PowerShell script I created that prepares a publish.zip file containing three .NET Core Azure WebJobs. As you can see, I’m using dotnet publish to publish the code, and then putting it into the app_data\jobs subfolder as required by WebJobs, as well as copying a few other files into their appropriate places. The full GitHub project with PowerShell scripts to create the publish.zip and deploy it with the zip API is available here.

$publishFolder = "publish"

# delete any previous publish
if(Test-path $publishFolder) {Remove-Item -Recurse -Force $publishFolder}

# publish the webjobs
dotnet publish triggered1 -c Release -o ..\$publishFolder\app_data\jobs\triggered\triggered1
Copy-Item triggered1\run.cmd publish\app_data\jobs\triggered\triggered1

dotnet publish scheduled1 -c Release -o ..\$publishFolder\app_data\jobs\triggered\scheduled1
Copy-Item scheduled1\run.cmd publish\app_data\jobs\triggered\scheduled1
Copy-Item scheduled1\settings.job publish\app_data\jobs\triggered\scheduled1

dotnet publish continuous1 -c Release -o ..\$publishFolder\app_data\jobs\continuous\continuous1
Copy-Item continuous1\run.cmd publish\app_data\jobs\continuous\continuous1
Copy-Item continuous1\settings.job publish\app_data\jobs\continuous\continuous1

# zip the publish folder
$destination = "publish.zip"
if(Test-path $destination) {Remove-item $destination}
Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::CreateFromDirectory($publishFolder, $destination)

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


I'm really pleased to announce that my latest Pluralsight course, Azure CLI: Getting Started, is now live. The subject will be no surprise to regular followers of my blog as I’ve been publishing a series of Azure CLI tutorials for the last two months.

For those who’ve not tried it yet, the Azure CLI is a powerful, cross-platform, command line utility for managing your Azure resources. You can use it to create Virtual Machines, deploy ARM templates, set up your web apps to synchronize from GitHub, open firewall ports on your SQL Server, create Active Directory applications and service principals and much more. And I show how to do all that in the course.
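
To give a flavour of how concise this can be, here's a single illustrative command that creates an Ubuntu VM. The resource group and VM names here are just examples, and it assumes the resource group already exists:

# create a Linux VM, letting the CLI generate SSH keys for us
az vm create -g MyResourceGroup -n MyVm --image UbuntuLTS --generate-ssh-keys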

Because it’s open source, it’s moving fast and picking up lots of new features. There are still a few gaps (like ServiceBus at the time of writing), but what’s impressive is the fact that all new Azure services are launching with CLI support from day one. So if you want to try out the brand new Event Grid or managed Kubernetes service, the CLI commands are already available to get you up and running really quickly.

Perhaps you’re wondering, “why would I bother with the Azure CLI if I’m on Windows?” Well, in one sense, pretty much everything it can do can also be accomplished with PowerShell, so if you’re a big PowerShell fan, there’s nothing forcing you to use the CLI. But one thing I really love about the CLI is its ease of use and discoverability.

With the -h flag, I can drill down into commands and discover all the capabilities. az -h tells me the major subgroups. az webapp -h will show me the subcommands and subgroups for webapps. az webapp create -h will help me understand what arguments and options I need to specify to create a new webapp. So I find quite often I will use the Azure CLI in preference to the Azure PowerShell cmdlets even when I’m working inside PowerShell (which is actually my preferred shell).
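
In other words, you can work your way down from the top level to the exact command you need:

az -h                  # list the major command groups
az webapp -h           # show the subgroups and commands for web apps
az webapp create -h    # show the arguments and options for creating a web app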

The Azure CLI obviously also has the advantage of being able to run in your shell on macOS or Linux, and you should also make sure you take a bit of time to explore the power of its --query argument – a really cool feature.
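
As a small taster of what --query can do (it takes a JMESPath expression that is evaluated against the JSON output), here's one way you might list just the name and hostname of each running web app:

# project a couple of properties from each running web app into a table
az webapp list --query "[?state=='Running'].{name:name, host:defaultHostName}" -o table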

So, if you’re a Pluralsight subscriber, why not bookmark my new Azure CLI: Getting Started course and learn how to streamline and automate the management of your own Azure resources? And if you’re not, why not get yourself signed up for a free trial?

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


So far in my tutorial series on the Azure CLI, I’ve been assuming that you are logging in directly with the az login command. This will direct you to visit a webpage, enter a code and log in to Azure yourself. Now that’s fine if you’re running commands or scripts directly from a command prompt, but what if you want to automate a task? Say you want to set up a scheduled task to shut down a Virtual Machine at midnight, or want to automate a resource management task from an Azure Function.

What you need is a service principal. A service principal is an identity your application can use to log in and access Azure resources. In this post I’ll show you how we can create a service principal from the CLI, which can be used not only to run CLI commands from an automated process, but also to use the Azure SDK for your programming language of choice (e.g. C#, Python, Java, Ruby, Node.js etc).

Create an Azure Active Directory application

Before you create a service principal, you need to create an “application” in Azure Active Directory. You can think of this as an identity for the application that needs access to your Azure resources.

You can create an AD Application with the Azure CLI, but do make sure you’ve selected the right subscription with az account set first, so that the application ends up in the correct Active Directory.

Here we select the subscription, and then use az ad app create to create an application. The only parameter that really matters is --display-name, but we are required to provide a --homepage and --identifier-uris parameter, so we just make up suitable values for them (they don’t have to be reachable URIs).

# select correct subscription
az account set -s "my subscription name"

# a name for our azure ad app
appName="ServicePrincipalDemo1"

# create an Azure AD app
az ad app create \
    --display-name $appName \
    --homepage "http://localhost/$appName" \
    --identifier-uris http://localhost/$appName

Now we need the app id from the output of az ad app create. You can get it again by searching for the AD app with the display name you chose like this:

# get the app id
appId=$(az ad app list --display-name $appName --query [].appId -o tsv)

Create the Service Principal

Now that we have an AD application, we can create our service principal with az ad sp create-for-rbac (RBAC stands for role based access control).

We need to supply an application id and password, so we could create it like this:

# choose a password for our service principal
spPassword="MyS3cureP@ssw0rd1!"

# create a service principal
az ad sp create-for-rbac --name $appId --password $spPassword 

This would create a service principal that has contributor access to the currently selected subscription.

However, it’s a good idea to restrict permissions to allow access only to the minimal set of resources that the target application needs to use. So you can set the --role to reader instead of contributor if you only need read access. Or you can use the --scope argument to limit the scope to only allow management of a single resource group.

So here’s how we could create a service principal that has contributor level access to just a single resource group (the resource group should already exist).

spPassword="MyServicePrincipal1!"
subscriptionId=$(az account show --query id -o tsv)
resourceGroup="MyResourceGroup" # must exist

az ad sp create-for-rbac --name $appId --password $spPassword \
                --role contributor \
                --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroup

Get the Service Principal App Id

Once you've created your service principal, you will need to get its app id (not to be confused with the app id of the AD application). You can get this from the output of the az ad sp create-for-rbac command, or you can get hold of it again by searching for service principals whose display name is the app id of the AD application like this:

# get the app id of the service principal
servicePrincipalAppId=$(az ad sp list --display-name $appId --query "[].appId" -o tsv)

Configuring Access

If you need to do anything more complex with the roles and scopes for your service principal, then the az role assignment group of commands will help you do this. If you created your service principal without specifying a role and scope, here’s how to delete the existing one, and add a new one:

# view the default role assignment (it will be contributor access to the whole subscription)
az role assignment list --assignee $servicePrincipalAppId

# get the id of that default assignment
roleId=$(az role assignment list --assignee $servicePrincipalAppId --query "[].id" -o tsv)

# delete that role assignment
az role assignment delete --ids $roleId

# get our subscriptionId
subscriptionId=$(az account show --query id -o tsv)

# the resource group we will allow ourselves to access (must exist)
resourceGroup="MyResourceGroup"

# grant contributor access just to this resource group only
az role assignment create --assignee $servicePrincipalAppId \
        --role "contributor" \
        --scope "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup"

# n.b. to see this assignment in the output of az role assignment list, you need the --all flag:
az role assignment list --assignee $servicePrincipalAppId --all

Testing it Out

So we’ve created a service principal, but does it work? Well, before we can test it out, we need one more bit of information: we need the tenant id associated with our account. We can get that like this:

# get the tenant id
tenantId=$(az account show --query tenantId -o tsv)

And now let’s log out with az logout, and then log back in using our service principal, by using az login --service-principal and passing in the app id of the service principal, the password we chose, and the tenant id:

# now let's logout
az logout

# and log back in with the service principal
az login --service-principal -u $servicePrincipalAppId \
         --password $spPassword --tenant $tenantId

The proof that this all worked is in what we can do. If I ask to list all resource groups, I should just see the one I’m allowed access to:

# what groups can we see? should be just one:
az group list -o table

And if I try to create a new resource group, that will fail because I don’t have permission:

# can we create a new resource group? should be denied:
az group create -n NotAllowed -l westeurope

However, if I want to create a new VM inside the one resource group I have been granted contributor access to, that will be allowed:

# but we should be able to create a VM:
vmName="ExampleVm"
adminPassword="S3cureAdminP@ssw0rd!" # choose your own strong password

az vm create \
    --resource-group $resourceGroup \
    --name $vmName \
    --image win2016datacenter \
    --admin-username azureuser \
    --admin-password $adminPassword \
    --size Basic_A1 \
    --use-unmanaged-disk \
    --storage-sku Standard_LRS

# check its running state
az vm show -d -g $resourceGroup -n $vmName --query "powerState" -o tsv
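
Incidentally, if you're just experimenting, you might want to deallocate that test VM once you're done so it stops incurring compute charges. Our service principal has contributor rights on the resource group, so this is allowed:

# stop and deallocate the test VM
az vm deallocate -g $resourceGroup -n $vmName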

Using the Service Principal from C#

The service principal can be used for more than just logging into the Azure CLI. It can be used alongside the Azure SDK for .NET (or indeed with the SDK for your favourite language).

For example, here’s the code for a simple Azure Function that runs on a schedule at midnight every night. It uses the service principal login details (read from app settings), then attempts to find a Virtual Machine with a specific name in a specific resource group (obviously this should be the resource group we granted contributor access to). And then, if the machine is not in a stopped (deallocated) state, it attempts to put it into that state in order to save money.

#r "System.Configuration"
#r "System.Security"

using System;
using System.Configuration;
using Microsoft.Azure.Management.AppService.Fluent;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Authentication;
using Microsoft.Azure.Management.ResourceManager.Fluent;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"VM Shutdown function executed at : {DateTime.Now}");
    var sp = new ServicePrincipalLoginInformation();
    sp.ClientId = ConfigurationManager.AppSettings["SERVICE_PRINCIPAL"];
    sp.ClientSecret = ConfigurationManager.AppSettings["SERVICE_PRINCIPAL_SECRET"];
    var tenantId = ConfigurationManager.AppSettings["TENANT_ID"];
    var resourceGroupName = ConfigurationManager.AppSettings["VM_RESOURCE_GROUP"];
    var virtualMachineName = ConfigurationManager.AppSettings["VM_NAME"];
    
    var creds = new AzureCredentials(sp, tenantId, AzureEnvironment.AzureGlobalCloud);
    IAzure azure = Azure.Authenticate(creds).WithDefaultSubscription();
    
    var vm = azure.VirtualMachines.GetByResourceGroup(resourceGroupName,virtualMachineName);
    log.Info($"{vm.Name} is {vm.PowerState}");
    
    if (vm.PowerState.Value != "PowerState/deallocated")
    {
        log.Info("Shutting it down:");
        vm.Deallocate();
        log.Info("Done");
    }
}
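
By the way, those app settings (SERVICE_PRINCIPAL, SERVICE_PRINCIPAL_SECRET, TENANT_ID, VM_RESOURCE_GROUP and VM_NAME) can themselves be set with the Azure CLI. Here's a sketch, assuming a function app called myvmshutdownfunc in a resource group called MyFunctionsGroup (both hypothetical names), and reusing the variables from earlier:

# store the service principal details as app settings on the function app (app and group names are examples)
az functionapp config appsettings set -n myvmshutdownfunc -g MyFunctionsGroup \
    --settings "SERVICE_PRINCIPAL=$servicePrincipalAppId" \
               "SERVICE_PRINCIPAL_SECRET=$spPassword" \
               "TENANT_ID=$tenantId" \
               "VM_RESOURCE_GROUP=$resourceGroup" \
               "VM_NAME=$vmName"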

Learning More

Hopefully that’s enough to get you started creating and using your own service principals. For more details on different ways to create a service principal, check out this tutorial on the official docs site.

For previous entries in my Azure CLI tutorial series:

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


So far in my tutorial series on the Azure CLI, I’ve shown you how easy it is to automate the creation of all kinds of Azure resources such as Virtual Machines, Web Apps and SQL Databases, Storage Accounts and Blobs, and Queues.

And so you might think that I would recommend you write a script to automate the deployment of your applications with a series of Azure CLI commands.

But whilst that certainly is possible, if you have a cloud-based application that you are going to deploy regularly (e.g. you’re developing an app and want continuous deployment set up), then it’s a good idea to put the effort into creating an ARM template, and using that as your deployment mechanism.

Benefits of using Azure Resource Manager templates

ARM templates offer several benefits over writing your own resource creation scripts.

They’re declarative, rather than imperative, keeping the focus on what should be in a resource group instead of what steps need to be taken to create everything in the group.

They’re parameterizable, allowing you to easily deploy different variants of your resource group. For example, you could deploy it once to North Europe and once to West Europe. Or you could use a premium tier database for production and a basic tier database for testing.

They support more efficient deployment, as the Azure Resource Manager can intelligently identify resources that could get created in parallel.

And they support incremental deployment, making it easy to add a new resource to an existing group simply by changing your template and redeploying it.

So let’s see how we can deploy ARM templates with the Azure CLI.

Generating an ARM Template

The Azure CLI has a command that can take any existing resource group and generate an ARM template to represent it. The command is simply az group export, passing in the name of the resource group.

az group export -n MyResourceGroup

If you try this though, you may be in for disappointment. Currently (in version 2.20), the Azure CLI has a bug that means that if there are any warnings in the template then the CLI will report them as errors and not emit a template at all.

Hopefully this issue will be fixed soon, but don’t worry, there’s an easy enough workaround. Simply navigate to the resource group in the Azure portal and select the “Automation Script” option, which will generate the same thing.

The template it generates will try to be helpful by parameterizing the names of various resources, but often you’ll find that the template is overly verbose and doesn’t parameterize the bits you might want it to, such as the pricing tier of an app service plan, for example.

So I tend to view this exported template simply as a helpful guide, and also make use of the Visual Studio tooling to create an Azure Resource Group project, which makes it easier to navigate the template, and add new resources to it.

I also absolutely love the Azure Quickstart Templates GitHub repository, which contains hundreds of sample ARM templates showing you how to achieve all kinds of common tasks.

So between the output of az group export, the examples at GitHub and the help of Visual Studio, you ought to be able to create an ARM template that describes exactly how you want your resource group set up, and has parameters that let you change the things that you want to vary between deployments.
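
One extra tip: before you attempt a real deployment, you can ask Azure Resource Manager to check a template over with az group deployment validate. Here's a sketch, assuming a resource group called MyResourceGroup and local azuredeploy.json and azuredeploy.parameters.json files (the file names are just examples):

# validate the template and parameters against a resource group without deploying anything
az group deployment validate \
    --resource-group MyResourceGroup \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json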

Deploying an Online Template

A great way to get started is to actually try to deploy one of the templates in the Azure Quickstart Templates repository. For this demo, I’ve picked out the WordPress on Docker example, which deploys a single Ubuntu VM, installs Docker on it and then runs a MySQL and WordPress container, giving you a very simple way to get WordPress up and running.

To deploy it we first need to create a resource group (with az group create), and then we use az group deployment create, passing in the target resource group and URI of the template. We can also give our deployment a name (I’ve chosen “TestDeployment”), and customize the parameters.

# create a resource group
resourceGroup="armtest"
location="westeurope"
az group create -l $location -n $resourceGroup

# the template we will deploy
templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/docker-wordpress-mysql/azuredeploy.json"

# deploy, specifying all template parameters directly
az group deployment create \
    --name TestDeployment \
    --resource-group $resourceGroup \
    --template-uri $templateUri \
    --parameters 'newStorageAccountName=myvhds96' \
                 'mysqlPassword=MyS3cureP@ssw0rd!' \
                 'adminUsername=mheath' \
                 'adminPassword=MyS3cureP@ssw0rd!' \
                 'dnsNameForPublicIP=mypublicip72'

You’ll see that I’m passing multiple parameters with the --parameters argument. You have to supply values for all template parameters that don’t have a default specified. This template requires five arguments, and two of them will form part of a domain name, so you will need to provide unique values across Azure for this deployment to succeed.

Once the deployment has completed we can explore what’s in the resource group and find the domain name to visit to try out our WordPress site with the following commands:

# see what's in the group we just created
az resource list -g $resourceGroup -o table

# find out the domain name we can access this from
az network public-ip list -g $resourceGroup --query "[0].dnsSettings.fqdn" -o tsv

By the way, if you try this and find that the site isn’t working, it may be that the WordPress Docker container stopped because the MySQL Docker container failed to start quickly enough. That’s easily fixed by SSHing into the VM, finding out the id of the WordPress container, and restarting it if it has exited. Here are a few commands to get you started with that:

# if it doesn't work, SSH in:
ssh mheath@mypublicip72.westeurope.cloudapp.azure.com # i.e. adminUsername@dnsNameForPublicIP.region.cloudapp.azure.com

# see if the wordpress container has exited
docker ps --all

# restart the wordpress container (replace e03 with the id of the exited container)
docker start e03

Deploying a Local Template

As well as deploying from an online template, you can provide the az group deployment create command with the path of a local template file.

In this example I have a local ARM template called MySite.json and a local parameter file called MySite.parameters.json. I can point at the template with the --template-file argument and at the parameters file using the special @ prefix on the filename. Notice that I can still provide additional parameter overrides if I want.

# create a resource group for this deployment
resourceGroup="templatedeploytest"
az group create -n $resourceGroup -l westeurope

# perform the initial deployment
deploymentName="MyDeployment"
sqlPassword='MyS3cureSqlP@ssw0rd!' # choose your own strong password
az group deployment create -g $resourceGroup -n $deploymentName \
        --template-file MySite.json \
        --parameters @MySite.parameters.json \
        --parameters "administratorLoginPassword=$sqlPassword"

Updating an existing deployment

Once this deployment completes, we can easily update it by deploying an updated version of our ARM template to the same resource group, over the top of the existing deployment. It’s pretty straightforward, but there is one key argument to pay attention to. If --mode is set to Incremental, the deployment will be allowed to create any missing resources but not to delete anything. If it’s set to Complete, then any resources in the target resource group that are not specified in the ARM template will get deleted. Obviously Complete mode is more dangerous, but has the benefit of not leaving old unused infrastructure around wasting money.

az group deployment create -g $resourceGroup -n $deploymentName \
        --template-file MySite.json \
        --parameters @MySite.parameters.json \
        --parameters "administratorLoginPassword=$sqlPassword" \
        --mode Complete
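
You can also use the CLI to review how your deployments went, since the deployment history for the resource group is queryable. For example, reusing the variables from the script above:

# list all deployments made to this resource group
az group deployment list -g $resourceGroup -o table

# check the provisioning state of this particular deployment
az group deployment show -g $resourceGroup -n $deploymentName \
        --query "properties.provisioningState" -o tsv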

Summary

So don’t think of the Azure CLI as an alternative to using Azure Resource Manager Templates, but as a useful tool for deploying them. In my opinion, whenever you have an application you’re planning to deploy more than once, it’s worth the time investment to generate an ARM template for it.

For previous entries in my Azure CLI tutorial series:

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.