In this post I’ll continue my series on the Azure CLI with a look at how you can manage storage queues and messages. Previous posts in the series are here:

I’m going to assume that we’ve already created a storage account and put the connection string into an environment variable called AZURE_STORAGE_CONNECTION_STRING to save us passing the --connection-string argument to each command. Read my previous post for more details on how to do this.

Creating Queues

To check if a queue exists:

$queueName = "myqueue"
az storage queue exists -n $queueName

And to actually create it (safe to call if the queue already exists):

az storage queue create -n $queueName

Posting Messages to a Queue

Now, normally you wouldn't be posting messages to queues from the CLI, but it can be useful for testing purposes or to trigger some kind of maintenance task. Here's how you can post a message to a queue:

az storage message put --content "Hello from CLI" -q $queueName

You can also supply a time-to-live duration with the --time-to-live argument, which should be specified in seconds.
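For example, to post a message that expires after an hour:

```shell
# post a message with a one hour time to live (specified in seconds)
az storage message put --content "Hello again" -q $queueName --time-to-live 3600
```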

Retrieving Messages from a Queue

Receiving messages from the queue is a two-step process. First you use az storage message get to mark a message on the queue as being locked for a set period of time (the “visibility timeout”). You then process the message and call az storage message delete to delete the message from the queue. If you need more time, you can use az storage message update to increase the visibility timeout for your message. If you fail to call either delete or update before the visibility timeout expires, your message becomes visible again for someone else to receive.

Here’s how we can get a message from the queue with a two minute visibility timeout:

az storage message get -q $queueName --visibility-timeout 120

The output from the get command includes the id and pop receipt of the message. These are important, as they need to be used in the call to delete (or update). Here's how we can delete a specific message:

az storage message delete --id "2a1d4311-c952-4199-94ac-801930da31c7" \
        --pop-receipt "AgAAAAMAAAAAAAAAuSuJ51RD0wE=" -q $queueName
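And if we need more time to process, here's a sketch of extending the visibility timeout with az storage message update (note that a successful update returns a new pop receipt to use in any subsequent calls):

```shell
# extend the lock on the message by a further two minutes
az storage message update --id "2a1d4311-c952-4199-94ac-801930da31c7" \
        --pop-receipt "AgAAAAMAAAAAAAAAuSuJ51RD0wE=" \
        -q $queueName --visibility-timeout 120
```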

Again, it’s unlikely you’d often need to write code to read messages from queues from the command line, but it might be useful for diagnostic purposes if you wanted to write a script to resubmit certain messages from a poison message queue for example.

Learning More

As usual the best place to learn more is the official docs – in particular the az storage queue and az storage message groups of commands.


I’ve been exploring the capabilities of the Azure CLI recently and today I’m going to look at working with blob storage. To catch up on previous instalments check out these articles:

Creating a Storage Account

The first thing we want to do is create a storage account. We need to choose a “sku” – whether we need geo-redundant storage or not. I’m just creating the cheaper LRS tier in this example. I’m also making a new resource group first to put the storage account in.


# create our resource group
az group create -n $resourceGroup -l $location

# create a storage account
az storage account create -n $storageAccount -g $resourceGroup -l $location --sku Standard_LRS

Next, we need to get the connection string, which is needed for all operations on blobs and containers:

$connectionString=az storage account show-connection-string -n $storageAccount -g $resourceGroup --query connectionString -o tsv

A convenient feature of the CLI is that you can set the connection string as an environment variable to save having to pass the --connection-string parameter to every subsequent command.

Here’s how we do that in PowerShell:


or if you’re in a bash shell:


Creating Containers

Now we have a storage account, we can create some containers. The --public-access flag allows us to set their privacy level. The default is off for a private container, or you can set it to blob for public access to blobs. There's also a container level, which additionally allows people to list the contents of the container.

I’ll create a public and a private container:

az storage container create -n "public" --public-access blob
az storage container create -n "private" --public-access off

Uploading Files

Uploading a file into your container is easy with the az storage blob upload command. You simply specify the name of the file to upload, the container to upload it into, and the name of the blob.

Here’s uploading a file into the public container and getting the URL from which it can be accessed:

# create a demo file
echo "Hello World" > example.txt

$blobName = "folder/public.txt"

# upload the demo file to the public container
az storage blob upload -c "public" -f "example.txt" -n $blobName

# get the URL of the blob
az storage blob url -c "public" -n $blobName -o tsv

If we upload a file to the private container, we’ll need to also generate a SAS token in order to download it via a URL. We do that with az storage blob generate-sas, passing in an expiry date and the access permissions (in our case, we just need r for read access).

$blobName = "folder/private.txt"

# upload the demo file to a private container
az storage blob upload -c "private" -f "example.txt" -n $blobName

# get the blob URL
$url = az storage blob url -c "private" -n $blobName -o tsv

# generate a read-only SAS token
$sas = az storage blob generate-sas -c "private" -n $blobName `
    --permissions r -o tsv `
    --expiry 2017-10-15T17:00Z

# launch a browser to access the file
Start-Process "$($url)?$($sas)"

More Blob Operations

Of course, there’s much more you can do with blobs from the Azure CLI, and you can explore the full range of options with az storage blob –h. You’ll see that we can easily download or delete blobs, snapshot them, as well as manage their metadata or even work with leases.

Of course, for ad-hoc storage tasks, Azure Storage Explorer is still a great tool, but if as part of a deployment or maintenance task you need to upload or download blobs from containers, the CLI is a great way to automate that process and ensure it is reliable and repeatable.


I’ve been really impressed with the Azure CLI, and have been using it to automate all kinds of things recently. Here’s some instructions on how you can create and configure an Azure Virtual Machine using the CLI.

1. Pick an image and size

If you’re going to create a Virtual Machine, you need to do so from a base image. Azure has hundreds to choose from, and so you can use the az vm image list command with the --all flag specified in order to find a suitable one.

For example if I want to find all VM images with elasticsearch in the “offer” name I can use:

az vm image list --all -f elasticsearch -o table

Or if I know I want to use the VS-2017 “sku” for a Visual Studio 2017 VM I can use:

az vm image list -s VS-2017 --all -o table

You’ll also want to decide what VM size you want. There are loads available, but not necessarily all in in every region, so you can check what sizes are available in your location with the following command:

az vm list-sizes --location westeurope -o table

2. Create the VM

Resources which share a common lifetime should be in the same resource group. It makes sense to create your VM in its own resource group, so that when you’re done you can clear it up by deleting the resource group. So let’s create a resource group, using a variable containing its name for convenience in future commands:

az group create --name $ResourceGroupName --location westeurope

Now we’re ready to create our VM in this resource group. There are loads of parameters to az vm create, which you can explore with the az vm create -h command. The good news is that lots of sensible defaults are picked for you so you don’t have to provide values for everything. However, it will choose for you to have a reasonably powerful VM with a managed disk by default, so if you want to keep costs down you might want to supply some money saving parameters like I show below.

In my example I’m using the Windows 2016 data center VM image and supplying my own username and password. I’m going for a smaller VM size and the cheaper option of using unmanaged disks.

# choose a password that meets Azure's complexity requirements
AdminPassword="<your-strong-admin-password>"

az vm create \
    --resource-group $ResourceGroupName \
    --name $VmName \
    --image win2016datacenter \
    --admin-username azureuser \
    --admin-password $AdminPassword \
    --size Basic_A1 \
    --use-unmanaged-disk \
    --storage-sku Standard_LRS

This will take a few minutes to complete, and it’s created more than just a VM. There’s a network interface, a network security group, a public IP address and a disk (or storage account for the VHD if you chose unmanaged disk).

You can see everything that got created with:

az resource list -g $ResourceGroupName -o table

3. Configure the VM

For this demo, I’m going to show how we can configure the VM as a simple web server. First we need to ensure port 80 is open. We can do that easily with:

az vm open-port --port 80 --resource-group $ResourceGroupName --name $VmName

The next step is to install IIS and set up our website. Here's a simple example of a PowerShell script I might want to run on the VM in order to get the website set up. It installs the IIS Windows feature, then downloads a simple web page from a public URL (this could easily be a secure download of a zip with a Shared Access Signature), then deletes the default website and creates a new website pointing at our custom web page.

Install-WindowsFeature -Name Web-Server
$sitePath = "c:\example-site"
$output = "$sitePath\index.html"
$url = "https://mystorage.blob.core.windows.net/public/index.html"
New-Item -ItemType Directory $sitePath -Force
Invoke-WebRequest -Uri $url -OutFile $output
Remove-Website -Name 'Default Web Site'
New-Website -Name 'example-site' -Port 80 -PhysicalPath $sitePath

But how can we get this script to run on our virtual machine? Well, we can use the Custom Script Extension. This allows us to either provide a simple command to run, or, for something more complex like this, a URI for a script to download and then run.

Here’s how we can invoke the custom script on our VM:

az vm extension set \
--publisher Microsoft.Compute \
--version 1.8 \
--name CustomScriptExtension \
--vm-name $VmName \
--resource-group $ResourceGroupName \
--settings '{"fileUris":["https://my-assets.blob.core.windows.net/public/SetupSimpleSite.ps1"],"commandToExecute":"powershell.exe -ExecutionPolicy Unrestricted -file SetupSimpleSite.ps1"}'

By the way, if you’re doing this from a PowerShell prompt instead of a bash prompt, getting the quotes escaped correctly in the settings parameter can be a real pain. I find its easier to just pass the path of a file containing the JSON like this

az vm extension set `
    --publisher Microsoft.Compute `
    --version 1.8 `
    --name CustomScriptExtension `
    --vm-name $VmName `
    --resource-group $ResourceGroupName `
    --settings extensionSettings.json
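where extensionSettings.json simply contains the same JSON we passed inline in the bash example:

```json
{
  "fileUris": ["https://my-assets.blob.core.windows.net/public/SetupSimpleSite.ps1"],
  "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -file SetupSimpleSite.ps1"
}
```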

This will take a minute or two (and will be a bit slower if you chose the cheap options like I did), but assuming it completes successfully we now have our fully configured VM. To check it worked, we can visit its public IP address in a web browser, but in case we forgot what that was, we can query for it with:

az vm show -d -g $ResourceGroupName -n $VmName --query "publicIps" -o tsv

Notice I like to use the tab separated output for this when I’m getting just a single value, as it allows me to easily store the result in a variable. You can learn more about this in my blog on using queries with the Azure CLI.

Tip: if your custom script fails for some reason, you can troubleshoot by RDPing into the machine and looking at the logs in the “C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension” folder.

4. Stopping or Deleting the VM

Obviously, once you’ve created your Virtual Machine you’re going to be paying for it, and that can be very expensive if you chose a powerful VM size. You might think that you could save money by stopping it with az vm stop, but you’d be wrong. You still pay for stopped VMs in Azure – they have to be “stopped deallocated” in order for you not to be billed.

Fortunately, there’s an easy way to put our VM into a stopped deallocated state with the Azure CLI:

az vm deallocate -n $VmName -g $ResourceGroupName

And we can check it worked with:

az vm show -d -g $ResourceGroupName -n $VmName --query "powerState" -o tsv

Of course, when you’re ready to use it again it’s a simple matter of az vm start to get it going again, and of course the public IP address will be different as it was relinquished when we deallocated the VM.

Finally, when you’re all done with the VM, you’ll need to clear up after yourself, and there’s a nice easy way to do that with az group delete, passing the --yes flag if you don’t want to be asked “are you sure”. Of course, only run this if you are sure – there’s no undo!

az group delete --name $ResourceGroupName --yes


The Azure CLI is a great tool to have at your disposal for all kinds of VM management tasks. There are loads more great examples on the Azure CLI docs site for both Windows and Linux VMs.


If you’re developing an Azure based application, chances are you make frequent visits to the Azure portal. Now hopefully you don’t use it exclusively – at the very least your deployment should be automated (ideally with ARM templates) so you have a reliable and repeatable way to provision your application.

However, if you’re like me, you often find yourself needing to dive into the portal to check the status or configuration value of some resource or other. And if you have a lot of resources in your subscription(s) it can take a lot of clicking around, changing subscription, and filtering by resource type and group before you find the information you’re after.

This is where the Azure CLI can really help you out.

This week I kept needing to RDP into some virtual machines. So I needed to get the public IP address, the port numbers from a load balancer, and the password from a key vault secret. After manually doing this a few times, it dawned on me that the Azure CLI would be ideal to automate this.

To get the public IP address, I just needed to use the az network public-ip show command, and query for just the ipAddress property with a simple --query parameter. For commands like this returning a single value, I use the tab separated output option, as there is no need for it to be formatted as JSON.

az network public-ip show -n mypublicip -g myresourcegroup --query ipAddress -o tsv

Getting the value of the secret from the keyvault is just as simple. It’s az keyvault secret show, and asking for the value property:

az keyvault secret show --vault-name mykeyvault --name mysecret --query value -o tsv

The only slightly tricky command was asking the load-balancer for the port values. The inboundNatRules property of the output from az network lb show is a JSON array, and I wanted to pick out just the frontendPort properties from each object in that array. The JMESPath syntax for that is inboundNatRules[].frontendPort. And with the tsv output option, I’ll get each port number on a separate line:

az network lb show -n myloadbalancer -g myresourcegroup --query "inboundNatRules[].frontendPort" -o tsv

And so with three simple Azure CLI commands, I’ve automated the task of getting the RDP connection settings from the portal. It may not seem like a big deal, but the time saved by automating these simple repetitive tasks can add up over time.

What task do you keep visiting the Azure portal for that you could automate?


One of the great things about Azure is that there is an incredible amount of choice about how you go about managing your resources. Whether you want to create a new web app, configure a network security group, or stop a virtual machine, there’s plenty of choices for how to accomplish that.

Here’s a quick summary of what your options are, and why you might choose them:

1. Use the Portal

The first option is actually two, because there’s a “new” portal (portal.azure.com) and the “classic portal” (manage.windowsazure.com). Thankfully, there’s very little reason to visit the classic portal anymore – almost everything can be done in the new portal.

The Azure portal is a great choice when you just want to explore. You can look around the resources you have deployed, navigate into their various property pages, and quite often you'll discover settings to configure features you didn't even know were there. Here are just a few of the things you can set up for a web app, for example.


The portal’s also great for things like browsing the list of available images for Virtual machines.

But! While the portal is great for experimenting and exploring, it's not the best choice for deploying an application, or for any task you need to perform repeatedly. For that sort of thing, it's much better to use a technique that can be automated.

And that’s where the other options come in.

2. Use Azure PowerShell

Azure PowerShell is a comprehensive collection of PowerShell cmdlets that let you do pretty much anything with Azure. The commands use the conventional PowerShell prefixes of New-, Get-, Set- and Remove- for create, read, update and delete operations.

For example, here’s a sample Azure PowerShell script that creates a web app and deploys code from GitHub

$location="West Europe"

# Create a resource group.
New-AzureRmResourceGroup -Name myResourceGroup -Location $location

# Create an App Service plan in Free tier.
New-AzureRmAppServicePlan -Name $webappname -Location $location -ResourceGroupName myResourceGroup -Tier Free

# Create a web app.
New-AzureRmWebApp -Name $webappname -Location $location -AppServicePlan $webappname -ResourceGroupName myResourceGroup

# Configure GitHub deployment from your GitHub repo and deploy once.
$PropertiesObject = @{
    repoUrl = "$gitrepo";
    branch = "master";
    isManualIntegration = "true";
}
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName myResourceGroup -ResourceType Microsoft.Web/sites/sourcecontrols -ResourceName $webappname/web -ApiVersion 2015-08-01 -Force

All the major Azure services have PowerShell cmdlets available, so pretty much anything you can do in the portal can be automated with it. For more examples of Azure PowerShell in action, check out Elton Stoneman’s Pluralsight course Managing IaaS with PowerShell.

Of course, not all developers know PowerShell, and it does have its quirks, but if you already know some PowerShell or are willing to learn, this is a great way to automate your resource management. And it's not necessarily a Windows-only thing now – Azure PowerShell is available on Mac and Linux (currently in beta).

3. Use the Azure CLI

Alternatively, if you’re a Max or Linux user, or just prefer a bash shell to PowerShell, the Azure CLI 2.0 might be for you. It’s a fully cross-platform command-line experience, and despite being newer, already has capabilities on a par with Azure PowerShell.

The commands typically take the form az command subcommand options which makes the API very easy to explore. Just type az and get a list of the main commands. Type az webapp and see what you can do with webapps. There’s even an interactive mode which gives you a sort of intellisense experience at the command prompt!
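The interactive mode is just another command away (you may be prompted to install it the first time you run it):

```shell
az interactive
```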

By default the commands will emit JSON, although a few other output formats are supported. To give a feel for how it compares to PowerShell, here's an Azure CLI bash script to perform the same task of creating a web app and deploying code from GitHub:


# Replace these with a unique web app name and a public GitHub repo URL
webappname="<your-unique-webapp-name>"
gitrepo="<your-public-github-repo-url>"

# Create a resource group.
az group create --location westeurope --name myResourceGroup

# Create an App Service plan in `FREE` tier.
az appservice plan create --name $webappname --resource-group myResourceGroup --sku FREE

# Create a web app.
az webapp create --name $webappname --resource-group myResourceGroup --plan $webappname

# Deploy code from a public GitHub repository. 
az webapp deployment source config --name $webappname --resource-group myResourceGroup \
--repo-url $gitrepo --branch master --manual-integration

Obviously the capabilities and use cases for the Azure CLI and Azure PowerShell are very similar, so it's a hard call which one to use. The Azure CLI is probably the better choice if you prefer bash to PowerShell or you're working cross-platform, and it also has the benefit of being open source, so it is picking up plenty of community-contributed features.

4. Use the Azure Management Libraries for .NET

The Azure management libraries for .NET allow you to write code in C# (or your favourite .NET language) to manage your Azure resources. This is a great choice if you're more comfortable with C# than a scripting language, or when there is a lot of business logic surrounding the task you want to automate.

The libraries are available on NuGet and support a fluent interface allowing you to write code looking like this:

var sql = azure.SqlServers.Define(sqlServerName)
    .WithRegion(Region.EuropeWest)
    .WithNewResourceGroup(rgName)
    .WithAdministratorLogin(adminLogin)
    .WithAdministratorPassword(adminPassword)
    .Create();

As you can see, it's very easy to work with. The downside is that it doesn't have the same comprehensive coverage of Azure features and services that the CLI and PowerShell have, although hopefully that will change in the future (these libraries are open source too).

You will also need to go through the somewhat convoluted process of creating a service principal (instructions here) to create credentials your code can log in with.
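The service principal itself can be created with a single Azure CLI command (the display name here is just an example):

```shell
# create a service principal and output the credentials your code can log in with
az ad sp create-for-rbac --name "MyManagementApp"
```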

But if you wanted to create say an Azure Function that ran on a timer to start or stop virtual machines, this would be a great choice.

5. … or for your favourite language

If you’re a Java programmer, there’s an Azure Java API. There’s also a node.js Azure SDK, a set of Azure Python libraries, and an Azure client library for Ruby. Like with the .NET libraries, you’ll want to create a service principal to use for login purposes, but then you’ll be able to manage Azure resources using your language of choice.

6. Use the Azure REST API directly

So far we’ve discussed five ways to manage your Azure resources – the portal, PowerShell, the CLI and the SDKs for a whole host of popular programming languages. But what they all have in common is that under the hood, they’re making calls to the Azure REST API.

What this means is that you can access the full range of capabilities of Azure, using any language that can make HTTP requests, by calling the REST API directly yourself. Obviously it's a little less user-friendly than the other options we've discussed so far, but the documentation is quite comprehensive, so you should be able to work out the correct headers and payload for each request.

Here’s a sample request to the REST API:

PUT /subscriptions/03f09293-ce69-483a-a092-d06ea46dfb8c/resourcegroups/ExampleResourceGroup?api-version=2016-02-01  HTTP/1.1
Authorization: Bearer <bearer-token>
Content-Length: 29
Content-Type: application/json
Host: management.azure.com

{
  "location": "West US"
}

Bonus 7 - Use ARM Templates

Finally, this isn’t really an alternative to the previous options, but should be used in conjunction with them.

For example, creating a new virtual machine often involves creating several resources – a virtual network, a public IP address, a network security group, a storage account for the disk, and the virtual machine itself. Whilst you could write a script with a separate command for each of these items, a better approach is to create a single Azure Resource Manager template, which can be deployed with a single command.

There are several benefits here – the template can be parameterized and Azure can create resources in parallel for faster deployment. Deploying a template will also only create resources that are missing in the target resource group so can greatly simplify incremental deployments.
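For example, with the Azure CLI, deploying a template is a single command (the file names here are just placeholders):

```shell
az group deployment create \
    --resource-group myResourceGroup \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json
```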

To get an idea of the sort of thing you can achieve with an ARM template, check out these samples – there are some very impressive demos, which you can easily try for yourself simply by clicking the "deploy to Azure" button for each sample, and within minutes you'll have the sample deployed. If you're worried about cost, cleaning up is really easy – just delete the resource group and everything you created is gone. And since pricing in Azure is per minute, you'll often end up paying hardly anything for a quick experimental deployment of a template.

Each of the six previous techniques we’ve discussed offer you a way to deploy using an ARM template, so whichever of those you use, once you’ve finalised what resources make up a deployment, it’s well worth your time creating a template to make deployments simple and repeatable.