Last time in my series on the Azure CLI, we saw how to create a SQL Database and connect it to a web app. This time, let's see how we can automate backing it up to a .bacpac file in blob storage, and how we can restore from a .bacpac.

Gathering Backup Parameters

To perform a backup we need details of how to connect to our SQL database and to the target storage account.

I'll assume we've already got some variables set up containing the SQL database connection information. Here are the values we used in the last demo:

sqlServerName="azclidemo"
sqlServerUsername="mheath"
sqlServerPassword='Str0ngP@ssw0rd!' # placeholder - substitute your own strong password
databaseName="SnippetsDatabase"

We also need to get hold of the connection details for the target storage account. This may well be in another resource group (and it's a good idea if it is, as that allows you to easily delete the database independently of its backups).

We need just the AccountKey part of the storage account's connection string, but the Azure CLI only gives us the whole connection string with az storage account show-connection-string. In bash, we can pipe that into grep to pick out just the AccountKey part at the end (if you're using PowerShell, you can use its string manipulation / regex features instead):

storageAccount="assetswe"
storageResourceGroup="SharedAssets"
storageConnectionString=`az storage account show-connection-string -n $storageAccount -g $storageResourceGroup --query connectionString -o tsv`

# extract just the storage key
storageKey=`echo $storageConnectionString | grep -oP "AccountKey=\K.+"`

It's also a good idea to give our backup a unique filename, which we can do with a bit more bash:

now=$(date +"%Y-%m-%d-%H-%M")
backupFileName="backup-$now.bacpac"

Performing the Database Backup

Now we’ve gathered all the necessary information, we can perform the backup with a single az sql db export command, providing details of how to connect to both the database to be backed up, and the storage account to back up to. Here we’re backing up into a bacpacs container in our storage account:

az sql db export -s $sqlServerName -n $databaseName -g $resourceGroup \
-u $sqlServerUsername -p $sqlServerPassword \
--storage-key-type StorageAccessKey --storage-key $storageKey \
--storage-uri "https://$storageAccount.blob.core.windows.net/bacpacs/$backupFileName"

If you're wondering whether it's possible to use a SAS token instead of providing the storage account key, the answer is yes you can, although when I used this command from PowerShell the SAS token wasn't always interpreted correctly, so I found the storage key option more reliable.
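
If you do want to try the SAS token route from bash, here's a rough sketch of what it might look like (the container SAS permissions and expiry date here are illustrative, not prescriptive):

# generate a SAS token for the bacpacs container (sketch)
sasToken=`az storage container generate-sas -n "bacpacs" \
    --account-name $storageAccount --account-key $storageKey \
    --permissions rw --expiry 2017-10-15T17:00Z -o tsv`

# export using the SAS token instead of the account key
az sql db export -s $sqlServerName -n $databaseName -g $resourceGroup \
    -u $sqlServerUsername -p $sqlServerPassword \
    --storage-key-type SharedAccessKey --storage-key $sasToken \
    --storage-uri "https://$storageAccount.blob.core.windows.net/bacpacs/$backupFileName"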

Restoring a Backup

One thing that may take you by surprise is that you can’t just restore over the top of an existing database. You have to create a new empty database and restore into that. Presumably this is just to stop people accidentally destroying live data.

It’s easy enough to create a new database on the same server as the original:

databaseName2="SnippetsDatabase2"
az sql db create -g $resourceGroup -s $sqlServerName -n $databaseName2 \
--service-objective Basic

And now we can use the az sql db import command to import from our .bacpac file into the new database. The arguments are almost identical to the export command:

az sql db import -s $sqlServerName -n $databaseName2 -g $resourceGroup \
-u $sqlServerUsername -p $sqlServerPassword \
--storage-key-type StorageAccessKey --storage-key $storageKey \
--storage-uri "https://$storageAccount.blob.core.windows.net/bacpacs/$backupFileName" 

Now we can generate a new connection string, and update our web app to point at the restored database:

connectionString2="Server=tcp:$sqlServerName.database.windows.net;Database=$databaseName2;User ID=$sqlServerUsername@$sqlServerName;Password=$sqlServerPassword;Trusted_Connection=False;Encrypt=True;"

az webapp config connection-string set \
    -n $appName -g $resourceGroup \
    --settings "SnippetsContext=$connectionString2" \
    --connection-string-type SQLAzure

Taking it Further

In this post we saw how to create and restore our own .bacpac files on demand, but you can also take advantage of the automatic backups that Azure SQL Database creates on your behalf. How far back these backups go depends on the pricing tier you choose, but if you want to restore from one of these, take a look at the az sql db restore command.
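
For example, a point-in-time restore to a new database might look something like this (a sketch; the destination name and timestamp are illustrative):

az sql db restore -g $resourceGroup -s $sqlServerName -n $databaseName \
    --dest-name "SnippetsDatabaseRestored" \
    --time "2017-10-14T13:00:00"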

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.

Continuing my series on the Azure CLI, today I want to show how we can create a Web App and a SQL Database and connect the two together.

Create a Web App

First up, let's create a web app. I'll create a resource group to put it in, an app service plan with az appservice plan create (using the Basic hosting tier), and then create the web app itself with az webapp create.

resourceGroup="CliWebAppDemo"
location="westeurope"
appName="azcliwebappdemo"
planName="CliWebAppDemo"

# create resource group
az group create -n $resourceGroup -l $location

# create the app service plan
# allowed sku values B1, B2, B3, D1, F1, FREE, P1, P1V2, P2, P2V2, P3, P3V2, S1, S2, S3, SHARED.
az appservice plan create -n $planName -g $resourceGroup -l $location --sku B1

# create the webapp
az webapp create -n $appName -g $resourceGroup --plan $planName

Deploy From GitHub

Next, let’s set our web app up to deploy code from GitHub. We’ll use this simple ASP.NET Core project that uses EF Core to talk to a SQL database.

Web apps support many different deployment methods, and there are also multiple ways to set up Git deployment. Here, I’m going for the simple --manual-integration option, which only re-syncs when you explicitly ask it to.

We can use az webapp deployment source config to tell our webapp where the repository is and what branch to sync from. This will cause it to download and build the code.

gitrepo="https://github.com/markheath/azure-cli-snippets"

az webapp deployment source config -n $appName -g $resourceGroup \
    --repo-url $gitrepo --branch master --manual-integration

If we’ve pushed more changes to our GitHub repository, we can trigger a re-sync with:

az webapp deployment source sync -n $appName -g $resourceGroup

Create a SQL Database

Our demo application expects to talk to a SQL database, so we need to create one. This is a two-step process. We need to create a SQL server with az sql server create, supplying a user name and password for the administrator, and then create a database with az sql db create. When we create the database, we can choose the pricing tier with the --service-objective argument.

sqlServerName="azclidemo"
sqlServerUsername="mheath"
sqlServerPassword='Str0ngP@ssw0rd!' # placeholder - substitute your own strong password

# create the SQL server
az sql server create -n $sqlServerName -g $resourceGroup \
            -l $location -u $sqlServerUsername -p $sqlServerPassword

databaseName="SnippetsDatabase"

# create the database
az sql db create -g $resourceGroup -s $sqlServerName -n $databaseName \
          --service-objective Basic

Connect the Web App to the Database

To connect our web app to the new database we just created, we need to give the web app the connection string, and we also need to create a firewall rule to allow the web app access to the SQL server, as it is locked down by default.

To find out what IP addresses our web app is using, we can run:

az webapp show -n $appName -g $resourceGroup --query "outboundIpAddresses" \
               -o tsv

What you'll find is that this returns a comma-separated list of four IP addresses, so we really ought to create a rule for each one of those. However, there is a short-cut (albeit not as secure): we can specify an IP address of 0.0.0.0, which means "allow any traffic coming from within an Azure datacenter".

This allows us to create a firewall rule like this:

az sql server firewall-rule create -g $resourceGroup -s $sqlServerName \
     -n AllowWebApp1 --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
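
Alternatively, if you'd rather grant access to just the web app's own outbound addresses, a bash loop along these lines should work (a sketch; the rule names are illustrative):

# create one firewall rule per outbound IP address of the web app
outboundIps=`az webapp show -n $appName -g $resourceGroup \
    --query "outboundIpAddresses" -o tsv`

i=0
for ip in ${outboundIps//,/ }; do
    az sql server firewall-rule create -g $resourceGroup -s $sqlServerName \
        -n "AllowWebAppIp$i" --start-ip-address $ip --end-ip-address $ip
    i=$((i+1))
done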

There doesn’t seem to be a CLI command to ask for the database connection string, but we can derive it anyway from information we already know:

connectionString="Server=tcp:$sqlServerName.database.windows.net;Database=$databaseName;User ID=$sqlServerUsername@$sqlServerName;Password=$sqlServerPassword;Trusted_Connection=False;Encrypt=True;"

And now we can provide the connection string to our web app, giving it the name “SnippetsContext” which is what my sample application is expecting:

az webapp config connection-string set \
    -n $appName -g $resourceGroup \
    --settings "SnippetsContext=$connectionString" \
    --connection-string-type SQLAzure

Now, if you've followed along with these instructions, you can visit the site at the /migrate endpoint to trigger it to run the initial migrations. Once that's done, the main site will be ready to use. You can get the base URI for the web app with the following command:

az webapp show -n $appName -g $resourceGroup --query "defaultHostName"
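
For example, we could capture the hostname and trigger the migrations in one go (a sketch, assuming curl is available):

hostName=`az webapp show -n $appName -g $resourceGroup --query "defaultHostName" -o tsv`
curl "https://$hostName/migrate"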

Summary

As you can see, it's really very simple not just to create web apps and SQL databases with the CLI, but also to configure things like deployment settings, connection strings and server firewall rules.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.

In this post I'll continue my series on the Azure CLI with a look at how you can manage storage queues and messages.

I’m going to assume that we’ve already created a storage account and put the connection string into an environment variable called AZURE_STORAGE_CONNECTION_STRING to save us passing the --connection-string argument to each command. Read my previous post for more details on how to do this.
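
As a quick recap, in bash that looks something like this (assuming $storageAccount and $resourceGroup contain the storage account and resource group names):

export AZURE_STORAGE_CONNECTION_STRING=`az storage account show-connection-string \
    -n $storageAccount -g $resourceGroup --query connectionString -o tsv`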

Creating Queues

To check if a queue exists:

$queueName = "myqueue"
az storage queue exists -n $queueName

And to actually create it (safe to call if the queue already exists):

az storage queue create -n $queueName

Posting Messages to a Queue

Normally you wouldn't be posting messages to queues from the CLI, but it can be useful for testing purposes or to trigger some kind of maintenance task. Here's how you can post a message to a queue:

az storage message put --content "Hello from CLI" -q $queueName

You can also supply a time-to-live duration with the --time-to-live argument, which should be specified in seconds.
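
For example, to post a message that expires after five minutes (300 seconds):

az storage message put --content "Expiring message" -q $queueName --time-to-live 300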

Retrieving Messages from a Queue

Receiving messages from the queue is a two-step process. First you use az storage message get to mark a message on the queue as being locked for a set period of time (the “visibility timeout”). You then process the message and call az storage message delete to delete the message from the queue. If you need more time, you can use az storage message update to increase the visibility timeout for your message. If you fail to call either delete or update before the visibility timeout expires, your message becomes visible again for someone else to receive.

Here’s how we can get a message from the queue with a two minute visibility timeout:

az storage message get -q $queueName --visibility-timeout 120

The output from the get command includes the id and pop receipt of the message. These are important, as they need to be supplied in the calls to delete (or update). Here's how we can delete a specific message:

az storage message delete --id "2a1d4311-c952-4199-94ac-801930da31c7" \
        --pop-receipt "AgAAAAMAAAAAAAAAuSuJ51RD0wE=" -q $queueName
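
Alternatively, if we'd needed more processing time, we could have extended the visibility timeout with az storage message update before the original timeout expired, reusing the same id and pop receipt (a sketch; note that update returns a fresh pop receipt for any subsequent calls):

az storage message update --id "2a1d4311-c952-4199-94ac-801930da31c7" \
        --pop-receipt "AgAAAAMAAAAAAAAAuSuJ51RD0wE=" -q $queueName \
        --visibility-timeout 300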

Again, it's unlikely you'd often need to read messages from queues at the command line, but it could be useful for diagnostic purposes, for example if you wanted to write a script to resubmit certain messages from a poison message queue.
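
For example, here's a rough sketch (in bash) of moving a single message from a hypothetical poison queue back onto the main queue; the poison queue name is illustrative:

poisonQueue="myqueue-poison"

# receive one message, capturing its id, pop receipt and content
read -r id popReceipt content <<< "$(az storage message get -q $poisonQueue \
    --query "[0].[id, popReceipt, content]" -o tsv)"

# resubmit it to the main queue, then delete it from the poison queue
az storage message put --content "$content" -q $queueName
az storage message delete --id $id --pop-receipt $popReceipt -q $poisonQueue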

Learning More

As usual the best place to learn more is the official docs – in particular the az storage queue and az storage message groups of commands.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.

I've been exploring the capabilities of the Azure CLI recently, and today I'm going to look at working with blob storage.

Creating a Storage Account

The first thing we want to do is create a storage account. We need to choose a “sku” – whether we need geo-redundant storage or not. I’m just creating the cheaper LRS tier in this example. I’m also making a new resource group first to put the storage account in.

$resourceGroup="MyStorageResourceGroup"
$location="westeurope"
$storageAccount="mystorageaccount"

# create our resource group
az group create -n $resourceGroup -l $location

# create a storage account
az storage account create -n $storageAccount -g $resourceGroup -l $location --sku Standard_LRS

Next, we need to get the connection string, which is needed for all operations on blobs and containers:

$connectionString=az storage account show-connection-string -n $storageAccount -g $resourceGroup --query connectionString -o tsv

A convenient feature of the CLI is that you can set the connection string as an environment variable to save having to pass the --connection-string parameter to every subsequent command.

Here’s how we do that in PowerShell:

$env:AZURE_STORAGE_CONNECTION_STRING = $connectionString

or if you’re in a bash shell:

export AZURE_STORAGE_CONNECTION_STRING=$connectionString

Creating Containers

Now that we have a storage account, we can create some containers. The --public-access flag allows us to set their privacy level. The default is off for a private container, or you can set it to blob for public access to blobs. There's also a container level, which additionally allows people to list the contents of the container.

I’ll create a public and a private container:

az storage container create -n "public" --public-access blob
az storage container create -n "private" --public-access off

Uploading Files

Uploading a file into your container is easy with the az storage blob upload command. You simply specify the name of the file to upload, the container to upload it into, and the name of the blob.

Here’s uploading a file into the public container and getting the URL from which it can be accessed:

# create a demo file
echo "Hello World" > example.txt

$blobName = "folder/public.txt"

# upload the demo file to the public container
az storage blob upload -c "public" -f "example.txt" -n $blobName

# get the URL of the blob
az storage blob url -c "public" -n $blobName -o tsv

If we upload a file to the private container, we’ll need to also generate a SAS token in order to download it via a URL. We do that with az storage blob generate-sas, passing in an expiry date and the access permissions (in our case, we just need r for read access).

$blobName = "folder/private.txt"

# upload the demo file to a private container
az storage blob upload -c "private" -f "example.txt" -n $blobName

# get the blob URL
$url = az storage blob url -c "private" -n $blobName -o tsv

# generate a read-only SAS token
$sas = az storage blob generate-sas -c "private" -n $blobName `
    --permissions r -o tsv `
    --expiry 2017-10-15T17:00Z

# launch a browser to access the file
Start-Process "$($url)?$($sas)"

More Blob Operations

Of course, there's much more you can do with blobs from the Azure CLI, and you can explore the full range of options with az storage blob -h. You'll see that we can easily download or delete blobs, snapshot them, manage their metadata, or even work with leases.
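
For example, downloading, snapshotting and then deleting the blob we uploaded earlier would look something like this:

# download the blob to a local file
az storage blob download -c "public" -n $blobName -f "downloaded.txt"

# take a snapshot of the blob
az storage blob snapshot -c "public" -n $blobName

# delete the blob
az storage blob delete -c "public" -n $blobName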

For ad-hoc storage tasks, Azure Storage Explorer is still a great tool, but if you need to upload or download blobs as part of a deployment or maintenance task, the CLI is a great way to automate that process and ensure it is reliable and repeatable.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.

I’ve been really impressed with the Azure CLI, and have been using it to automate all kinds of things recently. Here’s some instructions on how you can create and configure an Azure Virtual Machine using the CLI.

1. Pick an image and size

If you're going to create a Virtual Machine, you need to do so from a base image. Azure has hundreds to choose from, and you can use the az vm image list command with the --all flag to find a suitable one.

For example if I want to find all VM images with elasticsearch in the “offer” name I can use:

az vm image list --all -f elasticsearch -o table

Or if I know I want to use the VS-2017 “sku” for a Visual Studio 2017 VM I can use:

az vm image list -s VS-2017 --all -o table

You'll also want to decide what VM size you want. There are loads available, but not necessarily all in every region, so you can check what sizes are available in your location with the following command:

az vm list-sizes --location westeurope -o table

2. Create the VM

Resources which share a common lifetime should be in the same resource group. It makes sense to create your VM in its own resource group, so that when you’re done you can clear it up by deleting the resource group. So let’s create a resource group, using a variable containing its name for convenience in future commands:

ResourceGroupName="CreateVmDemo"
az group create --name $ResourceGroupName --location westeurope

Now we're ready to create our VM in this resource group. There are loads of parameters to az vm create, which you can explore with the az vm create -h command. The good news is that lots of sensible defaults are picked for you, so you don't have to provide values for everything. However, by default you'll get a reasonably powerful VM with a managed disk, so if you want to keep costs down you might want to supply some money-saving parameters like I show below.

In my example I’m using the Windows 2016 data center VM image and supplying my own username and password. I’m going for a smaller VM size and the cheaper option of using unmanaged disks.

VmName="ExampleVm"
AdminPassword="Str0ngP@ssw0rd!" # placeholder - substitute your own strong password

az vm create \
    --resource-group $ResourceGroupName \
    --name $VmName \
    --image win2016datacenter \
    --admin-username azureuser \
    --admin-password $AdminPassword \
    --size Basic_A1 \
    --use-unmanaged-disk \
    --storage-sku Standard_LRS

This will take a few minutes to complete, and it’s created more than just a VM. There’s a network interface, a network security group, a public IP address and a disk (or storage account for the VHD if you chose unmanaged disk).

You can see everything that got created with:

az resource list -g $ResourceGroupName -o table

3. Configure the VM

For this demo, I’m going to show how we can configure the VM as a simple web server. First we need to ensure port 80 is open. We can do that easily with:

az vm open-port --port 80 --resource-group $ResourceGroupName --name $VmName

The next step is to install IIS and set up our website. Here's a simple example of a PowerShell script I might want to run on the VM in order to get the website set up. It installs the IIS Windows feature, then downloads a simple webpage from a public URL (though this could just as easily securely download a zip using a Shared Access Signature), then deletes the default website and creates a new website pointing at our custom web page.

# install the IIS web server
Install-WindowsFeature -Name Web-Server

# download the page for our simple site
$sitePath = "c:\example-site"
$output = "$sitePath\index.html"
$url = "https://mystorage.blob.core.windows.net/public/index.html"
New-Item -ItemType Directory $sitePath -Force
Invoke-WebRequest -Uri $url -OutFile $output

# replace the default website with our own
Remove-Website -Name 'Default Web Site'
New-Website -Name 'example-site' -Port 80 -PhysicalPath $sitePath

But how can we get this script to run on our virtual machine? Well, we can use the Custom Script Extension. This allows us to either provide a simple command to run, or, if it's more complex like this one, a URI for a script to download and then run.

Here’s how we can invoke the custom script on our VM:

az vm extension set \
--publisher Microsoft.Compute \
--version 1.8 \
--name CustomScriptExtension \
--vm-name $VmName \
--resource-group $ResourceGroupName \
--settings '{"fileUris":["https://my-assets.blob.core.windows.net/public/SetupSimpleSite.ps1"],"commandToExecute":"powershell.exe -ExecutionPolicy Unrestricted -file SetupSimpleSite.ps1"}'

By the way, if you're doing this from a PowerShell prompt instead of a bash prompt, getting the quotes escaped correctly in the settings parameter can be a real pain. I find it's easier to just pass the path of a file containing the JSON, like this:

az vm extension set `
    --publisher Microsoft.Compute `
    --version 1.8 `
    --name CustomScriptExtension `
    --vm-name $VmName `
    --resource-group $ResourceGroupName `
    --settings extensionSettings.json
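
Here, extensionSettings.json simply contains the same JSON we previously passed inline:

{
  "fileUris": ["https://my-assets.blob.core.windows.net/public/SetupSimpleSite.ps1"],
  "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -file SetupSimpleSite.ps1"
}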

This will take a minute or two (and will be a bit slower if you chose the cheap options like I did), but assuming it completes successfully we now have our fully configured VM. To check it worked, we can visit its public IP address in a web browser, but in case we forgot what that was, we can query for it with:

az vm show -d -g $ResourceGroupName -n $VmName --query "publicIps" -o tsv

Notice I like to use the tab-separated output when I'm getting just a single value, as it allows me to easily store the result in a variable. You can learn more about this in my blog on using queries with the Azure CLI.
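
For example, to capture the IP address in a variable:

publicIp=$(az vm show -d -g $ResourceGroupName -n $VmName --query "publicIps" -o tsv)
echo "Browse to http://$publicIp"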

Tip: if your custom script fails for some reason, you can troubleshoot by RDPing into the machine and looking at the logs in the “C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension” folder.

4. Stopping or Deleting the VM

Obviously, once you’ve created your Virtual Machine you’re going to be paying for it, and that can be very expensive if you chose a powerful VM size. You might think that you could save money by stopping it with az vm stop, but you’d be wrong. You still pay for stopped VMs in Azure – they have to be “stopped deallocated” in order for you not to be billed.

Fortunately, there’s an easy way to put our VM into a stopped deallocated state with the Azure CLI:

az vm deallocate -n $VmName -g $ResourceGroupName

And we can check it worked with:

az vm show -d -g $ResourceGroupName -n $VmName --query "powerState" -o tsv

When you're ready to use the VM again, it's a simple matter of az vm start to get it going, though note that the public IP address will be different, as it was relinquished when we deallocated the VM.
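
That's as simple as:

az vm start -n $VmName -g $ResourceGroupName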

Finally, when you’re all done with the VM, you’ll need to clear up after yourself, and there’s a nice easy way to do that with az group delete, passing the --yes flag if you don’t want to be asked “are you sure”. Of course, only run this if you are sure – there’s no undo!

az group delete --name $ResourceGroupName --yes

Summary

The Azure CLI is a great tool to have at your disposal for all kinds of VM management tasks. There are loads more great examples on the Azure CLI docs site for both Windows and Linux VMs.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.