
Azure Container Instances offer you a really easy way to run a single Docker container in the cloud. It's essentially serverless containers. But what if you wanted to use ACI to deploy an application that consists of multiple containers?

For example, to deploy a WordPress blog, you need two containers - one for the PHP website part of WordPress, and the other for the MySql database. Now it's perfectly possible to use ACI and deploy these as two separate containers (or more accurately "container groups"). First deploy the MySql container, exposing port 3306 and then deploy the WordPress container, giving it the IP address and credentials of the MySql container.
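That two-step approach might be sketched like this ($resourceGroup, $mysqlPassword and $mysqlIp are assumed to be set already, and the flags are those of az container create):

```shell
# deploy MySql first, exposing port 3306 publicly
az container create -n mysql -g $resourceGroup --image mysql `
    --ip-address public --ports 3306 `
    --environment-variables MYSQL_ROOT_PASSWORD=$mysqlPassword

# then deploy WordPress, pointing it at the MySql container's public IP
az container create -n wordpress -g $resourceGroup --image wordpress `
    --ip-address public --ports 80 `
    --environment-variables WORDPRESS_DB_HOST=$mysqlIp:3306 WORDPRESS_DB_PASSWORD=$mysqlPassword
```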

But that does mean I need to publicly expose the MySql port on the internet, where I could be vulnerable to people attempting to guess my password. Is there any way to deploy the MySql container so that only the WordPress container has access to it?

Container Groups

Well, as I mentioned in my post yesterday about Azure Container Instances, this is possible by using "Container Groups". If I declare that one or more containers are in a "container group", then they will get deployed on the same server, and they can communicate with each other over localhost.

There are some downsides to this approach, and container orchestrators like Kubernetes have better ways of solving this problem, but for a very simple serverless scenario where we might want to spin up two containers that can see each other for a short time, ACI container groups might be a good fit.

Currently, the Azure CLI container command does not have much support for working with container groups. Instead, we need to create an ARM template.

Container Group ARM Template

Let's have a look at the template to deploy WordPress and MySql into a container group, and then we'll discuss how it works.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "mysqlPassword": {
            "type": "securestring",
            "metadata": {
                "description": "Root password for the MySQL database."
            }
        },
        "containerGroupName": {
            "type": "string",
            "defaultValue": "myContainerGroup",
            "metadata": {
                "description": "Name for the container group"
            }
        },
        "dnsNameLabel": {
            "type": "string",
            "defaultValue": "aciwordpress",
            "metadata": {
                "description": "DNS Name Label for the container group"
            }
        }
    },
    "variables": {
      "container1name": "front-end",
      "container1image": "wordpress",
      "container2name": "back-end",
      "container2image": "mysql"
    },
    "resources": [
      {
        "name": "[parameters('containerGroupName')]",
        "type": "Microsoft.ContainerInstance/containerGroups",
        "apiVersion": "2018-02-01-preview",
        "location": "[resourceGroup().location]",
        "properties": {
          "containers": [
            {
                "name": "[variables('container1name')]",
                "properties": {
                  "image": "[variables('container1image')]",
                  "resources": {
                    "requests": {
                      "cpu": 1,
                      "memoryInGb": 1.0
                    }
                  },
                  "ports": [
                    {
                      "port": 80
                    }
                  ],
                  "environmentVariables": [
                      {
                          "name": "WORDPRESS_DB_PASSWORD",
                          "value": "[parameters('mysqlPassword')]"
                      },
                      {
                          "name": "WORDPRESS_DB_HOST",
                          "value": "127.0.0.1:3306"
                      }
                  ]
                }
              },
            {
                "name": "[variables('container2name')]",
                "properties": {
                  "image": "[variables('container2image')]",
                  "resources": {
                    "requests": {
                      "cpu": 1,
                      "memoryInGb": 1.0
                    }
                  },
                  "ports": [
                      {
                          "protocol": "tcp",
                          "port": 3306
                      }
                  ],
                  "environmentVariables": [
                      {
                          "name": "MYSQL_ROOT_PASSWORD",
                          "value": "[parameters('mysqlPassword')]"
                      }
                  ]
                }
              }
          ],
          "osType": "Linux",
          "restartPolicy": "OnFailure",
          "ipAddress": {
            "type": "Public",
            "dnsNameLabel": "[parameters('dnsNameLabel')]",
            "ports": [
              {
                "protocol": "tcp",
                "port": 80
              }
            ]
          }
        }
      }
    ],
    "outputs": {
      "containerIPv4Address": {
        "type": "string",
        "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', parameters('containerGroupName'))).ipAddress.ip]"
      }
    }
  }

ARM Template Configuration

The template itself is straightforward enough, but the challenge with ARM templates is knowing what the names of all the various configuration options are. The ARM template reference documentation proved helpful.

You can see that we've got one container group containing two containers, each with its own environment variables. I allow the WordPress container to find the MySql container by using the WORDPRESS_DB_HOST environment variable. In theory this could be set to localhost:3306, but because PHP's MySQL client treats localhost as a request for a Unix socket connection rather than TCP, I needed to use 127.0.0.1:3306.

On the WordPress container I expose port 80 and on the MySql container port 3306, but the container group as a whole only exposes port 80, which helps keep the attack surface small.

I've also set up a restartPolicy of OnFailure as the WordPress container can exit if it can't talk to the MySql database, which might happen if it starts up first. And I configured a dnsNameLabel for the container group to give us a nice friendly domain name to access our WordPress container through.
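The resulting fully qualified domain name follows a predictable pattern, composed from the DNS name label and the region (sketched here in bash):

```shell
# ACI public FQDNs take the form <dnsNameLabel>.<region>.azurecontainer.io
dnsNameLabel="aciwordpress"
location="westeurope"
echo "http://${dnsNameLabel}.${location}.azurecontainer.io"
```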

Deploying the template

Deploying the ARM template is nice and easy with the Azure CLI. Just create a resource group, deploy the template into it, and then query for the container group domain name:

# create a resource group for our container group
$resourceGroup = "AciGroupDemo"
$location = "westeurope"
az group create -n $resourceGroup -l $location

# deploy the container group ARM template passing in custom parameters
$containerGroupName = "myWordpress"
$dnsNameLabel = "wordpressaci"

az group deployment create `
    -n TestDeployment -g $resourceGroup `
    --template-file "aci-wordpress.json" `
    --parameters 'mysqlPassword=<your-secure-password>' `
    --parameters "containerGroupName=$containerGroupName" `
    --parameters "dnsNameLabel=$dnsNameLabel"

# get the domain name of our wordpress container
az container show -g $resourceGroup -n $containerGroupName `
        --query ipAddress.fqdn -o tsv

Summary

In this post I've been exploring the capabilities of container groups, rather than recommending best practices. As I said yesterday, they're not the best fit if you want long-running containers, as there are more cost-effective options. They're also not intended as a replacement for container orchestrators, but can be useful when you have a group of containers that work together and need to be deployed as a logical group. The need for ARM templates means they're not super easy to work with at the moment, but hopefully that will change as the tooling improves.

The code for this demo, along with a bunch of my other Azure Docker demos can be found here on GitHub.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


Regular readers of my blog will know that I'm very interested in the ideas of serverless architectures and containers. Azure offers many services that enable both. For serverless we have Azure Functions, while for containers there's the Azure Container (Kubernetes) Service, which looks very promising. And there are multiple other Azure services enabling both serverless and containerized approaches.

Serverless + Containers = ACI

But I suspect, like many people, I've been hoping that something would come along that combines the ideas of "serverless" and "containers", and Azure Container Instances is exactly that. With Azure Container Instances you can run containers without needing to manage, or even pre-provision, any servers. Just ask for your container to run, and let Azure worry about where it is hosted.

ACI also uses a serverless pricing model. I only want to pay for what I use. Instead of having a bunch of VMs that I have to pay for even if they're sitting idle, with ACI, the billing model is per-second. I pay only for the time my container runs.

ACI Capabilities

Azure Container Instances already support both Windows and Linux containers, and for cases where you need a more powerful server, they let you specify how much RAM and how many CPU cores you want allocated. Your containers can have public IP addresses, and use images from private repositories.
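For instance, a hypothetical container needing extra resources and pulling from a private Azure Container Registry might be created like this (the names are placeholders; the resource and registry flags are those of az container create):

```shell
# request 2 CPU cores and 3.5GB of RAM, pulling from a private registry
az container create -n bigworker -g $resourceGroup `
    --image myregistry.azurecr.io/worker:v1 `
    --cpu 2 --memory 3.5 `
    --registry-login-server myregistry.azurecr.io `
    --registry-username $registryUser --registry-password $registryPass
```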

You can mount Azure File Shares as volumes, and several other volume mounting options are available, including cloning a Git repository which opens up some interesting possibilities. Hopefully a way to mount subdirectories within File Shares, or mounting blob storage will follow soon.
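In a container group ARM template, an Azure File Share volume is declared at the group level and then mounted into individual containers. A sketch of the relevant fragment (the share, account and mount names are placeholders):

```json
{
  "properties": {
    "containers": [
      {
        "name": "front-end",
        "properties": {
          "volumeMounts": [
            { "name": "myvolume", "mountPath": "/mnt/myshare" }
          ]
        }
      }
    ],
    "volumes": [
      {
        "name": "myvolume",
        "azureFile": {
          "shareName": "myshare",
          "storageAccountName": "mystorageaccount",
          "storageAccountKey": "<storage-account-key>"
        }
      }
    ]
  }
}
```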

There's a concept of "container groups" where you can run a bunch of containers co-located on the same host. I'm not 100% sure exactly what the intended use case for this is, and it currently requires you to deploy with ARM templates which isn't particularly user friendly, but I have successfully deployed WordPress and MySql in a container group (blog post to follow soon!).

ACI is still in preview and there are a few missing capabilities. At the moment, there doesn't seem to be an easy way to restart a stopped container, or execute a command against an existing container. Virtual Network integration is also not available yet.

Get Started with the Azure CLI

If you want to try Azure Container Instances out, it really is remarkably easy. You just need to know how to use the Azure CLI, and then it's literally one command. OK, two if you count creating a resource group to put your container in first!

Here's a very simple demo that I used for my recent talk at the Docker Southampton meetup. It deploys the miniblog.core ASP.NET Core application in a Linux container:

# create a resource group
$location = "westeurope"
$resourceGroup = "miniblogaci"
az group create -l $location -n $resourceGroup

# create the container
$dockerRepo = "markheath/miniblogcore:v1-linux"
$containerName = "miniblogcore"
$dnsName = "dockersoton1"
az container create -n $containerName --image $dockerRepo -g $resourceGroup `
                    --ip-address public --ports 80 --dns-name-label $dnsName

What if you wanted to run a Windows container instead? That's easy enough - just point at a Windows container image and add the --os-type Windows flag:

$dockerRepo = "markheath/miniblogcore:v1"
$containerName="miniblogcorewin"
az container create -n $containerName --image $dockerRepo -g $resourceGroup `
                    --ip-address public --ports 80 --os-type Windows

Here's a quick demo I made of creating this container with the Azure CLI:

What should I use this for?

Is ACI the future of running Docker containers in the cloud? Well no, it's not a great choice for long-running containers (e.g. websites), as it will typically be more cost effective to run those on your own Docker host shared with other containers. Where the cost benefits shine is for short-lived workloads, where most of the time you're not doing anything, but suddenly you need to do some work. Good examples might be CI builds or media transcoding tasks. In these cases, you want a container that runs for a short period of time, but after it's finished there's no need to keep paying for compute resource.

ACI is also not intended to replace container orchestration technologies like Kubernetes. But it could be used as a quick way of adding extra capacity to a Kubernetes cluster, enabling it to cope with short-lived spikes in demand. This could give you the best of both worlds - cost effectiveness for hosting long-lived containers, but with the ability to rapidly scale up and pay only for the excess capacity that you actually require to handle peak loads.

ACI and Azure Functions

How does ACI fit with Azure Functions? I remain a big fan of Azure Functions, and it's my go-to platform for serverless workloads in the cloud, but there are occasions when a containerized "function" would be a better fit. For example, the Azure Functions runtime currently limits you to 5 minutes runtime for a function (at least in consumption mode), restricts what you can install on the host server, and doesn't let you mount Azure File Shares. So an ACI container could be used to implement serverless tasks that don't fit so well with the existing functions runtime.

In fact, I think that ACI could play really nicely with the new Durable Functions extension to Azure Functions. I do a lot of work with media processing pipelines and some of the activities in a media processing workflow would be better suited to running in a container than being written as a regular Azure Function. It would be great if Azure Functions offered an easy way to trigger a workflow activity as an ACI container, and could be notified when that container finishes (maybe via an Azure Event Grid notification).

Anyway, it's exciting to see the possibilities that are being opened up by the worlds of containerization and serverless colliding and I look forward to seeing how the likes of ACI, AKS, Durable Functions and Logic Apps evolve to give us a truly productive and cost-effective platform for running all kinds of serverless workloads.

Want to learn more about how to build serverless applications in Azure? Be sure to check out my Pluralsight course Building Serverless Applications in Azure.


I've posted before about how you can deploy a WebApp as a zip with the Kudu zip deploy API. It's a great way to deploy web apps and is one of the techniques I discuss for deploying miniblog.core.

But as well as allowing us to deploy our web apps, Kudu has an API for managing webjobs. With this API we can deploy and update new webjobs individually, as well as triggering them, configuring settings and even getting their execution history.

Three types of webjob

There are three main types of webjob that you can use. First there are triggered webjobs. These are webjobs that run on demand. Typically they will simply be a console app. You can trigger an execution of one of these webjobs with the Kudu webjobs API (we'll see an example later). A typical use case for this type of webjob might be some kind of support tool that you want to run on demand.

The second type is a scheduled webjob, which is actually just a triggered webjob with a schedule cron expression. The schedule is defined in a settings.json file that sits alongside your webjob executable. This type of webjob is great for periodic cleanup tasks that you need to run on a regular basis without explicitly needing to do anything to trigger them.
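For example, a settings.json deployed alongside the executable might look like this. Webjob schedules use a six-field cron expression (the first field is seconds), so this would run the job at the start of every hour:

```json
{
  "schedule": "0 0 * * * *"
}
```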

Finally there is a continuous webjob. This is an executable that will be run continuously - that is, it will be restarted for you if it exits. This is great for webjobs that are responding to queue messages. The webjob sits listening on one (or many) queues, and performs an action when a message appears on that queue. There's a helpful SDK that makes it easier to build this type of webjob, although I won't be discussing the use of that today.

Where are webjobs stored?

Creating a webjob simply involves dumping our webjob binaries into specially named folders. For a triggered (or scheduled) job, the folder is wwwroot\app_data\jobs\triggered\{job name}, and for a continuous job, it's wwwroot\app_data\jobs\continuous\{job name}. The webjobs host will look inside that folder and attempt to work out what executable it should run (based on a set of naming conventions).
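So a web app hosting one triggered and one continuous webjob (the job names here are illustrative) would have a folder layout like this:

```
wwwroot
└── app_data
    └── jobs
        ├── triggered
        │   └── Webjob1
        │       ├── Webjob1.dll
        │       └── run.cmd
        └── continuous
            └── QueueListener
                └── QueueListener.exe
```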

Why the app_data folder? Well that's a special ASP.NET folder that is intended for storing your application data. The web server will not serve up the contents of this folder, so everything in there is safe. It's also considered a special case for deployments - since it might contain application generated data files, its contents won't get deleted or reset when you deploy a new version of your app.

An example scenario

Let's consider a very simple example where we have two webjobs that we want to host. One is a .NET Core executable (Webjob1), the other is a regular .NET 4.6.2 framework console app (Webjob2). And we'll also deploy an ASP.NET Core Web API, just to show that you can host webjobs in the same "Azure Web App" instance as a regular web app, although you don't have to.

We'll use a combination of the Azure CLI and PowerShell for all the deployments, but these techniques can be used with anything that can make zip files and web requests.

Step 1 - Creating a web application

As always, with the Azure CLI, make sure you're logged in and have the right subscription selected first.

# log in to Azure CLI
az login
# make sure we are using the correct subscription
az account set -s "MySub"

And now let's create ourselves a resource group with an app service plan (free tier is fine here) and a webapp:

$resourceGroup = "WebJobsDemo"
$location = "North Europe"
$appName = "webjobsdemo"
$planName = "webjobsdemoplan"
$planSku = "F1" # allowed sku values B1, B2, B3, D1, F1, FREE, P1, P1V2, P2, P2V2, P3, P3V2, S1, S2, S3, SHARED.

# create resource group
az group create -n $resourceGroup -l $location

# create the app service plan
az appservice plan create -n $planName -g $resourceGroup -l $location --sku $planSku

# create the webapp
az webapp create -n $appName -g $resourceGroup --plan $planName

Step 2 - Get deployment credentials

We'll need the deployment credentials in order to call the Kudu web APIs. These can be easily retrieved with the Azure CLI, making use of the query syntax which I discuss in my Azure CLI: Getting Started Pluralsight course.

# get the credentials for deployment
$user = az webapp deployment list-publishing-profiles -n $appName -g $resourceGroup `
    --query "[?publishMethod=='MSDeploy'].userName" -o tsv

$pass = az webapp deployment list-publishing-profiles -n $appName -g $resourceGroup `
    --query "[?publishMethod=='MSDeploy'].userPWD" -o tsv

Step 3 - Build and zip the main web API

As I said, there is no requirement for our Azure "webapp" to actually contain a webapp. It could just host a bunch of webjobs. But to show that the two can co-exist, let's build and zip an ASP.NET Core web api application. I'm just using a very basic example app created with dotnet new webapi. We're using some .NET objects in PowerShell to perform the zip.

$publishFolder = "publish"

# publish the main API
dotnet publish MyWebApi -c Release -o $publishFolder

# make the zip for main API
$mainApiZip = "publish.zip"
if(Test-path $mainApiZip) {Remove-item $mainApiZip}
Add-Type -assembly "system.io.compression.filesystem"
[io.compression.zipfile]::CreateFromDirectory($publishFolder, $mainApiZip)

Step 4 - Deploy with Kudu zip deploy

The Azure CLI offers us a really nice and easy way to use the Kudu zip deploy API. We simply need to use the config-zip deployment source:

az webapp deployment source config-zip -n $appName -g $resourceGroup --src $mainApiZip

However, a regression in the Azure CLI 2.0.25 meant this was broken, so as an alternative you can just call the API directly with the following code, passing the credentials we retrieved earlier.

# set up deployment credentials
$creds = "$($user):$($pass)"
$encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($creds))
$basicAuthValue = "Basic $encodedCreds"

$Headers = @{
    Authorization = $basicAuthValue
}

# use kudu deploy from zip file
Invoke-WebRequest -Uri https://$appName.scm.azurewebsites.net/api/zipdeploy -Headers $Headers `
    -InFile $mainApiZip -ContentType "multipart/form-data" -Method Post

If we want to verify that the deployment worked, we can get the URI of the web app, and call the values controller (which the default webapi template created for us):

# check its working
$apiUri = az webapp show -n $appName -g $resourceGroup --query "defaultHostName" -o tsv
Start-Process https://$apiUri/api/values

Step 5 - Build our first webjob

In our demo scenario we have two webjobs. The first (Webjob1) is a .NET Core command line app. The code is very simple, just echoing a message that includes the command line arguments:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello from Task 1 (.NET Core) with args [{0}]!", 
            string.Join('|',args));
    }
}

Since .NET Core apps are just DLLs, we need to help the webjobs host to know how to run it by creating a run.cmd batch file that calls the dotnet runtime and passes on any command line arguments. Note: You can get weird errors here if you have a UTF-8 encoded file. Make sure you save this batch file as ASCII.

@echo off
dotnet Webjob1.dll %*

Building and zipping this webjob is no different to what we did with the main web API:

# now lets build the .NET core webjob
dotnet publish Webjob1 -c Release

$task1zip = "task1.zip"
if(Test-path $task1zip) {Remove-item $task1zip}
[io.compression.zipfile]::CreateFromDirectory("Webjob1\bin\Release\netcoreapp2.0\publish\", $task1zip)

Step 6 - Deploy the webjob

Deploying a webjob using the Kudu WebJobs API is very similar to zip deploying the main webapp. We simply need to provide one extra Content-Disposition header, and we use the PUT verb. We indicate that this is going to be a triggered webjob by including triggeredwebjobs in the path, and we also include the webjob name (in this case "Webjob1").

$ZipHeaders = @{
    Authorization = $basicAuthValue
    "Content-Disposition" = "attachment; filename=run.cmd"
}

# upload the job using the Kudu WebJobs API
Invoke-WebRequest -Uri https://$appName.scm.azurewebsites.net/api/triggeredwebjobs/Webjob1 -Headers $ZipHeaders `
    -InFile $task1zip -ContentType "application/zip" -Method Put

To check it worked, you can visit the Kudu portal and explore the contents of the app_data folder or look at the web jobs page.

# launch Kudu portal
Start-Process https://$appName.scm.azurewebsites.net

We can also check by calling another web jobs API method to get all triggered jobs:

# get triggered jobs
Invoke-RestMethod -Uri https://$appName.scm.azurewebsites.net/api/triggeredwebjobs -Headers $Headers `
    -Method Get

Step 7 - Run the webjob

To run the webjob we can POST to the run endpoint for this triggered webjob. And we can optionally pass arguments in the query string. Don't forget to provide the content type or you'll get a 403 error.

# run the job
$resp = Invoke-WebRequest -Uri "https://$appName.scm.azurewebsites.net/api/triggeredwebjobs/Webjob1/run?arguments=eggs bacon" -Headers $Headers `
    -Method Post -ContentType "multipart/form-data"

Assuming this worked, we'll get a 202 back, and it will include the URI of a job instance we can use to query the output of this job. From the output of that request we'll also get a URI we can call to request the log output, which we can use to see that our webjob successfully ran and got the arguments we passed it:

# output response includes a Location to get history:
if ($resp.RawContent -match "\nLocation\: (.+)\n")
{
    $historyLocation = $matches[1]
    $hist = Invoke-RestMethod -Uri $historyLocation -Headers $Headers -Method Get
    # $hist has status, start_time, end_time, duration, error_url etc
    # get the logs from output_url
    Invoke-RestMethod -Uri $hist.output_url -Headers $Headers -Method Get
}

We can also ask for all runs of this webjob with the /history endpoint:

# get history of all runs for this webjob
Invoke-RestMethod -Uri https://$appName.scm.azurewebsites.net/api/triggeredwebjobs/Webjob1/history -Headers $Headers `
    -Method Get

Step 8 - Deploy and configure a scheduled webjob

For our second webjob, we're using a regular console app running on the full .NET Framework. Here's the code:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello from task 2 (.NET Framework) with args [{0}]", 
            string.Join("|", args));
    }
}

We'll build it with MSBuild and create a zip, very similar to what we did with the first webjob:

# build the regular .net webjob
$msbuild = "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\msbuild.exe"
. $msbuild "Webjob2\Webjob2.csproj" /property:Configuration=Release

$task2zip = "task2.zip"
if(Test-path $task2zip) {Remove-item $task2zip}
[io.compression.zipfile]::CreateFromDirectory("Webjob2\bin\Release\", $task2zip)

And then upload it just like we did with the first webjob. Remember a "scheduled" webjob is just a special case of triggered webjob, so we use the triggeredwebjobs endpoint again:

# upload the web job
$ZipHeaders = @{
    Authorization = $basicAuthValue
    "Content-Disposition" = "attachment; filename=Webjob2.exe"
}

Invoke-WebRequest -Uri https://$appName.scm.azurewebsites.net/api/triggeredwebjobs/Webjob2 -Headers $ZipHeaders `
    -InFile $task2zip -ContentType "application/zip" -Method Put

Now if we'd included a settings.json in our zip file, with a cron expression, then this would already be a scheduled job with nothing further to do. But there's a very handy /settings endpoint that lets us push the contents of the settings file, which we can use to set the schedule. Here we'll set up our second webjob to run every five minutes.

$schedule = '{
  "schedule": "0 */5 * * * *"
}'

Invoke-RestMethod -Uri https://$appName.scm.azurewebsites.net/api/triggeredwebjobs/Webjob2/settings -Headers $Headers `
    -Method Put -Body $schedule -ContentType "application/json"

The great thing about this approach is that we can change the schedule without having to push the whole webjob again. And even though this webjob is on a schedule, there's nothing to stop us running it on-demand as well if we want to.

Updating and deleting webjobs

It's very easy to update webjobs (or indeed the main API). You just zip up your new version of the webjob exactly as we did before and upload it through the API. The webjobs are left intact when a new version of the main web app is deployed, so it's safe to update that as well with the zip deploy API.

You can also easily delete webjobs if you no longer need them:

Invoke-WebRequest -Uri https://$appName.scm.azurewebsites.net/api/triggeredwebjobs/WebJob2 -Headers $Headers `
    -Method Delete

Summary

As you can see, the Kudu webjobs API makes it very straightforward to deploy, run, query and update your webjobs. This makes it a convenient platform for running occasional maintenance tasks. We've seen in this post how this can be easily scripted in PowerShell with the Azure CLI, but you can of course use your preferred shell and language to call the same APIs.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.