
I've provided a few tutorials on this blog showing how easy Docker makes it to get a WordPress blog up and running. These show off two different ways of running containers in Azure - first just by using a regular Virtual Machine, and second with Azure Container Instances:

Web App for Containers

But Azure offers several other ways to host your containers, and for WordPress, a great choice would be to use Web App for Containers and Azure Database for MySQL for the database.

"Web App for Containers" is simply a way of hosting your web application on App Service as a container (Linux or Windows). The advantage of doing this is that App Service offers many features ideally suited to web applications such as configuring custom domains and SSL certificates, slot swapping, CI/CD functionality, auto-scaling, IP address whitelisting, AD authentication and much more.

And the reason for using "Azure Database for MySQL" rather than also using a container for the database is that we might want to scale up the web server to multiple instances, but we'd want each of the containers to be talking to the same database.

So let's see how we can use the Azure CLI to set up WordPress running on Web App for Containers, using Azure Database for MySQL as the back-end.

Create the App Service Plan

I'll be showing PowerShell commands, but since I'm using the cross-platform Azure CLI, these commands can also be run with minimal modification in a Bash shell.

When we create the app service plan, we will need to specify the --is-linux flag as we plan to use a Linux container image.

# create a resource group to hold everything in this demo
$resourceGroup = "wordpressappservice"
$location = "westeurope"
az group create -l $location -n $resourceGroup

# create an app service plan to host our web app
$planName="wordpressappservice"
az appservice plan create -n $planName -g $resourceGroup `
                          -l $location --is-linux --sku S1

Create an Azure Database for MySQL

We then need to create a MySQL server with the az mysql server create command, and we will need to set the --ssl-enforcement flag to Disabled for this demo to work.

# we need a unique name for the server
$mysqlServerName = "mysql-xyz123"
$adminUser = "wpadmin"
$adminPassword = "J9!3EklqIl1-LS,am3f"

az mysql server create -g $resourceGroup -n $mysqlServerName `
            --admin-user $adminUser --admin-password "$adminPassword" `
            -l $location `
            --ssl-enforcement Disabled `
            --sku-name GP_Gen4_2 --version 5.7

n.b. if the az mysql server create command takes a long time to return, I've found I need to cancel it and try it again.

And we will also need to open up a firewall rule to allow our web app to talk to the MySQL server. The simplest approach is to use the special 0.0.0.0 address to allow all internal Azure traffic, but a better solution is to get the outbound IP addresses of our Web App and explicitly create a rule for each one.

# open the firewall (use 0.0.0.0 to allow all Azure traffic for now)
az mysql server firewall-rule create -g $resourceGroup `
    --server $mysqlServerName --name AllowAppService `
    --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
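If you'd rather not open the firewall to all internal Azure traffic, here's a sketch (in Bash, since it's easier to show the loop there; the rule names are just ones I've made up, and the web app from the next section must already exist) of creating one rule per outbound IP address:

```shell
# fetch the comma-separated outbound IPs of the web app
# (the web app must already exist at this point)
outboundIps=$(az webapp show -n "$appName" -g "$resourceGroup" \
    --query "outboundIpAddresses" -o tsv)

# split on commas and create one firewall rule per address
IFS=',' read -ra ips <<< "$outboundIps"
i=0
for ip in "${ips[@]}"; do
    az mysql server firewall-rule create -g "$resourceGroup" \
        --server "$mysqlServerName" --name "WebAppOutbound$i" \
        --start-ip-address "$ip" --end-ip-address "$ip"
    i=$((i + 1))
done
```

Bear in mind that outbound IP addresses can change if you move the app between App Service plan tiers, so you'd need to re-run this after such a change.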

Create a Web App from a Container

Now let's create a new web app. We need to give it a unique name, and we'll use the official WordPress image from Docker Hub:

$appName="wordpress-1247"
az webapp create -n $appName -g $resourceGroup `
                 --plan $planName -i "wordpress"

Although this will start up our container, we're not actually ready yet, as we need some environment variables to be correctly configured. Annoyingly, the az webapp create command doesn't allow us to do that at the time of creating the app (please add your support to this GitHub issue, which also highlights that you can't specify a private ACR image with this command either).

Configure the Environment Variables

Configuring environment variables for our container is done by setting the web app's "Application Settings", which are surfaced as environment variables within the container. The WordPress container image expects three environment variables for the database host name, username and password:

# get hold of the wordpress DB host name
$wordpressDbHost = (az mysql server show -g $resourceGroup -n $mysqlServerName `
                   --query "fullyQualifiedDomainName" -o tsv)

# configure web app settings (container environment variables)
az webapp config appsettings set `
    -n $appName -g $resourceGroup --settings `
    WORDPRESS_DB_HOST=$wordpressDbHost `
    WORDPRESS_DB_USER="$adminUser@$mysqlServerName" `
    WORDPRESS_DB_PASSWORD="$adminPassword"

Once we've set these, presumably the container gets restarted to make those environment variables available. In any case, I've found that a couple of minutes after making these settings changes, the WordPress site is up and running.

Test it out

We can find out the domain name of our WordPress site with the az webapp show command like this:

$site = az webapp show -n $appName -g $resourceGroup `
                       --query "defaultHostName" -o tsv
Start-Process https://$site

And you should see that you now have a fully working WordPress installation that you can set up and try out.

WordPress installation

The setup wizard only takes a minute, and you'll be editing new posts in no time:

Editing a Post

Scale out

Because we've used Azure Database for MySQL for our database rather than a containerized instance of MySQL, our web apps are entirely stateless. That means we can safely scale out to multiple instances of our web server, which is easily achieved with the az appservice plan update command. Notice that you scale out the App Service plan as a whole, rather than at the web app level.

az appservice plan update -n $planName -g $resourceGroup --number-of-workers 3

Clean up

Of course, when you're done with this instance of WordPress, you'll want to clean up the resources you created. Since we put everything (the App Service Plan, the Web App and the MySQL Server) in the same resource group, we can clean it all up with a single command like this:

az group delete --name $resourceGroup --yes --no-wait

Summary

Azure Web App for Containers is an ideal hosting platform for containerized web applications like WordPress. You benefit from the many value-added web hosting features that App Service offers, as well as the cost benefit of being able to host multiple containerized web apps on the same App Service plan.

Want to learn more about how easy it is to get up and running with containers on Azure? Be sure to check out my Pluralsight course Microsoft Azure Developer: Deploy and Manage Containers.


This is the third part of a series looking at how easy Docker makes it to explore and experiment with open source software. The previous two parts are available here.

Today we're going to look at Elasticsearch, and this will give us the chance to see some of the capabilities of Docker Compose.

To follow along with the commands in this tutorial I recommend that you use Play with Docker which allows you to run all these commands in the browser.

Start a new container running Elasticsearch

If you just want to try out Elasticsearch running in a single node, then we can do that with the docker run command shown below.

We're exposing port 9200 (for the REST API) and setting up a single-node cluster (using an environment variable), from the official Elasticsearch 6.4.2 image. I'm also showing how to set up a volume to store the index data in.

docker run -d -p 9200:9200 -e "discovery.type=single-node" \
-v esdata:/usr/share/elasticsearch/data \
docker.elastic.co/elasticsearch/elasticsearch:6.4.2

And all the Elasticsearch commands we run with curl will work just fine on this single container. But for this tutorial, I'm going to use a cluster created with docker-compose instead.

Use Docker Compose to create an Elasticsearch cluster

With docker-compose we can declare all the containers that make up an application in a YAML format. For each container we can also configure the environment variables that should be set, any volumes that are required, and define a network to allow the services to communicate with each other.

Here's the first version of our docker-compose.yml file. It defines a simple two-node cluster, and each node in the cluster has a volume so that our indexes can live independently of our containers, and survive upgrades (which we'll be doing later). Notice that we're using the version of elasticsearch tagged 6.4.1.

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local

networks:
  esnet:

To download this file locally as docker-compose-v1.yml you can use the following command:

curl https://gist.githubusercontent.com/markheath/f246ec3aa5a3e7493991904e241a416a/raw/c4fa64575bc854e34a2506291bd14033caf5e9b6/docker-compose-v1.yml > docker-compose-v1.yml

And now we can use the docker-compose up command to start up the containers, and create all necessary resources like networks and volumes. We're using -d to run in the background, just like we can with docker run.

docker-compose -f docker-compose-v1.yml up -d

Check cluster health

We've exposed port 9200 on one of those containers, allowing us to query the cluster health with the following request:

curl http://localhost:9200/_cluster/health?pretty

Create an index

Now let's create an index called customer:

curl -X PUT "localhost:9200/customer?pretty"

Add a new document

And let's add a document to that index:

curl -X PUT "localhost:9200/customer/_doc/1?pretty" \
-H 'Content-Type: application/json' -d'{"name": "Mark Heath" }'

By the way, if you're following along with PowerShell instead of bash you can use Invoke-RestMethod to accomplish the same thing.

Invoke-RestMethod -Method Put `
-Uri "http://localhost:9200/customer/_doc/1?pretty" `
-ContentType "application/json" -Body '{"name": "Mark Heath" }'

View documents in the index

There are lots of ways to query Elasticsearch indexes, and I recommend you check out the Elasticsearch 6.4 Getting Started Guide for more details. However, we can easily retrieve the documents in our existing customer index with:

curl localhost:9200/customer/_search?pretty
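Beyond fetching everything, you can also pass a query in the request body. For example, here's a simple match query against the name field of the document we indexed above (just a small sample, not a substitute for the Getting Started Guide):

```shell
# search the customer index for documents whose "name" field matches "Mark"
curl -X GET "localhost:9200/customer/_search?pretty" \
    -H 'Content-Type: application/json' \
    -d '{"query": {"match": {"name": "Mark"}}}'
```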

Upgrade the cluster to 6.4.2

Suppose we now want to upgrade the nodes in our cluster to Elasticsearch 6.4.2 (we were previously running 6.4.1). All we need to do is update our YAML file with the new container version numbers.

I have an updated YAML file available here, which you can download to use locally with

curl https://gist.githubusercontent.com/markheath/f246ec3aa5a3e7493991904e241a416a/raw/c4fa64575bc854e34a2506291bd14033caf5e9b6/docker-compose-v2.yml > docker-compose-v2.yml

Before we upgrade our cluster, take a look at the container ids that are currently running with docker ps. These containers are not going to be "upgraded" - they're going to be disposed of, and new containers running 6.4.2 will be created in their place. However, the data is safe, because it's stored in the volumes. The volumes won't be deleted, and will be attached to the new containers.

To perform the upgrade we can use the following command.

docker-compose -f docker-compose-v2.yml up -d

We should see it saying "recreating elasticsearch" and "recreating elasticsearch2" as it discards the old containers and creates new ones.

Now if we run docker ps again we'll see new container ids and new image versions.

Check our index is still present

To check that our index survived the upgrade, we can search again and confirm our document is still present.

curl localhost:9200/customer/_search?pretty

Let's add another document into the index with:

curl -X PUT "localhost:9200/customer/_doc/2?pretty" -H 'Content-Type: application/json' -d'{"name": "Steph Heath"}'

Upgrade to a three node cluster

OK, let's take this to the next level. I've created a third version of my docker-compose YAML file that defines a third container, with its own volume. The YAML file is available here.

Something important to note is that I needed to set the discovery.zen.minimum_master_nodes=2 environment variable to avoid split brain problems.
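The value of 2 comes from the usual quorum formula: (master-eligible nodes / 2) + 1, using integer division. We can sanity-check the arithmetic for our three-node cluster in the shell:

```shell
# quorum calculation for discovery.zen.minimum_master_nodes
nodes=3
echo $(( nodes / 2 + 1 ))    # prints 2
```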

You can download my example file with:

curl https://gist.githubusercontent.com/markheath/f246ec3aa5a3e7493991904e241a416a/raw/a2685d1bf0414acbc684572d00cd7c7c531d0496/docker-compose-v3.yml > docker-compose-v3.yml

And then we can upgrade our cluster from two to three nodes with

docker-compose -f docker-compose-v3.yml up -d

The change of environment variable means that we will recreate both elasticsearch and elasticsearch2, and of course the new elasticsearch3 container and its volume will get created.

We should check the cluster status and if all went well, we'll see a cluster size of three:

curl http://localhost:9200/_cluster/health?pretty

Let's check our data is still intact by retrieving a document by id from our index:

curl -X GET "localhost:9200/customer/_doc/1?pretty"

Add Kibana and head plugin

While I was preparing this tutorial I came across a really nice article by Ruan Bekker who takes this one step further by adding a couple more containers to the docker-compose file for an instance of Kibana and the Elasticsearch Head plugin.

So here's the final docker-compose.yml file we'll be working with:

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet

  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet

  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch3
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet

  kibana:
    image: 'docker.elastic.co/kibana/kibana:6.4.2'
    container_name: kibana
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - '5601:5601'
    networks:
      - esnet

  headPlugin:
    image: 'mobz/elasticsearch-head:5'
    container_name: head
    ports:
      - '9100:9100'
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local

networks:
  esnet:

You can download my YAML file with

curl https://gist.githubusercontent.com/markheath/f246ec3aa5a3e7493991904e241a416a/raw/a2685d1bf0414acbc684572d00cd7c7c531d0496/docker-compose-v4.yml > docker-compose-v4.yml

And now we can update our cluster again with

docker-compose -f docker-compose-v4.yml up -d

Try out Kibana

Once we've done this, we can visit the Kibana site by browsing to localhost:5601. If you're following along in Play with Docker, you'll see special links appear for each port that is exposed (9200, 9100 and 5601).

Play with Docker

If you click on the 5601 link, you'll be taken to an instance of Kibana. The first step will be to define an index pattern (e.g. customer*)

Kibana index

And then if you visit the discover tab, you'll see we can use Kibana to search the documents in our index:

Kibana search

Try out Elasticsearch head plugin

You can also visit localhost:9100 (or in Play with Docker, click the 9100 link) to use the Elasticsearch head plugin. This gives you a visualization of the cluster health.

Elasticsearch head plugin

Note that if you are using Play with Docker, you'll need to copy the port 9200 link and paste it into the Connect textbox to connect the head plugin to your Elasticsearch cluster.

Clean up

To stop and delete all the containers:

docker-compose -f docker-compose-v4.yml down

And if you want to delete the volumes as well (so all index data will be lost), add the -v flag:

docker-compose -f docker-compose-v4.yml down -v

Summary

In this tutorial we saw that not only is it really easy to get an instance of Elasticsearch running with Docker that we could use for experimenting with the API, but with Docker Compose we can define collections of containers that can communicate with one another and start them all easily with docker-compose up.

When we update our YAML file, Docker Compose can intelligently decide which containers need to be replaced and which can be left as they are.



This is the second part of a series looking at how easy Docker makes it to explore and experiment with open source software. Last time we looked at Redis, and that gave us the opportunity to see the docker run and docker exec commands in action.

Today we're going to look at PostgreSQL which will give us an opportunity to see Docker volumes in action.

You can follow along with the commands in this tutorial if you have Docker installed (if you're running Docker for Windows, put it in Linux containers mode). Another great option is Play with Docker, which lets us run all these commands in the browser.

Start a new container running PostgreSQL

We'll use docker run to start a new container from the official postgres image, give it the name postgres1, and expose port 5432 (the PostgreSQL default). We're running in detached (-d) mode, so in the background.

But we're also going to mount a volume (with -v), which will be used to store the database we create. The volume name will be postgres-data, and Docker will automatically create it (just using storage on the Docker host's local disk) if a volume with this name doesn't already exist.

PostgreSQL stores its data in /var/lib/postgresql/data, so we're mounting our volume to that path.

docker run -d -p 5432:5432 -v postgres-data:/var/lib/postgresql/data `
           --name postgres1 postgres

Once we've done this we can check it's running with

docker ps

And view the log output with

docker logs postgres1

Create a database

We'll create a database, and one easy way to do that is to use docker exec to launch an interactive shell inside our postgres1 container, which has the PostgreSQL CLI tools installed. This saves us from needing to have any tools for connecting to and managing PostgreSQL databases installed locally.

docker exec -it postgres1 sh

Inside that shell we can ask it to create a new database with the name mydb.

# createdb -U postgres mydb

And then let's launch the psql utility which is a CLI tool for PostgreSQL, connected to our mydb database:

# psql -U postgres mydb

Explore the database

Now inside psql, let's run some basic commands. \l lists the databases. We'll also ask for the database version, and the current date:

mydb=# \l
mydb=# select version();
mydb=# select current_date;

Now let's do something a bit more interesting. We'll create a table:

mydb=# CREATE TABLE people (id int, name varchar(80));
CREATE TABLE

Then we'll insert a row into the table:

mydb=# INSERT INTO people (id,name) VALUES (1, 'Mark');
INSERT 0 1

And finally, check it's there

mydb=# SELECT * FROM people;
 id | name 
----+------
  1 | Mark
(1 row)

Now we can quit from psql with \q and exit from our shell

mydb=# \q 
# exit

Of course our postgres1 container is still running.

Stop and restart the container

Let's prove that we don't lose the data if we stop and restart the container.

docker stop postgres1
docker start postgres1

And rather than connect again to this container, let's test from another linked container, using the same technique for linking containers we saw in our Redis demo.

docker run -it --rm --link postgres1:pg --name client1 postgres sh

Launch psql, but connect to the other container (-h pg), using the name pg we gave it in our link configuration:

# psql -U postgres -h pg mydb

Now from this client1 container we can access data in the database stored in the postgres1 container:

mydb=# SELECT * FROM people;
 id | name 
----+------
  1 | Mark
(1 row)

Now we can quit from psql and exit from our shell, which will remove the client1 container since we specified the --rm flag to auto-delete the container when the command it was running exits.

mydb=# \q 
# exit

Inspect the volume

We can find out information about the volume that we've created with docker volume inspect, including where on our local disk the data in that volume is being stored. Here's some typical output.

$ docker volume inspect postgres-data
[
    {
        "CreatedAt": "2018-09-03T19:50:23Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/postgres-data/_data",
        "Name": "postgres-data",
        "Options": null,
        "Scope": "local"
    }
]

And if we take a look inside the local folder on our Docker host, we can see all the data that has been stored in that volume.

$ ls /var/lib/docker/volumes/postgres-data/_data/
PG_VERSION            pg_multixact          pg_tblspc
base                  pg_notify             pg_twophase
global                pg_replslot           pg_wal
pg_commit_ts          pg_serial             pg_xact
pg_dynshmem           pg_snapshots          postgresql.auto.conf
pg_hba.conf           pg_stat               postgresql.conf
pg_ident.conf         pg_stat_tmp           postmaster.opts
pg_logical            pg_subtrans           postmaster.pid

Obviously a Docker volume doesn't need to be stored on local disk on the Docker host. In a production environment like Azure, you'd most likely mount an Azure file share as a volume.

Discard the container but keep the data

Let's stop and remove the postgres1 container with a single command (-f forces removal of a running container). Because the data is stored in a volume, it is still safe.

docker rm -f postgres1

Attach an existing volume to a new container

Let's now start up a brand new container called postgres2 but attach the existing postgres-data volume that contains our database:

docker run -d -p 5432:5432 -v postgres-data:/var/lib/postgresql/data --name postgres2 postgres

Once it starts up, let's run a psql session inside it and check that the database, table and data are still all present and correct:

docker exec -it postgres2 sh
# psql -U postgres mydb
mydb=# SELECT * FROM people;
 id | name
----+-------
  1 | Mark
(1 row)

And exit out again:

mydb=# \q
# exit

Clean up everything

And now, let's really clean up. Not only will we remove the postgres2 container, but we'll then remove the postgres-data volume. So now the contents of the database are deleted as well.

docker rm -f postgres2
docker volume rm postgres-data

Summary

As you can see, not only is it easy to use Docker to explore PostgreSQL, but we can also easily configure a volume, allowing the lifetime of the data to be managed independently of the lifetime of the container. If we'd wanted to, we could also have connected directly to this PostgreSQL container on port 5432 and used it for some local development.
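For example, assuming you have the psql client installed on your host machine, you could connect straight to the mapped port (the host, port and database names here are the ones from the container we created above):

```shell
# connect from the host to the containerized database on the mapped port
# (assumes the psql client is installed locally)
psql -h localhost -p 5432 -U postgres -d mydb -c "SELECT * FROM people;"

# equivalently, psql accepts a connection URI:
psql "postgresql://postgres@localhost:5432/mydb" -c "SELECT * FROM people;"
```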

Next up, we'll explore running Elasticsearch in a container, which will give us an opportunity to see docker-compose in action.