
It's very common for .NET developers to use a local SQL database during development. Quite often that's SQL Server Express with its convenient and lightweight "LocalDB" feature. This saves you from having to connect to an external SQL server shared by other developers, or pay for a cloud-hosted database.

However, if you have Docker Desktop installed on your machine (which I highly recommend), you also have the option of running your database in a container.

Why run in a container?

There are a number of benefits to the approach of running your databases in containers rather than directly on your developer machine.

First, there are no prerequisites other than Docker. You don't need SQL Server Express installed, and in fact, with Docker multi-stage builds and Visual Studio Code Remote Containers, you could even develop and run your whole application locally with nothing more than Docker installed.

Second, if you are running other containers, they can't easily connect out to a SQL Express LocalDB running on your developer machine. Running your database as another container solves this problem, as containers hosted on the same computer can easily communicate with each other.

Third, running databases as containers makes it very easy to switch between different versions of the database by mounting different "volumes" (which we'll discuss later), or to create throwaway databases for testing, where you discard your changes once you've finished.

Fourth, even if you don't plan to use containerized databases in production (I normally use Azure SQL Database), if you have a CI/CD process that runs "integration tests" where your application is tested talking to a real database, having the ability for any Docker host to run those containers is extremely convenient as now the server running those tests only needs to have Docker installed.

Fifth, the containerized approach to application dependencies is extremely beneficial to microservices development, and once you've got started using it for one dependency such as a database, you'll quickly want to use it for all your dependencies such as Redis, RabbitMQ, Elasticsearch etc.

Let's see how to get set up

Step 1 - Running SQL Server in a Container

You'll need Docker Desktop installed and running in Linux containers mode to follow these instructions.

We'll use the docker run command, with -d to run in the background, --name to give our container a meaningful name, -p to expose port 1433, and two environment variables set with -e: one to accept the EULA and one to set the SA password (which must meet SQL Server's password complexity requirements). The image we'll run is mcr.microsoft.com/mssql/server:2019-latest

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Your_password123" -p 1433:1433 --name customerdb mcr.microsoft.com/mssql/server:2019-latest

With that done, we will have a SQL Server running locally as a container. You can stop it with docker stop customerdb or completely delete the whole thing with docker rm -f customerdb.
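SQL Server can take a few seconds to initialize inside the container, so it's worth checking it's actually up before you try to connect. One quick sanity check (assuming the customerdb container name used above) is to look at the container's logs, where SQL Server prints a "ready for client connections" message once startup is complete:

```shell
# confirm the container is still running
docker ps --filter name=customerdb

# follow the SQL Server startup logs (Ctrl+C to stop following);
# look for "SQL Server is now ready for client connections"
docker logs -f customerdb
```

If the container exited immediately, docker logs will usually show why; a common cause is a SA password that doesn't meet the complexity requirements.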

Step 2 - Connecting to the database

With SQL Server Express LocalDB, your connection strings look something like this (the exact instance name can vary):

Server=(localdb)\MSSQLLocalDB;Database=CustomerDB;Trusted_Connection=True;
Our Docker SQL container is visible on localhost, so our connection string will look like this (I've not included Database=CustomerDB; because we haven't created that database yet).

Server=localhost;User Id=sa;Password=Your_password123;
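Before writing any application code, you can sanity-check connectivity with the sqlcmd tool that ships inside the mssql/server image (under /opt/mssql-tools/bin in the 2019 image). For example, assuming you've put the SA password you chose into a SA_PASSWORD shell variable:

```shell
# run a test query against the containerized server,
# using the sqlcmd tool bundled inside the image
docker exec -it customerdb /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U sa -P "$SA_PASSWORD" \
    -Q "SELECT @@VERSION"
```

If that prints the SQL Server version banner, the server is up and the credentials are correct.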

Here's some example code using Dapper that can connect to our server, create a database, create a table, add a record and query it:

// n.b. you will need the Dapper and System.Data.SqlClient NuGet packages to run this
using System;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

var connectionString = "Server=localhost;User Id=sa;Password=Your_password123;";
var createDBSQL = @"
IF NOT EXISTS(SELECT * FROM sys.databases WHERE name = 'CustomerDB')
        CREATE DATABASE CustomerDB";
var createTableSQL = @"
        CREATE TABLE [dbo].[Customers] (
            [CustomerID] INT            IDENTITY (1, 1) NOT NULL,
            [FirstName]  NVARCHAR (MAX) NULL,
            [LastName]   NVARCHAR (MAX) NULL,
            [Email]      NVARCHAR (MAX) NULL,
            CONSTRAINT [PK_dbo.Customers] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
        );";

var insertSQL = "INSERT INTO Customers (FirstName, LastName, Email) VALUES ('Mark', 'Heath', 'mark@example.com');";

using (var db = new SqlConnection(connectionString))
{
    db.Execute(createDBSQL);
    Console.WriteLine("Created DB");
    db.Execute("USE CustomerDB");
    db.Execute(createTableSQL);
    Console.WriteLine("Created Table");
    var affectedRows = db.Execute(insertSQL);
    Console.WriteLine("Added user");
    var customers = db.Query<Customer>("SELECT * FROM Customers").ToList();
    foreach (var c in customers)
    {
        Console.WriteLine($"{c.CustomerID} {c.FirstName} {c.LastName}");
    }
}

public class Customer
{
    public int CustomerID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}
Step 3 - Using volumes

Now, if we stop our database with docker stop customerdb and then restart it with docker start customerdb, the table and data in the database will be retained. But if we remove the container with docker rm -f customerdb then any data is lost. What if we want to keep our data, but not the container?

We can do that with volumes. Let's add one more parameter to our docker run command. This is -v customervol:/var/opt/mssql, which will "mount" a "volume" to our container. This volume is called customervol and will be created automatically if it doesn't exist. We'll then mount that to the location /var/opt/mssql inside our container. This is where SQL Server writes our data, so this will put that data into the volume.

docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Your_password123" -p 1433:1433 --name customerdb -v customervol:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest

Now we can connect and run the same commands as before, but this time if we remove the container with docker rm -f customerdb our volume will still exist. We can see the volumes we have with docker volume ls and delete the volume with docker volume rm customervol if we want to get rid of it.

The great thing about volumes is that we can now start a brand new container, and mount the same volume. All of the contents of our database will be there. Not only that, but we could have several volumes representing different versions of the database, or databases pre-seeded with certain starting data, and easily switch between them.
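As a sketch of that idea (the customervol-v1 and customervol-v2 volume names here are hypothetical, and SA_PASSWORD is assumed to hold your SA password), you could keep two copies of the database in separate volumes and point a fresh container at whichever one you need:

```shell
# start a container against the "v1" copy of the data
docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=$SA_PASSWORD" \
    -p 1433:1433 --name customerdb -v customervol-v1:/var/opt/mssql \
    mcr.microsoft.com/mssql/server:2019-latest

# later: throw the container away (the volume survives)...
docker rm -f customerdb

# ...and start a fresh container against the "v2" copy instead
docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=$SA_PASSWORD" \
    -p 1433:1433 --name customerdb -v customervol-v2:/var/opt/mssql \
    mcr.microsoft.com/mssql/server:2019-latest
```

Because each named volume is created automatically on first use, seeding a new "version" is just a matter of running against a fresh volume name and loading the data you want.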
