
Let me start by wishing you all a happy new year. It's that time once again (previous years: 2018, 2017, 2016, 2015, 2013) when I post a few reflections on the past year and think about what's ahead.

Travel and conferences

2018 was memorable for speaking at my first ever conferences, and visiting the USA for the first time. And in 2019 I was able to do both again. I spoke on Azure Durable Functions at Ignite the Tour London in February, and on Technical Debt at Techorama Netherlands in October. I also went to my second Microsoft MVP Summit in March, and ended up making a second visit to Microsoft in Seattle a couple of months later for some business meetings related to my day job at NICE.

Another highlight was to attend the first ever Pluralsight Author Summit in Europe in May. Despite having been an author for them for many years, I had never been to a Pluralsight event, so it was amazing to meet lots of the staff and other authors for the first time.

Pluralsight courses

A lot of my effort this year went into the release of four Pluralsight courses. I started by completely re-recording my Azure Functions Fundamentals course. It has been my most popular Pluralsight course by a long way, so I wanted to make sure it was as up to date as possible. I completed that in April, and then in July I released an update to my Microsoft Azure Developer: Deploying and Managing Containers course, which included a new module on container security. This course is part of a partnership with Microsoft Learn, which means you can watch it for free even if you're not a Pluralsight subscriber.

Then later in the year, I worked on two courses for a new Microservices Architecture learning path at Pluralsight. First was Microservices Fundamentals released at the end of October, followed by Building Microservices completed in December. They kept me extremely busy over the past four months, so apologies for the reduced blogging output during that period.

Azure

Azure remains a huge focus for me at the moment. My day job as a cloud architect at NICE has given me a constant stream of interesting challenges and new technologies and practices to learn. There are a lot of exciting developments in the world of cloud native computing at the moment, but it can sometimes seem a bit overwhelming trying to keep up with it all.

Sadly, the continued focus on Azure meant that there was very little time to work on any audio related code, although I was able to release NAudio 1.9 back in May.

Plans for 2020

My calendar for the start of 2020 is already starting to fill up, as I've agreed to give two talks at Ignite the Tour London later this month, and I'll be at NDC London helping out on the Microsoft stand. I'd love to meet any of you who are at those events.

In March I'll be back in Seattle for the MVP Summit, and then attending the Pluralsight author summit in London. I'm also going to be updating at least two of my Azure Pluralsight courses to keep them current.

I expect a lot of my content this year will continue to be focused on Azure. I'm currently doing a lot of prototyping with Cosmos DB and AKS, and I'm keeping a close eye on dapr.

But I also hope to keep doing bits and pieces of audio related programming, as well as trying to find more time for making music, which is one of my favourite hobbies. I've reorganized my home office this Christmas to keep my instruments close at hand, allowing me to mix a bit of fun in with work.

[Photo: my desk]

Finally, a huge thank you to everyone who has watched my courses, attended my talks and read my blog. I especially appreciate all the feedback you've given me. It's great to know that the content I am producing is proving helpful for people.



I'm really pleased to announce that my latest Pluralsight course, Building Microservices, is now available. It follows on from my Microservices Fundamentals course, and is part of a new Microservices Architecture learning path at Pluralsight.

The course focuses in particular on three aspects of building microservices.

First, how to structure domain logic. One of the nice things about microservices is that you are free to use different architectural styles and data access patterns in each microservice. That gives you the freedom to use simple patterns where they make sense, and more advanced techniques like CQRS or event sourcing where they would bring the most benefit.

The domain logic pattern names I was asked to use for this course come from the classic book "Patterns of Enterprise Application Architecture" by Martin Fowler, which predates microservices by many years. It was interesting to see how these approaches are still applicable and relevant, despite the architectural landscape having changed a lot.
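None of the following code is from the course itself, but to give a flavour of the simplest of these patterns, here's a minimal sketch of a transaction script, where all the logic for one business operation lives in a single procedural method. The Order, IOrderRepository and RefundService types are hypothetical ones I've made up for illustration:

using System;
using System.Threading.Tasks;

public enum OrderStatus { Paid, Refunded }

public class Order
{
    public Guid Id { get; set; }
    public OrderStatus Status { get; set; }
}

public interface IOrderRepository
{
    Task<Order> Get(Guid orderId);
    Task Save(Order order);
}

// Transaction script: all the logic for one business operation
// lives in a single procedural method, with no rich domain model
public class RefundService
{
    private readonly IOrderRepository _orders;
    public RefundService(IOrderRepository orders) => _orders = orders;

    public async Task RefundOrder(Guid orderId)
    {
        var order = await _orders.Get(orderId);
        if (order.Status != OrderStatus.Paid)
            throw new InvalidOperationException("Only paid orders can be refunded");
        order.Status = OrderStatus.Refunded;
        await _orders.Save(order);
    }
}

The appeal of this style is its simplicity; the trade-off is that as business rules accumulate, logic tends to get duplicated across scripts, which is where a richer domain model starts to pay off.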

Second, how to test microservices. As you'd expect, I emphasise the importance of unit testing and test-driven development, but I don't think they are the whole story. With a microservices architecture you need a broad testing strategy, so I use the concept of the test pyramid to look at the role that integration (or "service-level") tests and end-to-end tests play in the bigger picture.
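To illustrate two layers of the pyramid, here's a minimal sketch using xUnit, reusing the hypothetical types from the transaction script sketch above. The first test is a fast, isolated unit test against an in-memory fake; the second is a service-level test that hosts a (hypothetical) microservice in-process using WebApplicationFactory from the Microsoft.AspNetCore.Mvc.Testing package:

using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class RefundServiceTests
{
    // base of the pyramid: a fast unit test using an in-memory fake repository
    class FakeOrderRepository : IOrderRepository
    {
        readonly Dictionary<Guid, Order> _store = new Dictionary<Guid, Order>();
        public Task<Order> Get(Guid orderId) => Task.FromResult(_store[orderId]);
        public Task Save(Order order) { _store[order.Id] = order; return Task.CompletedTask; }
    }

    [Fact]
    public async Task RefundOrder_SetsStatusToRefunded()
    {
        var repo = new FakeOrderRepository();
        var order = new Order { Id = Guid.NewGuid(), Status = OrderStatus.Paid };
        await repo.Save(order);

        await new RefundService(repo).RefundOrder(order.Id);

        Assert.Equal(OrderStatus.Refunded, (await repo.Get(order.Id)).Status);
    }
}

// middle of the pyramid: a service-level (integration) test that hosts
// the microservice in-process; Startup is the hypothetical service's startup class
public class OrdersApiTests : IClassFixture<WebApplicationFactory<Startup>>
{
    readonly WebApplicationFactory<Startup> _factory;
    public OrdersApiTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task GetOrders_ReturnsSuccess()
    {
        var response = await _factory.CreateClient().GetAsync("/api/orders");
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}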

Finally, how microservices can authenticate and authorize each other. This is of course an extremely important topic, but also a challenging one to cover in a relatively short course. Because I'm again using the eShopOnContainers sample application in this course, I show how its approach of using OAuth and OpenID Connect along with IdentityServer is a really great choice for securing microservices.
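As a flavour of what this can look like (again a sketch, not the course's actual code), an individual ASP.NET Core microservice can validate the JWT access tokens that IdentityServer issues with just a few lines of configuration. The authority and audience values here are hypothetical:

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;

public static class AuthenticationSetup
{
    public static void ConfigureAuth(IServiceCollection services)
    {
        // validate JWT bearer tokens issued by an IdentityServer instance;
        // the authority and audience values below are hypothetical
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority = "https://identity.example.com";
                options.Audience = "orders-api";
            });
    }
}

You'd also need app.UseAuthentication() in the request pipeline, and IdentityServer itself would typically run as its own identity microservice that the other services trust.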

It's impossible not to feel a little bit of "imposter syndrome" when teaching a course on a topic as broad-ranging as microservices. There are many different viable approaches to implementing a microservices architecture, and I only know about a few of them. This course isn't intended to be the final word on how to build microservices, but my hope is that what I've learned so far will be helpful for others starting out on their microservices journey.

If you do watch the course, I'd love to hear your feedback, and what techniques and patterns you're finding most helpful in implementing your own microservices architectures.

Finally, if any of you are UK based, I'll be speaking at the Ignite the Tour conference in London in January 2020, looking in particular at containerized and serverless architectures and how to host them in Azure.



With Docker Desktop, developers using Windows 10 can not only run Windows containers, but also Linux containers.

Windows and Linux container modes

The way this works is that Docker Desktop has two modes you can switch between: Windows containers and Linux containers. To switch, use the right-click context menu on the Docker icon in the system tray:

[Screenshot: switching container mode from the Docker Desktop context menu]

When you're in Linux containers mode, behind the scenes your Linux containers are running in a Linux VM. However, that is set to change in the future thanks to Linux Containers on Windows (LCOW), which is currently an "experimental" feature of Docker Desktop. And the upcoming Windows Subsystem for Linux 2 (WSL 2) also promises to make this even better.

But in this article, I'm using Docker Desktop without these experimental features enabled, so I'll need to switch between Windows and Linux modes.

Mixed container types

What happens if you have a microservices application that needs to use a mixture of Windows and Linux containers? This is often necessary when you have legacy services that can only run on Windows, but at the same time you want to benefit from the smaller size and lower resource requirements of Linux containers when creating new services.

We're starting to see much better support for mixing Windows and Linux containers in the cloud. For example, Azure Kubernetes Service (AKS) now supports multiple node pools, allowing you to add a Windows Server node pool to your cluster.

But if you create an application using a mix of Windows and Linux container types, is it possible to run it locally with Docker Desktop?

The answer is yes, you can. When you switch modes in Docker Desktop, any running containers continue to run. So it's quite possible to have both Windows and Linux containers running locally simultaneously.

Testing it out

To test this out, I created a very simple ASP.NET Core web application. This makes it easy for me to build both Linux and Windows versions of the same application. The web application displays a message showing what operating system the container is running in, and then makes a request to an API on the other container, allowing me to prove that both the Linux and Windows containers are able to talk to each other.

I created the app with dotnet new webapp, which uses Razor Pages, and added a simple Dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj .
RUN dotnet restore

# copy everything else and build app
COPY . .
RUN dotnet publish -c Release -o /out/

FROM mcr.microsoft.com/dotnet/core/aspnet:3.0 AS runtime
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "cross-plat-docker.dll"]

In the main Index.cshtml Razor page, I display a simple message showing the OS version and the message received from the other container.

    <h1 class="display-4">Welcome from @System.Environment.OSVersion.VersionString</h1>
    <p>From @ViewData["Url"]: @ViewData["Message"]</p>

In the code-behind, we get the URL to fetch from configuration, call it, and add its response to the ViewData dictionary.

public async Task OnGet()
{
    var client = _httpClientFactory.CreateClient();
    var url = _config.GetValue("FetchUrl","https://markheath.net/");
    ViewData["Url"] = url;
    try
    {
        var message = await client.GetStringAsync(url);
        ViewData["Message"] = message.Length > 4000 ? message.Substring(0, 4000) : message;
    }
    catch (Exception e)
    {
        _logger.LogError(e, $"couldn't download {url}");
        ViewData["Message"] = e.Message;
    }
}
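For completeness, here's roughly what the rest of the page model looks like. The _httpClientFactory, _config and _logger fields all arrive via standard constructor injection; this assumes services.AddHttpClient() has been registered in Startup.ConfigureServices, which is what makes IHttpClientFactory available:

using System.Net.Http;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

public class IndexModel : PageModel
{
    private readonly IHttpClientFactory _httpClientFactory;
    private readonly IConfiguration _config;
    private readonly ILogger<IndexModel> _logger;

    // all three dependencies come from ASP.NET Core's built-in DI container
    public IndexModel(IHttpClientFactory httpClientFactory,
        IConfiguration config, ILogger<IndexModel> logger)
    {
        _httpClientFactory = httpClientFactory;
        _config = config;
        _logger = logger;
    }

    // ... OnGet and OnGetData as shown in this post
}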

This page also has an additional GET endpoint for the other container to call. It uses a routing feature of ASP.NET Core Razor Pages called named handler methods, which was new to me. If we create a method on our Razor page called OnGetXYZ, then requests to the page's route with the query string ?handler=XYZ will be handled by that method instead of the regular OnGet method.

This allowed me to return some simple JSON.

public IActionResult OnGetData()
{
    return new JsonResult(new[] { "Hello world",
        Environment.OSVersion.VersionString });
}
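With this in place, a GET request to the page's route with ?handler=data returns a small JSON array containing a greeting and the OS version string, which is exactly what each container fetches from the other.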

I've put the whole project up on GitHub if you want to see the code.

Building the containers

To build the Linux container, switch Docker Desktop into Linux mode (you can check the switch has completed by running docker version and looking at the Server OS/Arch), and issue the following command from the folder containing the Dockerfile:

docker image build -t crossplat:linux .

And then to build the Windows container, switch Docker into Windows mode, and issue this command:

docker image build -t crossplat:win .

Running the containers

To run the containers, we need to use docker run and expose a port. I'm setting up the app in each container to listen on port 80, exposed as port 57000 for the Windows container and port 32770 for the Linux container.

But I'm also using an environment variable to tell each container where to find the other. This raises the question of what IP address the Linux and Windows containers should use to communicate with each other.

I tried a few different approaches. localhost doesn't work, and if you try using one of the IP addresses of your machine (as listed by ipconfig) you might be able to find one that works. However, I chose 10.0.75.1, a special IP address used by Docker Desktop (the default DockerNAT address). This worked for me, with both the Windows and Linux containers able to contact each other, but I don't know whether it's the best choice.
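As an aside, Docker Desktop also provides the special DNS name host.docker.internal for containers to reach the host, which might be a more robust alternative to hard-coding an IP address, although I haven't tried it in this mixed-mode scenario.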

With Docker Desktop in Linux mode, I ran the following command to start the Linux container, listening on port 32770 and attempting to fetch data from the Windows container:

docker run -p 32770:80 -d -e ASPNETCORE_URLS="http://+:80" `
    -e FetchUrl="http://10.0.75.1:57000/?handler=data" crossplat:linux

And with Docker Desktop in Windows mode, I ran the following command to start the Windows container, listening on port 57000 and attempting to fetch data from the Linux container:

docker run -p 57000:80 -d -e ASPNETCORE_URLS="http://+:80" `
    -e FetchUrl="http://10.0.75.1:32770/?handler=data" crossplat:win

Results

Here's the Linux container successfully calling the Windows container:

[Screenshot: Linux container calling the Windows container]

And here's the Windows container successfully calling the Linux container:

[Screenshot: Windows container calling the Linux container]

In this post, we've demonstrated that it's quite possible to simultaneously run Windows and Linux containers on Docker Desktop, and for them to communicate with each other.

Apart from the slightly clunky mode switching that's required, it was easy to do (and that mode switching could well go away in the future thanks to LCOW and WSL2).

What this means is that it's very easy for teams that need to work on a mixture of container types to do so locally, as well as in the cloud.