
I know I'm a bit late, but happy new year everyone. It's time for another year in review post.

Pluralsight updates

In the last year I updated six of my Pluralsight courses to keep them up to date with the latest changes. The courses I updated were Azure Durable Functions Fundamentals, Implement Azure Functions AZ-204, Create Serverless Functions, Microsoft Azure Developer: Deploy and Manage Containers, Microservices Fundamentals, and Building Microservices. Of course, Azure is a constantly changing platform, so I expect that there will be more updates to follow this year.

Travel and conferences

It was another year of mostly working from home and attending virtual conferences. This year I gave several talks on microservices at various developer groups (sadly not available online), and I also spoke at the Audio Programmer meetup on the topic of "Is .NET Any Good for Audio?".

Microservices and Azure

My day job as an architect continues to focus heavily on Azure and microservices. Last year involved a lot of thinking about how best to implement multi-tenancy in the cloud, as well as more work on modernizing legacy codebases and moving them further in a microservices direction, which has proved valuable as the development team has grown significantly.

I mentioned last year that I wanted to explore the capabilities of Dapr, and I was able to build a number of prototypes that have helped me understand what it has to offer. I have to say I really like the concept behind Dapr, and hopefully I'll be able to share some of the things I've learned on this blog later in the year.

I've also been introducing Azure Durable Functions into the projects I'm working on at every opportunity. In almost all the microservices I've worked on there are workflows that greatly benefit from the Durable Functions approach to orchestration.

Moving house

Probably the biggest thing for me personally this year was moving house. We'd been in our old house for almost 20 years so it was a big upheaval to move, and meant that my time for blogging, open source and conferences was fairly limited. The great news for me is that I now finally have my own dedicated office rather than having to do all my Pluralsight recording in the bedroom.

Music and audio

For the second year in a row I've been mostly working from home, which has the nice benefit that my instruments are always close at hand, so I could use my lunch breaks for practicing. This year I've been trying to level up a bit on my music theory, with some jazz/gospel chord progressions and extensions finally beginning to make sense!

I was able to launch a new version of NAudio this year with improved .NET Core support, but really my days of doing audio programming on .NET are over, and I'm on the lookout for anyone who'd like to take the project over. Let me know if you'd be interested.

Plans for 2022

Once again it's hard to predict what the new year will hold, but a few things that I suspect will be involved include some more Pluralsight course updates (as well as hopefully a new course - I'll keep you posted!), and more talking at user groups on topics around Microservices, Azure Functions and Dapr. Let me know if you'd like me to speak at your event and I'll see what I can do.

I've also been considering recording a short series of podcast episodes on some development and architectural topics. A lot of my blog ideas are on the kinds of topics that would probably be easier to talk about than write about.

Anyway, thanks to everyone who has followed my blog, watched my courses and talks in the last year. I hope you have a happy and healthy 2022.


Dapr provides a set of "building blocks" that greatly simplify microservice development. I've already made the case for why you should use Dapr for distributed applications, and in this post, I want to explore the options for running locally (particularly with .NET developers in mind).

There's actually quite a lot of choice, and so this post is simply my current understanding of the options available and why you might pick one. At the time of writing Dapr 1.5 has just been released, and I'm sure that over time there will be further improvements to simplify things.

Ideally, if I'm working on a microservices application, I want it to be really easy to run the entire application locally, as well as to test and debug the microservice I'm working on in the context of the whole application.

Choice 1 - One repo or many?

One of the first choices you run into with microservices (regardless of whether you're using Dapr) is whether to put all of your microservices into a single source code repository or have one per microservice.

The advantage of keeping all microservices in one Git repo is that you've just got one thing to clone and all your code is conveniently located in one place, making it easier to find what you're looking for. The disadvantage is that as the number of microservices grows, this repo can become unwieldy. You can also find that developers inadvertently create inappropriate tight coupling between microservices such as adding direct project references to the codebase of another microservice in Visual Studio.

Another tricky challenge is that many CI/CD tools assume a single Git repo means a single asset to build and deploy, but with microservices you want to deploy and release each microservice independently. You may also want to tag and branch them independently in Git, which can get confusing. For that reason, a lot of teams working on microservices gravitate towards separate repos per microservice, especially as the project grows larger.

To be honest I can't say I know what the best approach is here. It seems that the "monorepo" is making a comeback in terms of popularity, and with a few improvements in CI/CD tooling, maybe the inherent difficulties with that approach can be overcome.

Fortunately Dapr will work with either approach, but the choice you make does have some implications for how you will start everything up for local development.

Choice 2 - Self-hosted or containers?

One of the key choices for running Dapr locally is whether you'd prefer your code to be containerized or not. Dapr supports running in "self-hosted" mode, where you simply run your microservice and the Dapr "sidecar" natively on your development machine. Any auxiliary services that implement the building blocks (such as Redis for state stores and pub/sub) can also run locally on your machine, and you might independently decide to containerize those.

But you can go all-in with containers, and have your own code running in containers. Whether you choose this approach will depend on factors like how comfortable your development team are with using tools like Docker Compose or Kubernetes. They'll need to know how to debug code running inside containers. Now that Docker Desktop has become a commercial product, you may also not be able to use it without purchasing licenses for your team.

Containers choice: Docker Compose or Kubernetes?

If you do decide to go with running your microservices locally as containers, there are two approaches I've seen with Dapr. One is to construct a Docker Compose file that has a container for each microservice, plus a Dapr sidecar for each microservice, and any additional services such as Redis and Zipkin. The nice thing about this is that the Docker Compose file can either point at the source code for each microservice, or can reference pre-built images in a Docker registry, meaning that if you only care about working on a single microservice, you don't need to build the code for all the others.

The disadvantage of the Docker Compose method at the moment is that it requires a bit of expertise with the Docker Compose syntax to set it up properly. You need to ensure you correctly set up ports and networking so everything can talk to each other on the expected host name ("localhost" gets particularly confusing), and you will also need to correctly map your Dapr component definitions into the right place. Of course, once you've got it working for the first time, things become easier. But I did find myself taking a lot longer than I hoped to get this running when I first tried it (due mostly to silly mistakes).

Here's a snippet of a Docker Compose file I set up for a demo application I have been using to explore Dapr. It shows one microservice called "frontend" along with the definition I'm using for the Dapr sidecar.

    services:
      frontend:
        image: ${DOCKER_REGISTRY-}frontend
        build:
          context: .
          dockerfile: frontend/Dockerfile
        environment:
          - DAPR_HTTP_PORT=3500
        networks:
          - globoticket-dapr

      frontend-dapr:
        image: "daprio/daprd:1.5.0"
        command: [
          "-app-id", "frontend",
          "-app-port", "80",
          "-components-path", "/components",
          "-config", "/config/config.yaml"
        ]
        volumes:
          - "./dapr/dc-components/:/components"
          - "./dapr/dc-config/:/config"
        depends_on:
          - frontend
        network_mode: "service:frontend"

If you'd like to see a full example of a Docker Compose file that can be used for Dapr, then this one which is part of the eShopOnDapr sample application would be a good choice.

The alternative is to run your containerized microservices on Kubernetes. This has a lot of advantages. First, if you're also using Kubernetes in production, then you've minimised the difference between development and production environments, which is always a good thing. Second, the Dapr CLI contains a number of helpful tools for installing Dapr onto a Kubernetes cluster and provides a dashboard. Third, if you run on Kubernetes, you can choose to use the single-node Kubernetes cluster managed by Docker Desktop, or point at a cloud-hosted or shared cluster.
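As a sketch (not run here; these commands need the Dapr CLI installed and a working kubectl context), installing Dapr into a local cluster looks like this:

```shell
dapr init -k        # install the Dapr control plane into the current Kubernetes cluster
dapr status -k      # check that the Dapr system services are up and running
dapr dashboard -k   # port-forward to the Dapr dashboard running in the cluster
```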

The main disadvantage of the Kubernetes approach again is the level of knowledge required by developers. Kubernetes is extremely powerful but can be perplexing, and it takes some time to become familiar with the format of the YAML files needed to define your deployments. Developers would need to understand how to debug code running in a Kubernetes cluster.

I'm hopeful that the Dapr tooling will improve in the future to the point that it can intelligently scaffold a Docker Compose file for you. It's possible that there is something already available that I don't know about, so let me know in the comments.

Self-hosted choice: startup scripts or sidekick?

If you choose the self-hosted route, then for every microservice you start locally, you also need to start a Dapr sidecar process. The easy way to do this is to write a script that calls dapr run, passing in the various port numbers and the locations of the Dapr component definitions and configuration, followed by whatever command starts up your microservice (in my case dotnet run). Then you just run this script for every microservice in your application, and attach your debugger to the process of the app you're working on.

Here's an example of a PowerShell script I use to start one of the microservices in my demo application:

dapr run `
    --app-id frontend `
    --app-port 5266 `
    --dapr-http-port 3500 `
    --components-path ../dapr/components `
    dotnet run

There is, however, another nice option I discovered when watching the recent (and excellent) DaprCon conference. The "Dapr sidekick" project is a community-created utility that allows your application to automatically launch the Dapr sidecar process on startup (plus some additional nice features such as restarting the sidecar if it goes down). This would be a particularly great option if you're using Visual Studio for development, as it would simplify the task of starting up the microservices and automatically attaching the debugger. It would also make a lot of sense if you were running "self-hosted" Dapr in production (which I think was one of the key motivations for creating Dapr sidekick).

Choice 3 - Visual Studio Code or Visual Studio?

If like me you're a .NET developer, then the two main development environments you're likely to be choosing between are Visual Studio 2022 and VS Code.

Visual Studio Code has the advantage of being cross-platform, so would make sense if some or all of your team aren't using Windows. And there is a VS Code Dapr extension that comes with a bunch of helpful convenience features like scaffolding Dapr debugging tasks and components, and interacting with some of the building blocks. This makes VS Code an excellent choice for working on Dapr projects.

However, your dev team may be more familiar with Visual Studio, so I also tried developing with Dapr in Visual Studio 2022. The main challenge I found for running in self-hosted mode is that VS2022 doesn't seem to offer an easy way to use dapr run instead of dotnet run to start up services. As mentioned above, Dapr sidekick is a potentially good solution to this. I also tried the Docker Compose approach in VS2022. Visual Studio can automatically scaffold Dockerfiles and Docker Compose orchestration files for you, which gives you a great start and simplifies your work considerably. You do unfortunately have to add in all the sidecars yourself, and make sure you get the networking right. After several failed attempts I finally got it working, so it is possible, and the advantage of this approach is that you can put breakpoints in any of your microservices and hit them automatically.

Choice 4 - Entirely local or shared cloud resource?

The final choice I want to discuss in this post is whether you want to run all your microservices (and all the Dapr component services) locally on your development machine. There are advantages of doing so - you don't incur any cloud costs, and you have your own sandboxed environment. But as a microservices application grows larger, you may find that the overhead of running the entire thing on a single developer machine is using too much RAM.

One way of reducing the resources needed to run locally is for all your dependent services such as databases and service buses to be hosted elsewhere. If you are accessing these via Dapr building blocks, then it's a trivial configuration change to point them at cloud resources.
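For example (component and account names here are illustrative, following the Dapr component schema), a state store component only needs its type and metadata changed to move from local Redis to Azure Blob Storage - the application code calling the state API stays the same:

```yaml
# dapr/components/statestore.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  # for local development this might be type: state.redis with redisHost: localhost:6379
  type: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: mydevstorageaccount
  - name: containerName
    value: statestore
  - name: accountKey
    value: "<fetched-from-a-secret-store, not checked in>"
```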

But you might want to go one step further and start cloud-hosting some of the microservices themselves. However, I'm not sure that the Dapr service invocation components have particularly strong support for a hybrid mode yet (where some microservices run locally and others elsewhere), so it might make more sense to use a cloud-hosted Kubernetes cluster to run the whole thing, and then debug into that. One interesting option is to make use of "Bridge to Kubernetes", which allows you to run your microservice locally but all the other microservices in Kubernetes, and automatically handles the correct routing of traffic between them. Check out this demo from Jessica Deen to see this in action with Dapr and Visual Studio Code.

Other options

There are a few other options worth exploring. One is Project Tye, a very promising proof-of-concept project that is particularly good at simplifying starting up many microservices. I think it could work well with Dapr (and there is a sample showing Tye integrated with Dapr), but Tye is still considered "experimental" at the moment. Hopefully it will continue to develop, or the good ideas from Tye will be incorporated into other tools.

The second is a new Azure service, Azure Container Apps, which is currently in preview. It is a very interesting service that simplifies hosting containerized microservices and offers a serverless billing model. Under the hood it uses Kubernetes, but the complexity is abstracted away from you. And it comes with built-in support for Dapr - you just specify that you want to enable Dapr and the sidecars are automatically injected. I'm quite excited by this service, and, assuming it's not too hard to debug into, it could be a great option for development as well as production.


One gotcha I ran into with running Dapr locally, is that the built-in service discovery mechanism can conflict with VPNs and other security software in corporate environments. There's an open issue on the Dapr GitHub project to offer a simple way of working round this problem (currently you need to use Consul).
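For reference, here's roughly what that workaround looks like - the name resolution component is switched in the Dapr configuration file (this sketch assumes a Consul agent running locally, and follows the Dapr configuration schema):

```yaml
# config.yaml - use Consul instead of the default mDNS name resolution
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  nameResolution:
    component: "consul"
    configuration:
      selfRegister: true
```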


I really like what Dapr has to offer in terms of simplifying microservice development, but if you want to use it you will need to take some time to decide which mode of local development works best for you and your team. Are you using Dapr for a microservices project? I'd be really interested to hear what choices you've made for running locally.


Most developers know the rule that you shouldn't check secrets such as database connection strings or API keys into code. And to be fair, it's been a while since I've seen a production secret stored in source code. But often when developers are writing dev/test utilities, they can be tempted to relax the rules. For example, in production we might fetch a connection string from an environment variable, but when developing locally, we use a hard-coded fallback value that points at a shared cloud-hosted resource.

In this post I want to highlight a variety of simple tools and techniques .NET and Azure developers can use to completely eliminate secrets from source code.

Use command line parameters

First off, a nice and simple one. If you're writing a console application, allow secrets to be passed in as command line parameters. Give a nice error message if the secret isn't passed in. And to make life even easier, consider writing a PowerShell script that fetches the secret (more on how to do that later) and passes it in for you.
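As a minimal sketch of this pattern (the tool name and messages are hypothetical), a launcher can fail fast with a helpful message instead of falling back to a hard-coded value:

```shell
# Hypothetical launcher: require the secret as the first argument, fail fast otherwise.
run_tool() {
  if [ -z "$1" ]; then
    echo "Error: pass the API key as the first argument (fetch it from your secret store first)" >&2
    return 1
  fi
  echo "starting with supplied API key"
}

run_tool "abc123"       # prints: starting with supplied API key
run_tool "" || true     # prints the error message to stderr
```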

Use environment variables

Environment variables are arguably an even better way to get secrets into your code. They can be used with automated testing frameworks like NUnit, and are the default way of making secrets available in a variety of environments including containers and Azure App Service. By using environment variables for your secrets in development, you're also minimising the difference between how your dev and production environments work which is always a good thing.
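Here's a tiny sketch of the fail-fast side of this (the variable name and value are illustrative; in real use the export would come from your shell profile, a fetch script, or CI, never a hard-coded value):

```shell
# Simulate the environment being configured (illustrative value only)
export MYAPP_CONNECTION_STRING="Server=localhost;Database=dev"

# The app (or its startup script) insists the variable is present before running
: "${MYAPP_CONNECTION_STRING:?MYAPP_CONNECTION_STRING is not set - see the README}"
echo "connection string found in environment"
```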

Use .NET user secrets

.NET comes with a nice capability called "user secrets", which is intended for helping you manage development secrets in ASP.NET Core, but is not limited to ASP.NET Core. You initialize it on a project with a call to:

dotnet user-secrets init

This updates your .csproj file with a UserSecretsId GUID. Then you can store a secret from the command line like this:

dotnet user-secrets set "MyApp:MySecret" "abc123"

Visual Studio has built-in tooling to simplify working with user secrets (right-click the project in Solution Explorer, and select "Manage User Secrets").

In code, when you're running in a development environment, ASP.NET Core automatically adds the user secrets for your application to IConfiguration, making them trivial to access.

If you want to use user secrets in an application that doesn't make use of HostBuilder, you can just reference the Microsoft.Extensions.Configuration.UserSecrets NuGet package and create an IConfiguration object to access them. Here's a very simple example:

var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();
var userSecret = config["AppSecrets:MySecret"];

Configuration JSON file

Another option is to have a configuration JSON file that you enter your secrets into but don't check into source control (using a .gitignore entry to exclude it).

Azure Functions takes this approach with its local.settings.json file. This works OK, but it does require anyone cloning your repo to manually set up their own local.settings.json file, so unless you're using Azure Functions, I would generally avoid this approach.
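For reference, a typical local.settings.json for a .NET function app looks something like this (the values here are illustrative) - the Functions tooling excludes it from source control when it scaffolds the .gitignore:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SendGridApiKey": "<your-dev-key-here>"
  }
}
```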

Good README and error messages

Of course, one of the reasons that we tend to hard-code secrets is that we want the start-up experience for a new developer to be as simple as possible. We want them to clone the code and get going straight away.

If your app needs secrets to be configured before it can run, make sure you include good instructions in the README explaining how to fetch the values and put them in the right place for the application to access. And provide good error messages that tell developers exactly what's wrong if they haven't set the secrets up correctly.

Even better, make developers' lives as easy as possible by automating the fetching of secrets...

Fetch secrets with the Azure CLI

If you need to fetch secrets like Azure Service Bus or Azure Storage connection strings, then my favourite way to do so is with the Azure CLI. It's usually pretty easy to query resources for the secrets you need. Here's an example from my post on managing blob storage that fetches a Storage Account connection string.

$connectionString=az storage account show-connection-string -n $storageAccount -g $resourceGroup --query connectionString -o tsv

Once you've retrieved it, you can easily set it as an environment variable, or pass it as a command line parameter to your application.

Use Azure Key Vault

Of course, not all the secrets you need can be fetched directly with something like the Azure CLI. For example, maybe you have an API key for a service like SendGrid, or a password for an admin account on a VM. In that case, I'd recommend storing it in Azure Key Vault, but you can use any similar secret store.

Again the Azure CLI makes it really easy to retrieve secret values from the Key Vault:

$mysecret = az keyvault secret show --vault-name mykeyvault --name mysecret --query value -o tsv

Use managed identities

Of course, even better than keeping secret connection strings out of code is to have the connection string not contain secrets at all. And that's what managed identities allow us to do. For example, managed identities let you connect to an Azure SQL Database using Active Directory authentication. Your connection string doesn't need to include a password, and therefore is no longer technically a "secret":

Server=my-sql-server.database.windows.net,1433;Database=my-database;Authentication=Active Directory Default

It's a little bit of extra work to set this up and grant the correct AD identities permission to access the resource, but again it's something you can automate, so once you've done it once, it's easy to do in the future.

Many of the new Azure SDKs support this mode of connecting, and for local development you can use DefaultAzureCredential, which uses a variety of techniques to get hold of your identity, including using the Azure CLI if you've logged in with az login. Find out more about how it works here (and here's an article I wrote showing this technique in action).
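As a sketch in C# (the storage account URL is illustrative; this uses the Azure.Identity and Azure.Storage.Blobs packages), connecting without any secret in your code looks like this:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential tries several sources in turn: environment variables,
// managed identity, Visual Studio sign-in, and the Azure CLI (az login).
var blobServiceClient = new BlobServiceClient(
    new Uri("https://mydevstorageaccount.blob.core.windows.net"),
    new DefaultAzureCredential());
```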

Auto-generate and rotate passwords

Of course, if you have taken the trouble to follow the advice I've given and automate the lookup of secrets and passwords, then there's no reason for them to be reused, "well-known" values. Reusing secrets is something I've unfortunately seen too often in development teams, where everything has the same password to make life easier. Once you've automated the process of fetching the password, you are free to use randomly generated strong passwords for everything, and rotate them freely, knowing developers will automatically pick up the latest version the next time they run.

Keep hard-coded secrets out of build and deploy pipelines

I've been focusing in this post on the local development environment, but another place hard-coded secrets can sneak in is build and deploy pipelines, as they often need to connect to various online resources to store or retrieve assets like NuGet packages or container images. Whether you're using TeamCity, Azure Pipelines or GitHub Actions, all of these provide a way for you to securely store secrets and make them available to your build scripts.
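For example, in GitHub Actions a secret configured in the repository settings surfaces as an environment variable and never appears in the workflow file itself (the step and secret names here are illustrative):

```yaml
steps:
  - name: Push NuGet package
    run: dotnet nuget push *.nupkg --api-key "$NUGET_API_KEY" --source https://api.nuget.org/v3/index.json
    env:
      NUGET_API_KEY: ${{ secrets.NUGET_API_KEY }}
```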

Bonus - LINQPad Password Manager

As a bonus extra, I'm a big fan of LINQPad, which is a great tool for creating simple experimental scripts. Often in a LINQPad script you are connecting out to an external resource, and so again there is a big temptation to just hard-code a password or secret. But there's no need: LINQPad has a "password manager" that can securely store your secrets for you. In the script, just call Util.GetPassword("MySecretName"). This will return the stored secret with that name, or prompt you to provide one if it's not available.


The temptation to hard-code a secret is great, but there are plenty of good alternatives available to you. There really is no excuse to check secrets into source control anymore. Did I miss any useful techniques? Let me know in the comments.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.