
Dapr provides a set of "building blocks" that greatly simplify microservice development. I've already made the case for why you should use Dapr for distributed applications, and in this post, I want to explore the options for running locally (particularly with .NET developers in mind).

There's actually quite a lot of choice, and so this post is simply my current understanding of the options available and why you might pick one. At the time of writing Dapr 1.5 has just been released, and I'm sure that over time there will be further improvements to simplify things.

Ideally, if I'm working on a microservices application, I want it to be really easy to run the entire application locally, as well as to test and debug the microservice I'm working on in the context of the whole application.

Choice 1 - One repo or many?

One of the first choices you run into with microservices (regardless of whether you're using Dapr) is whether to put all of your microservices into a single source code repository or have one per microservice.

The advantage of keeping all microservices in one Git repo is that you've just got one thing to clone and all your code is conveniently located in one place, making it easier to find what you're looking for. The disadvantage is that as the number of microservices grows, this repo can become unwieldy. You can also find that developers inadvertently create inappropriate tight coupling between microservices such as adding direct project references to the codebase of another microservice in Visual Studio.

Another tricky challenge is that many CI/CD tools assume a single Git repo means a single asset to build and deploy. But with microservices you want to deploy and release each microservice independently. You may also want to tag and branch them independently in Git, which can get confusing. For that reason, a lot of teams working on microservices gravitate towards separate repos per microservice, especially as the project grows much larger.

To be honest, I can't say I know what the best approach is here. It seems that the "monorepo" is making a comeback in terms of popularity, and with a few improvements in CI/CD tooling, maybe the inherent difficulties with that approach can be overcome.

Fortunately Dapr will work with either approach, but the choice you make does have some implications for how you will start everything up for local development.

Choice 2 - Self-hosted or containers?

One of the key choices for running Dapr locally is whether you'd prefer your code to be containerized or not. Dapr supports running in "self-hosted" mode, where you simply run your microservice and the Dapr "sidecar" natively on your development machine. Any auxiliary services that implement the building blocks (such as Redis for state stores and pub sub) can also run locally on your machine, and you might independently decide to containerize them.

But you can go all-in with containers, and have your own code running in containers. Whether you choose this approach will depend on factors like how comfortable your development team are with using tools like Docker Compose or Kubernetes. They'll need to know how to debug code running inside containers. Now that Docker Desktop has become a commercial product, you may also not be able to use it without purchasing licenses for your team.

Containers choice: Docker Compose or Kubernetes?

If you do decide to go with running your microservices locally as containers, there are two approaches I've seen with Dapr. One is to construct a Docker Compose file that has a container for each microservice, plus a Dapr sidecar for each microservice, and any additional services such as Redis and Zipkin. The nice thing about this is that the Docker Compose file can either point at the source code for each microservice, or can reference pre-built images in a Docker registry, meaning that if you only care about working on a single microservice, you don't need to build the code for all the others.

The disadvantage of the Docker Compose method at the moment is that it requires a bit of expertise with the Docker Compose syntax to set it up properly. You need to ensure you correctly set up ports and networking so everything can talk to each other on the expected host name ("localhost" gets particularly confusing), and you will also need to correctly map your Dapr component definitions into the right place. Of course, once you've got it working for the first time, things become easier. But I did find myself taking a lot longer than I hoped to get this running when I first tried it (due mostly to silly mistakes).

Here's a snippet of a Docker Compose file I set up for a demo application I have been using to explore Dapr. It shows one microservice called "frontend" along with the definition I'm using for the Dapr sidecar.

  frontend:
    image: ${DOCKER_REGISTRY-}frontend
    build:
      context: .
      dockerfile: frontend/Dockerfile
    environment:
      - DAPR_HTTP_PORT=3500
    networks:
      - globoticket-dapr

  frontend-dapr:
    image: "daprio/daprd:1.5.0"
    command: [
      "./daprd",
      "-app-id", "frontend",
      "-app-port", "80",
      "-components-path", "/components",
      "-config", "/config/config.yaml"
    ]
    volumes:
      - "./dapr/dc-components/:/components"
      - "./dapr/dc-config/:/config"
    depends_on:
      - frontend
    network_mode: "service:frontend"

If you'd like to see a full example of a Docker Compose file that can be used for Dapr, then this one which is part of the eShopOnDapr sample application would be a good choice.

The alternative is to use Kubernetes to run your Dapr containers. This has a lot of advantages. First, if you're also using Kubernetes in production, then you've minimised the difference between development and production environments, which is always a good thing. Second, the Dapr CLI contains a number of helpful tools for installing Dapr onto a Kubernetes cluster and provides a dashboard. Third, if you run on Kubernetes, you can choose to use the single-node Kubernetes cluster managed by Docker Desktop, or point at a cloud-hosted or shared cluster.

The main disadvantage of the Kubernetes approach again is the level of knowledge required by developers. Kubernetes is extremely powerful but can be perplexing, and it takes some time to become familiar with the format of the YAML files needed to define your deployments. Developers would need to understand how to debug code running in a Kubernetes cluster.

I'm hopeful that the Dapr tooling will improve in the future to the point that it can intelligently scaffold a Docker Compose file for you. It's possible that there is something already available that I don't know about, so let me know in the comments.

Self-hosted choice: startup scripts or sidekick?

If you choose the self-hosted route, then for every microservice you start locally, you also need to start a Dapr sidecar process. The easy way to do this is to write a script that calls dapr run, passing in the various port numbers and the locations of the Dapr component definitions and configuration, and then launches whatever starts your microservice (in my case dotnet run). Then you just run this script for every microservice in your application, and attach your debugger to the process of the app you're working on.

Here's an example of a PowerShell script I use to start one of the microservices in my demo application:

dapr run `
    --app-id frontend `
    --app-port 5266 `
    --dapr-http-port 3500 `
    --components-path ../dapr/components `
    dotnet run

There is, however, another nice option I discovered when watching the recent (and excellent) DaprCon conference. The "Dapr Sidekick" project is a community-created utility that allows your application to automatically launch the Dapr sidecar process on startup (plus some additional nice features such as restarting the sidecar if it goes down). This would be a particularly great option if you're using Visual Studio for development, as it would simplify the task of starting up the microservices and automatically attaching the debugger. It would also make a lot of sense if you were running "self-hosted" Dapr in production (which I think was one of the key motivations for creating Dapr Sidekick).
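
To give an idea of how little code is involved, here's a minimal sketch of wiring Dapr Sidekick into an ASP.NET Core app. I'm assuming the Man.Dapr.Sidekick.AspNetCore NuGet package and its AddDaprSidekick extension method here, so check the project's README for the exact package and method names.

var builder = WebApplication.CreateBuilder(args);

// Assumed API: registers a hosted service that starts the daprd sidecar
// when the app starts and restarts it if it stops.
builder.Services.AddDaprSidekick(builder.Configuration);

var app = builder.Build();
app.MapGet("/", () => "Hello from a Dapr-enabled service");
app.Run();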

Choice 3 - Visual Studio Code or Visual Studio?

If like me you're a .NET developer, then the two main development environments you're likely to be choosing between are Visual Studio 2022 and VS Code.

Visual Studio Code has the advantage of being cross-platform, so would make sense if some or all of your team aren't using Windows. And there is a VS Code Dapr extension that comes with a bunch of helpful convenience features like scaffolding Dapr debugging tasks and components, and interacting with some of the building blocks. This makes VS Code an excellent choice for working on Dapr projects.

However, your dev team may be more familiar with Visual Studio, so I also tried developing with Dapr in Visual Studio 2022. The challenge I found with self-hosted mode was that VS2022 doesn't seem to offer an easy way to use dapr run instead of dotnet run to start up services. As mentioned above, Dapr Sidekick is a potentially good solution to this. I also tried the Docker Compose approach in VS2022. Visual Studio can automatically scaffold Dockerfiles and Docker Compose orchestration files for you, which gives you a great start and simplifies your work considerably. You do unfortunately have to add in all the sidecars yourself, and make sure you get the networking right. After several failed attempts I finally got it working, so it is possible, and the advantage of this approach is that you can just put breakpoints in any of your microservices and hit them automatically.

Choice 4 - Entirely local or shared cloud resource?

The final choice I want to discuss in this post is whether you want to run all your microservices (and all the Dapr component services) locally on your development machine. There are advantages of doing so - you don't incur any cloud costs, and you have your own sandboxed environment. But as a microservices application grows larger, you may find that the overhead of running the entire thing on a single developer machine is using too much RAM.

One way of reducing the resources needed to run locally is for all your dependent services such as databases and service buses to be hosted elsewhere. If you are accessing these via Dapr building blocks, then it's a trivial configuration change to point them at cloud resources.

But you might want to go one step further and start cloud-hosting some of the microservices themselves. However, I'm not sure that the Dapr service invocation components have particularly strong support for a hybrid mode yet (where some microservices run locally and others elsewhere), so it might make more sense to use a cloud-hosted Kubernetes cluster to run the whole thing, and then debug into that. One interesting option is to make use of "Bridge to Kubernetes", which allows you to run your microservice locally but all the other microservices in Kubernetes, and automatically handles the correct routing of traffic between them. Check out this demo from Jessica Deen to see this in action with Dapr and Visual Studio Code.

Other options

There are a few other possible options worth exploring. One is project Tye which is a very promising proof-of-concept project that is particularly good at simplifying starting up many microservices. I think it could work well with Dapr (and there is a sample showing Tye integrated with Dapr), but Tye is still considered "experimental" at the moment. Hopefully it will continue to develop, or the good ideas from Tye can be incorporated into other tools.

The second is a new Azure service, Azure Container Apps, which is currently in preview. It is a very interesting service that simplifies hosting containerized microservices and offers a serverless billing model. Under the hood it uses Kubernetes, but the complexity is abstracted away from you. And it comes with built-in support for Dapr - you just specify that you want to enable Dapr and the sidecars will automatically be injected. I'm quite excited by this service, and assuming it's not too hard to debug into, it could be a great option for development as well as production.

Gotchas

One gotcha I ran into when running Dapr locally is that the built-in service discovery mechanism can conflict with VPNs and other security software in corporate environments. There's an open issue on the Dapr GitHub project to offer a simple way of working around this problem (currently you need to use Consul).

Summary

I really like what Dapr has to offer in terms of simplifying microservice development, but if you want to use it you will need to take some time to decide which mode of local development works best for you and your team. Are you using Dapr for a microservices project? I'd be really interested to hear what choices you've made for running locally.



Most developers know the rule that you shouldn't check secrets such as database connection strings or API keys into code. And to be fair, it's been a while since I've seen a production secret stored in source code. But often when developers are writing dev/test utilities, they can be tempted to relax the rules. For example, in production we might fetch a connection string from an environment variable, but when developing locally, we use a hard-coded fallback value that points at a shared cloud-hosted resource.

In this post I want to highlight a variety of simple tools and techniques .NET and Azure developers can use to completely eliminate secrets from source code.

Use command line parameters

First off, a nice and simple one. If you're writing a console application, allow secrets to be passed in as command line parameters. Give a nice error message if the secret isn't passed in. And to make life even easier, consider writing a PowerShell script that fetches the secret (more on how to do that later) and passes it in for you.
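
As a rough sketch (the tool name and argument layout are just illustrative), a console app can validate its arguments up front and fail with a friendly message:

// Illustrative console app (top-level statements): expects an API key as
// its first argument and fails fast with a helpful message if it's missing.
if (args.Length < 1 || string.IsNullOrWhiteSpace(args[0]))
{
    Console.Error.WriteLine("Usage: mytool <api-key>");
    Console.Error.WriteLine("Tip: fetch the key with a script (see later in this post) and pass it in.");
    return 1;
}

var apiKey = args[0];
// ... use apiKey to call the external service ...
return 0;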

Use environment variables

Environment variables are arguably an even better way to get secrets into your code. They can be used with automated testing frameworks like NUnit, and are the default way of making secrets available in a variety of environments including containers and Azure App Service. By using environment variables for your secrets in development, you're also minimising the difference between how your dev and production environments work which is always a good thing.
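
For example, here's a minimal sketch that reads a connection string from an environment variable and fails with a clear message if it hasn't been set (the variable name is just an example):

// Read the connection string from an environment variable; never hard-code
// a fallback value that points at a real resource.
var connectionString = Environment.GetEnvironmentVariable("MYAPP_SERVICEBUS_CONNECTION");
if (string.IsNullOrEmpty(connectionString))
{
    throw new InvalidOperationException(
        "The MYAPP_SERVICEBUS_CONNECTION environment variable is not set. " +
        "See the README for how to obtain and set it.");
}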

Use .NET user secrets

.NET comes with a nice capability called "user secrets", which is intended for helping you manage development secrets in ASP.NET Core, but is not limited to ASP.NET Core. You initialize it on a project with a call to:

dotnet user-secrets init

This updates your .csproj file with a UserSecretsId GUID. Then you can store a secret from the command line like this:

dotnet user-secrets set "MyApp:MySecret" "abc123"

Visual Studio has built-in tooling to simplify working with user secrets (right-click the project in Solution Explorer, and select "Manage User Secrets").

In code, when you're running in the Development environment, ASP.NET Core automatically adds your application's user secrets to IConfiguration, making them trivial to access.
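
For example, in a minimal API you can read the value straight from the injected IConfiguration (the endpoint name is just illustrative; the key matches the secret set above):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// In the Development environment, user secrets are already merged into
// IConfiguration, so this works with no extra setup.
app.MapGet("/secret-check", (IConfiguration config) =>
    string.IsNullOrEmpty(config["MyApp:MySecret"]) ? "Secret missing" : "Secret found");

app.Run();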

If you want to use user secrets in an application that doesn't make use of HostBuilder, you can just reference the Microsoft.Extensions.Configuration.UserSecrets NuGet package and create an IConfiguration object to access them. Here's a very simple example:

var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();
var userSecret = config["MyApp:MySecret"];

Configuration JSON file

Another option is to have a configuration JSON file that you enter your secrets into but don't check into source control (using a .gitignore entry to exclude it).

Azure Functions takes this approach with its local.settings.json file. This works OK, but it does require anyone cloning your repo to manually set up their own local.settings.json file, so unless you're using Azure Functions, I would generally avoid this approach.

Good README and error messages

Of course, one of the reasons that we tend to hard-code secrets is that we want the start-up experience for a new developer to be as simple as possible. We want them to clone the code and get going straight away.

If your app needs secrets to be configured before it can run, make sure you include good instructions in the README that explain how to fetch the values and put them in the right place for the application to access. And provide good error messages that explain what's wrong if the secrets haven't been set up correctly.
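
Here's a rough sketch of what a startup check might look like (the configuration key and README section names are illustrative, and I'm assuming the Microsoft.Extensions.Configuration packages):

using Microsoft.Extensions.Configuration;

// Build configuration from environment variables and user secrets,
// then validate the secrets this app needs before doing anything else.
var config = new ConfigurationBuilder()
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>()
    .Build();

var apiKey = config["SendGrid:ApiKey"];
if (string.IsNullOrEmpty(apiKey))
{
    throw new InvalidOperationException(
        "Missing configuration value 'SendGrid:ApiKey'. See the 'Getting started' " +
        "section of the README for how to set it with dotnet user-secrets or an " +
        "environment variable.");
}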

Even better, make developers' lives as easy as possible by automating the fetching of secrets...

Fetch secrets with the Azure CLI

If you need to fetch secrets like Azure Service Bus or Azure Storage connection strings, then my favourite way to do so is with the Azure CLI. It's usually pretty easy to query resources for the secrets you need. Here's an example from my post on managing blob storage that fetches a Storage Account connection string:

$connectionString=az storage account show-connection-string -n $storageAccount -g $resourceGroup --query connectionString -o tsv

Once you've retrieved it, you can easily set it as an environment variable, or pass it as a command line parameter to your application.

Use Azure Key Vault

Of course, not all the secrets you need can be fetched directly with something like the Azure CLI. For example, maybe you have an API key for a service like SendGrid, or a password for an admin account on a VM. In that case, I'd recommend storing it in Azure Key Vault, but you can use any similar secret store.

Again the Azure CLI makes it really easy to retrieve secret values from the Key Vault:

$mysecret = az keyvault secret show --vault-name mykeyvault --name mysecret --query value -o tsv
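
You can also fetch the same secret from code. Here's a minimal sketch using the Azure SDK (assuming the Azure.Security.KeyVault.Secrets and Azure.Identity NuGet packages; the vault and secret names are just examples):

using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential will pick up your 'az login' identity during
// local development, so no secret is needed to access the vault itself.
var client = new SecretClient(
    new Uri("https://mykeyvault.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync("mysecret");
Console.WriteLine($"Retrieved secret '{secret.Name}'");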

Use managed identities

Of course, even better than keeping secret connection strings out of code is to have the connection string not contain secrets at all. And that's what managed identities allow us to do. For example managed identities let you connect to an Azure SQL Database using Active Directory authentication. Your connection string doesn't need to include a password, and therefore is no longer technically a "secret":

Server=my-sql-server.database.windows.net,1433;Database=my-database;Authentication=Active Directory Default

It's a little bit of extra work to set this up and grant the correct AD identities permission to access the resource, but again it's something you can automate, so once you've done it once, it's easy to do in the future.

Many of the new Azure SDKs support this mode of connecting, and for local development you can use DefaultAzureCredential, which uses a variety of techniques to get hold of your identity, including using the Azure CLI if you've logged in with az login. Find out more about how it works here (and here's an article I wrote showing this technique in action).
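
As an example, here's a minimal sketch that connects to Blob Storage with no connection string at all (assuming the Azure.Storage.Blobs and Azure.Identity packages, and that your identity has been granted a suitable role such as Storage Blob Data Reader; the account URL is illustrative):

using Azure.Identity;
using Azure.Storage.Blobs;

// No connection string and no key: DefaultAzureCredential uses managed
// identity when running in Azure, and your Azure CLI login locally.
var blobService = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    new DefaultAzureCredential());

await foreach (var container in blobService.GetBlobContainersAsync())
{
    Console.WriteLine(container.Name);
}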

Auto-generate and rotate passwords

Of course, if you have taken the trouble to follow the advice I've given and you automate the lookup of secrets and passwords, then there's no reason for them to be reused, "well-known" values. Reusing secrets is something I've unfortunately seen too often in development teams, where everything has the same password to make life easier. Once you've automated the process of fetching the password, you are free to use randomly generated strong passwords for everything, and rotate them freely, knowing developers will automatically pick up the latest version next time they run.

Keep hard-coded secrets out of build and deploy pipelines

I've been focusing in this post on the local development environment, but another place hard-coded secrets can sneak in is build and deploy pipelines, as they often need to deal with connections to various online resources to store or retrieve assets like NuGet packages or container images. Whether you're using TeamCity, Azure Pipelines or GitHub Actions, all of these provide a way for you to securely enter secrets that can be made available to the build scripts.

Bonus - LINQPad Password Manager

As a bonus extra, I'm a big fan of LINQPad, which is a great tool for creating simple experimental scripts. Often in a LINQPad script you are connecting out to an external resource, and so again there is a big temptation to just hard-code a password or secret. But there's no need. LINQPad has a "password manager" that can securely store your secrets for you. In the script, just call Util.GetPassword("MySecretName"). This will return the stored secret with that name, or prompt you to provide one if it's not available.

Summary

The temptation to hard-code a secret is great, but there are plenty of good alternatives available to you. There really is no excuse to check secrets into source control anymore. Did I miss any useful techniques? Let me know in the comments.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.


Whenever a new development technology is announced I'm usually near the front of the queue to try it out. Over recent years this has included things like Durable Functions, Kubernetes, Dapr, Tye, Pulumi, Blazor, etc, etc. I like to dive in deep and try them out, often while they're still in pre-release to evaluate whether they would be useful for my own projects. Out of that I'll often blog about my findings and sometimes do conference or user group sessions.

Sometimes I find that people are surprised to hear that although I might be blogging and talking about a new framework, I'm relatively slow to actually start using it in production. That's because although I think there is benefit in being quick to evaluate new technologies, I also think there is wisdom in being relatively slow to actually adopt them.

Benefits of being an early evaluator

Here are some of my top reasons for being quick to evaluate new technologies.

First, with any new tech, don't believe the hype, try it for yourself. Every new framework is launched with a fanfare of spectacular claims about how it will revolutionise everything. Occasionally something really does live up to the hype, but the best way to get a realistic picture of what a new technology can do for you is to actually try it out. I like to build something small that is representative of the type of use I'd want to put it to in the real world.

Second, ask the question: does it solve a problem I actually have? The most common motivation for creating a new framework is that it solves a limitation or weakness of existing tools. But the problem it solves may not be the problem you have. Kubernetes is awesome, but if you're just building a blog it's probably overkill.

Third, I want to find the missing features. No new framework drops feature-complete in version 1. It may do 80% of what you want, but quite often I find that there is a missing feature that is a showstopper. This might be something like regulatory compliance. In this case, you simply have to park your interest in the new framework until it offers the functionality you need.

Fourth, by trying things out early, you have a real chance to shape the roadmap. When new projects are in beta, they are typically very open to suggestions for improvements, and willing to make changes of direction to accommodate use cases the creators hadn't initially considered. These sorts of changes become much harder to make once an initial v1 has been released, so it's good to get involved in the discussion early.

Benefits of being a slow adopter

Why do I tend to be a slow adopter, even when my initial evaluation of a new technology is very positive? A few reasons:

First, I like to give it time to stabilise. Early in the lifecycle of a new framework, breaking changes tend to be more frequent. This can be very frustrating if you were quick to go to production and you find yourself forced to spend a lot of time migrating to new versions.

Second, you also need to give it time for its feature set to become well-rounded. As I said above, new frameworks might do 80% of what's needed, but there are often rough edges and missing features in areas like observability and resilience. Jumping in too quickly can cause a lot of pain and sour people's opinion of the new framework.

Third, leave time for its weaknesses and flaws to become apparent. There will be plenty of people who do rush to production with a new technology, and if they find that it falls over at scale, or has unanticipated pain points, they will be quick to share their negative experiences on social media. This can give you valuable information about the types of problem that you might encounter yourself that weren't apparent with a simple demo app.

Fourth, leave some time for competitors to emerge. Sometimes an innovative idea sparks other new frameworks that take that idea and go even further. This seems to happen quite frequently in the JavaScript world, for example, with a series of SPA frameworks each stealing the best ideas from each other and adding a few improvements of their own.

Fifth, sometimes even when you really want to adopt a new technology as soon as possible, legacy code gets in the way of moving quickly. If you're not on a greenfield project, you often need to strategise about how you will migrate your legacy code to the new framework. Depending on how much of it there is, this may need to be a multi-year strategy.

Sixth, you need to reserve sufficient time for training developers and operations. For example, to effectively use Azure Durable Functions, there are some really important rules you need to know about what can and can't be done in an orchestrator function. And to troubleshoot failed orchestrations, there are some tools and techniques that operations staff need to be familiar with. Rushing out a new technology without sufficient training is a recipe for disaster.

Finally, I am very aware of the need to avoid over-promising and under-delivering with a new technology. It's easy enough to say "if we just move to Kubernetes it will solve all of our deployment issues, and give us huge cost savings". But then if the initial roll-out doesn't go smoothly (which is often the case when you're using something new for the first time), there will soon be recriminations.

Summary

I like to devote some time each month to learning about and trying out what's new. Resources like the Thoughtworks Technology Radar are great for this. But I am much more cautious about actually pushing something new into production. Do you agree with my approach? Let me know your thoughts in the comments.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.