
Whenever a new development technology is announced I'm usually near the front of the queue to try it out. Over recent years this has included things like Durable Functions, Kubernetes, Dapr, Tye, Pulumi, Blazor, etc, etc. I like to dive in deep and try them out, often while they're still in pre-release to evaluate whether they would be useful for my own projects. Out of that I'll often blog about my findings and sometimes do conference or user group sessions.

Sometimes I find that people are surprised to hear that although I might be blogging and talking about a new framework, I'm relatively slow to actually start using it in production. That's because although I think there is benefit in being quick to evaluate new technologies, I also think there is wisdom in being relatively slow to actually adopt them.

Benefits of being an early evaluator

Here are some of my top reasons for being quick to evaluate new technologies.

First, with any new tech, don't believe the hype; try it for yourself. Every new framework is launched with a fanfare of spectacular claims about how it will revolutionise everything. Occasionally something really does live up to the hype, but the best way to get a realistic picture of what a new technology can do for you is to actually try it out. I like to build something small that is representative of the type of use I'd want to put it to in the real world.

Second, ask the question, does it solve a problem I actually have? The most common motivation for creating a new framework is that it solves a limitation or weakness of existing tools. But the problem it solves may not be the problem you have. Kubernetes is awesome, but if you're just building a blog it's probably overkill.

Third, I want to find the missing features. No new framework drops feature complete in version 1. It might do 80% of what you want, but quite often I find that there is a missing feature that is a showstopper. This might be something like regulatory compliance. In this case, you simply have to park your interest in the new framework until it offers the functionality you need.

Fourth, by trying things out early, you have a real chance to shape the roadmap. When new projects are in beta, they are typically very open to suggestions for improvements, and willing to make changes of direction to accommodate use cases the creators hadn't initially considered. These sorts of changes become much harder to make once an initial v1 has been released, so it's good to get involved in the discussion early.

Benefits of being a slow adopter

Why do I tend to be a slow adopter, even when my initial evaluation of a new technology is very positive? A few reasons:

First, I like to give it time to stabilise. Early in the lifecycle of a new framework, breaking changes tend to be more frequent. This can be very frustrating if you were quick to go to production and now find yourself forced to spend a lot of time migrating to new versions.

Second, you also need to give it time for its feature set to become well-rounded. As I said above, new frameworks might do 80% of what's needed, but there are often rough edges and missing features in areas like observability and resilience. Jumping in too quickly can cause a lot of pain and sour people's opinion of the new framework.

Third, leave time for its weaknesses and flaws to become apparent. There will be plenty of people who do rush to production with a new technology, and if they find that it falls over at scale, or has unanticipated pain points, they will be quick to share their negative experiences on social media. This can give you valuable information about the types of problem that you might encounter yourself that weren't apparent with a simple demo app.

Fourth, leave some time for competitors to emerge. Sometimes an innovative idea sparks other new frameworks that take that idea and go even further. This seems to happen quite frequently in the JavaScript world, for example, with a series of SPA frameworks each stealing the best ideas from each other and adding a few improvements of their own.

Fifth, sometimes even when you really want to adopt a new technology as soon as possible, legacy code gets in the way of moving quickly. If you're not on a greenfield project, you often need to strategise about how you will migrate your legacy code to the new framework. Depending on how much of it there is, this may need to be a multi-year strategy.

Sixth, you need to reserve sufficient time for training developers and operations. For example, to effectively use Azure Durable Functions, there are some really important rules you need to know about what can and can't be done in an orchestrator function. And to troubleshoot failed orchestrations, there are some tools and techniques that operations staff need to be familiar with. Rushing out a new technology without sufficient training is a recipe for disaster.

Finally, I am very aware of the need to avoid over-promising and under-delivering with a new technology. It's easy enough to say "if we just move to Kubernetes it will solve all of our deployment issues, and give us huge cost savings". But then if the initial roll-out doesn't go smoothly (which is often the case when you're using something new for the first time), there will soon be recriminations.

Summary

I like to devote some time each month to learning about and trying out what's new. Resources like the Thoughtworks Technology Radar are great for this. But I am much more cautious about actually pushing something new into production. Do you agree with my approach? Let me know your thoughts in the comments.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.


At the time of writing this post, we're close to the release of .NET 6, and Azure Functions version 4 is available in private preview.

For the most part, changes to Azure Functions are evolutionary rather than revolutionary. The vast majority of what you already know about writing Azure Functions still applies.

In-Process and Isolated Modes

However, in this post I want to discuss an important decision you need to make when creating new Azure Function apps using C#, and that's whether to use "in-process" or "isolated" mode.

The "in-process" model is the original way .NET Azure Functions ran. The Azure Functions runtime, which itself runs on .NET, simply loaded your functions into the same process. However, all other languages supported by Azure Functions (such as JavaScript, Python etc) use an out-of-process model where the Azure Functions runtime talks to your functions which are running in a separate process.

Both approaches have their benefits, and this means that there is going to be a period of time where the decision of which model to choose is not obvious, because the new isolated mode still has a few limitations.

Let's quickly look at the benefits of the two modes.

Benefits of Isolated Mode

The Azure Functions roadmap makes it clear that the isolated process model is the future and will be the only choice from .NET 7 onwards. This means that if you do choose isolated mode, the upgrade path will be simpler in the future.

Azure .NET Functions Roadmap

One of the key motivations for isolated mode is that you are not tied to the versions of the .NET runtime and Azure SDKs that the Azure Functions runtime happens to be using. For most people this is not an issue, but when it does block you from using something you need it is quite frustrating.

Isolated mode also gives you a quicker route to using the latest C# language features (although there are work-arounds for in-process functions).

Isolated mode will also feel more familiar to ASP.NET Core developers, allowing you to set up dependency injection and middleware in the same way you are used to.
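
For example, here's a minimal sketch of what an isolated worker's Program.cs might look like (IMyService, MyService and MyExceptionMiddleware are hypothetical types, just there to show where your own registrations would go):

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// sketch of an isolated worker entry point - the registered types are placeholders
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults(builder =>
    {
        builder.UseMiddleware<MyExceptionMiddleware>(); // custom middleware, much like ASP.NET Core
    })
    .ConfigureServices(services =>
    {
        services.AddSingleton<IMyService, MyService>(); // dependency injection registrations
    })
    .Build();

host.Run();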

Benefits of In-Process Mode

One key advantage of staying with in-process mode is that it is currently required for Durable Functions. To write Durable Functions in isolated mode, you need to wait until .NET 7.

In-process mode also enjoys some performance advantages inherent to the way it works. It can allow you to bind directly to types exposed by the Azure SDKs. By contrast, isolated mode does not allow you to directly use IAsyncCollector or ServiceBusMessage, which is a bit of a step backwards; hopefully this shortcoming will be rectified in future versions of Azure Functions (some of these limitations can be worked around, as Sean Feldman discusses here).
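
As a rough sketch of the kind of binding that is only available in-process today, here's a queue-triggered function binding directly to the Azure SDK message types and using IAsyncCollector to send an output message (the queue names and connection setting are just placeholders for illustration):

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;

public static class ForwardMessage
{
    // sketch only: "incoming", "outgoing" and "ServiceBusConnection" are hypothetical names
    [FunctionName("ForwardMessage")]
    public static async Task Run(
        [ServiceBusTrigger("incoming", Connection = "ServiceBusConnection")] ServiceBusReceivedMessage message,
        [ServiceBus("outgoing", Connection = "ServiceBusConnection")] IAsyncCollector<ServiceBusMessage> collector)
    {
        // bind directly to Azure.Messaging.ServiceBus types and forward the message body
        await collector.AddAsync(new ServiceBusMessage(message.Body));
    }
}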

My Recommendation

If you are using Durable Functions, or you currently rely on binding to any of the types not supported in isolated mode, then I recommend staying with in-process functions on .NET 6 for now.

Of course, if the limitations don't affect you, feel free to start using isolated mode now. The Azure Functions documentation has a good breakdown of the current differences between in-process and out-of-process functions which should help you make the decision.

Hopefully by the time .NET 7 is released, there will be sufficient improvements to the isolated process model to make it an obvious default choice for all new Azure Function Apps.



I wrote recently about the benefits of adopting Dapr in a microservices application. In that post I focused on the "service invocation" building block. In this post, I want to highlight a particularly useful capability that is exposed by the "bindings" building block.

Dapr bindings

The concept of "bindings" in Dapr will be familiar to anyone who has worked with Azure Functions. They expose a simplified way of interacting with a wide variety of third party services.

Bindings can be "input" or "output". An input binding (also called a "trigger") allows Dapr to subscribe to events in external systems and call an endpoint on your service to let you know what has happened. Good examples in Azure would be subscribing to events on Event Grid or messages on Service Bus. But there are many supported bindings, including things like Twitter, so you can get notified whenever something is tweeted matching your search criteria.

An output binding allows you to send data to an external service. In Azure this might be posting a message to a queue or writing a document to Cosmos DB. Or you could use one to send an SMS with Twilio.
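
Your application invokes an output binding by calling the Dapr sidecar over HTTP. As a rough sketch (assuming a hypothetical binding component named myqueue and the default sidecar port of 3500), from a .NET service it might look something like this:

using System.Net.Http;
using System.Text;

// minimal sketch: invoke an output binding component called "myqueue" via the Dapr sidecar
var client = new HttpClient();
var payload = "{ \"operation\": \"create\", \"data\": { \"message\": \"hello\" } }";
var response = await client.PostAsync(
    "http://localhost:3500/v1.0/bindings/myqueue",
    new StringContent(payload, Encoding.UTF8, "application/json"));
response.EnsureSuccessStatusCode();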

Bindings benefits and weaknesses

One strength of bindings is that they can greatly simplify your application code as they remove a lot of the cumbersome boilerplate typically required to connect to the service.

Another advantage is that they provide a certain level of abstraction. Whilst some bindings can't be swapped out with other alternatives due to the service-specific nature of the data they deal with, the ability to swap out components has potential to be very useful in dev/test environments, where you may not want or need to actually communicate with a real service.

The key weakness of bindings is that they usually expose a fairly limited subset of capabilities of the underlying platform. So if you are a power user, then you may prefer to just use the SDK of the service directly. And of course Dapr doesn't prevent you from doing that - bindings are completely optional.

The cron binding

The binding I want to focus on particularly is a bit of a special case. It's the "cron" binding. Rather than supporting connection to an external system, this makes it easy to set up scheduled tasks.

To set this up, you need to define a component YAML file. You can just copy an example, and customise the schedule to meet your needs. This supports a regular cron syntax and some simplified shortcuts like @every 5m for every five minutes as shown below.

The only 'advanced' thing I've done is to limit this component to a single Dapr service by using the scopes property - in this example, the catalog service.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: scheduled
  namespace: default
spec:
  type: bindings.cron
  version: v1
  metadata:
  - name: schedule
    value: "@every 5m"
scopes:
- catalog

Now all we need to do is listen on an endpoint that matches the name of our component. In this example it's called scheduled. Note that this will be made as an HTTP POST request, so in the example below I'm showing how a simple Node.js Express application can receive calls on the /scheduled endpoint and write a message to the console.

// the Dapr sidecar POSTs to this endpoint on the schedule defined in the cron component
app.post('/scheduled', async function(req, res){
  console.log("scheduled endpoint called", req.body)
  res.status(200).send()
});

If we run this, we'll see that the /scheduled endpoint is called every five minutes by the Dapr sidecar.

And that's all there is to it. Of course, Dapr doesn't force you to use any of its building blocks, so if you already have a solution for scheduled tasks, then feel free to keep using it. But if not, it's great that Dapr provides such a simple-to-use option out of the box.