﻿<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel><title>Sound Code - Mark Heath's Blog</title>
<description>Mark Heath's development blog.</description>
<generator>MarkBlog</generator>
<link>https://markheath.net/</link>
<item>
  <title>NAudio Modernization with Claude Code</title>
  <link>https://markheath.net/post/2026/5/1/naudio-modernization-claude-code</link>
  <description>&lt;p&gt;Almost 25 years ago, I created &lt;a href="https://github.com/naudio/NAudio/"&gt;NAudio&lt;/a&gt;, an open-source audio library for .NET. Over the years I've had periods where I've done a lot of work on it, and periods where I barely touched it. That's certainly been the case recently, partly because I've been busy with other projects, and partly because creating a version 3 of NAudio requires extensive modernization and refactoring, which under normal circumstances would be impossible for me to find time for.&lt;/p&gt;
&lt;h2 id="claude-code"&gt;Claude Code&lt;/h2&gt;
&lt;p&gt;Recently, however, Anthropic's &lt;a href="https://claude.com/contact-sales/claude-for-oss"&gt;&amp;quot;Claude for Open Source&amp;quot; program&lt;/a&gt; very generously offered me six months' free access to their &amp;quot;20x Max subscription plan&amp;quot;. This allowed me to try Claude Code for the first time (I'd mainly been using &lt;a href="https://github.com/features/copilot"&gt;GitHub Copilot&lt;/a&gt; and &lt;a href="https://gemini.google.com/"&gt;Google Gemini&lt;/a&gt; previously), and gave me the freedom to attempt some extremely ambitious coding tasks without worrying about burning through my token allowance too quickly.&lt;/p&gt;
&lt;p&gt;I decided to run an experiment to see how well Claude Code could assist with modernizing the NAudio codebase, including adding some of my most wanted features that were previously out of scope due to their size.&lt;/p&gt;
&lt;h2 id="modernization"&gt;Modernization&lt;/h2&gt;
&lt;p&gt;The first item on my to-do list was a wide-ranging modernization of the NAudio codebase. Right from the start of NAudio I've always tried to support as many versions of Windows and .NET as possible, and while I'm proud of how long I have kept that going, it has got in the way of adopting new features from .NET Core.&lt;/p&gt;
&lt;p&gt;For example, &lt;a href="https://markheath.net/post/span-t-audio"&gt;the &lt;code&gt;Span&amp;lt;T&amp;gt;&lt;/code&gt; feature is perfect for NAudio&lt;/a&gt;, but fully embracing it means dropping support for the legacy .NET Framework. Another area in need of an overhaul was the COM interop: moving to the newer &lt;a href="https://learn.microsoft.com/en-us/dotnet/standard/native-interop/comwrappers-source-generation"&gt;&lt;code&gt;[GeneratedComInterface]&lt;/code&gt;&lt;/a&gt; approach instead of &lt;code&gt;[ComImport]&lt;/code&gt; opens the door to supporting &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/deploying/trimming/trim-self-contained"&gt;IL trimming&lt;/a&gt; and &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot/?tabs=windows%2Cnet8"&gt;Native AOT&lt;/a&gt;. I also wanted to tidy up the project structure, to clearly distinguish between the Windows-specific and cross-platform capabilities of NAudio.&lt;/p&gt;
&lt;h2 id="pair-programming-with-ai"&gt;Pair Programming with AI&lt;/h2&gt;
&lt;p&gt;What makes using coding assistants like Claude Code so fun is being able to treat them as an expert pair programmer and run your crazy ideas past them. It's a great way to discover alternative approaches you hadn't thought of.&lt;/p&gt;
&lt;p&gt;This was particularly valuable for revisiting some of my original API design decisions that I wasn't happy with. Some of these were due to my own inexperience, while others made sense in the past but are now well past their best-before date.&lt;/p&gt;
&lt;p&gt;Examples include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;rethinking the approach to supporting custom chunks in &lt;code&gt;WaveFileReader&lt;/code&gt; (which currently requires inheritance for every customized chunk)&lt;/li&gt;
&lt;li&gt;reconsidering how best to let you decorate a &lt;code&gt;WaveStream&lt;/code&gt; with &lt;code&gt;ISampleProvider&lt;/code&gt; effects without needing to hold a reference to the start and end of the chain in order to support repositioning&lt;/li&gt;
&lt;li&gt;considering whether the very Windows MME-centric &lt;code&gt;WaveFormat&lt;/code&gt; class (based on &lt;code&gt;WAVEFORMATEX&lt;/code&gt;) is the best approach, or whether I should make a more generic abstraction (e.g. &lt;code&gt;AudioFormat&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For each of these big design decisions, including how to make ASIO playback and recording much more pleasant to work with, I held an in-depth discussion with Claude Code up front. This helped me solidify the API design and make naming decisions I was happy with, as well as thrash out implementation plans that included testing and documentation.&lt;/p&gt;
&lt;p&gt;With a well-defined concrete plan in place, in most cases I was able to just let Claude get on with the implementation. I did find myself needing to interrupt and course correct sometimes, but often I'd just do a thorough code review at the end. I'm still convinced that a manual code review of AI-generated code is vital. It regularly uncovered issues that I hadn't considered up front, and often resulted in several additional rounds of refactoring.&lt;/p&gt;
&lt;h2 id="test-coverage"&gt;Test Coverage&lt;/h2&gt;
&lt;p&gt;AI assistants can be quite lazy about testing. They are so focused on achieving the 'goal' that they'll happily write code with no tests at all, or ask you to manually try things out for them and report back! You need to be clear about what level of testing you expect from them.&lt;/p&gt;
&lt;p&gt;There were several areas of NAudio lacking unit test coverage. For example, Fast Fourier Transforms and pitch shifting algorithms are not straightforward to validate, especially if you don't fully trust your own DSP skills. So it was useful to get Claude Code to introduce sanity checks for some of these trickier areas.&lt;/p&gt;
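&lt;p&gt;NAudio's DSP code is C#, but the kind of sanity check I have in mind is easy to sketch in Python with NumPy (a hypothetical stand-in, not NAudio's actual test code): the FFT of a pure tone should peak at the expected bin.&lt;/p&gt;

```python
import numpy as np

def fft_peak_bin(signal):
    # Index of the strongest bin in the magnitude spectrum
    spectrum = np.abs(np.fft.rfft(signal))
    return int(np.argmax(spectrum))

# Sanity check: a pure 440 Hz sine sampled at 44.1 kHz over 4096 samples
# should peak at bin round(440 * 4096 / 44100) = 41
sample_rate, n = 44100, 4096
t = np.arange(n) / sample_rate
sine = np.sin(2 * np.pi * 440 * t)
assert fft_peak_bin(sine) == round(440 * n / sample_rate)
```

A handful of checks like this won't prove the DSP is correct, but they will catch gross regressions such as off-by-one bin errors or broken scaling.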
&lt;p&gt;Of course many NAudio capabilities require true &amp;quot;integration/end-to-end&amp;quot; testing. I need to play and record audio through real soundcards, and listen to the output of various operations such as decoding MP3s or applying audio effects in order to be confident that things are working as expected.&lt;/p&gt;
&lt;p&gt;For years I've mostly made use of two test harnesses - one WinForms and one WPF app. These are very useful, but I've always wanted a console-based test harness, with a menu system that I could use to pick which test to run, but that would also support scripting so it could automatically run through a series of tests, and generate a test report, recording what tests were run, any errors encountered, and details of the system on which it was run. This would make it much easier for NAudio users to submit bug reports.&lt;/p&gt;
&lt;p&gt;I got Claude Code to quickly scaffold my console test harness idea, and while it's still far from finished, it has already greatly accelerated the speed at which I can validate new features. It's an example of where a more &amp;quot;vibe coding&amp;quot; approach can be used - where you don't really need to scrutinize all of the generated code in detail, but can just try it and see if it works. With auxiliary utilities like this, the stakes are a lot lower and technical debt is not really a major concern.&lt;/p&gt;
&lt;h2 id="performance-optimizations"&gt;Performance Optimizations&lt;/h2&gt;
&lt;p&gt;One of the primary goals of introducing &lt;code&gt;Span&amp;lt;T&amp;gt;&lt;/code&gt; into NAudio was improved performance, but just updating the public interface to use &lt;code&gt;Span&amp;lt;T&amp;gt;&lt;/code&gt; wasn't enough. I had to flow that right through many of the classes in NAudio all the way into the interop layer, eliminating as many unnecessary copies as possible. Again this was something that Claude Code was able to greatly accelerate as it was able to search through hundreds of files and propose strategies for wide-scale refactoring that previously would have taken me days to plan.&lt;/p&gt;
&lt;p&gt;It was also able to take additional steps to improve performance. For example, there are some parts of NAudio that would benefit from vectorization and SIMD optimizations, which are not my speciality at all. But with sufficient unit tests in place, I could safely implement the vectorization optimizations it had recommended and back them up with a &lt;a href="https://benchmarkdotnet.org/"&gt;BenchmarkDotNet&lt;/a&gt; project to validate and quantify how much faster the new code actually was.&lt;/p&gt;
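&lt;p&gt;The real optimizations use .NET's SIMD types and BenchmarkDotNet, but the safety net is language-agnostic: keep a simple scalar reference implementation, and assert that the optimized path produces the same output. A minimal Python/NumPy sketch of that idea (hypothetical code, not NAudio's):&lt;/p&gt;

```python
import numpy as np

def apply_gain_scalar(samples, gain):
    # Straightforward per-sample loop: the trusted reference implementation
    out = np.empty_like(samples)
    for i in range(len(samples)):
        out[i] = samples[i] * gain
    return out

def apply_gain_vectorized(samples, gain):
    # Single whole-buffer operation, analogous to a SIMD fast path
    return samples * gain

rng = np.random.default_rng(0)
buffer = rng.standard_normal(1024).astype(np.float32)
# The optimized path must agree with the reference before it can replace it
assert np.allclose(apply_gain_scalar(buffer, 0.5), apply_gain_vectorized(buffer, 0.5))
```

With an equivalence test like this in place, a benchmark comparing the two paths can then focus purely on speed.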
&lt;h2 id="memory-management-and-interop"&gt;Memory Management and Interop&lt;/h2&gt;
&lt;p&gt;A large part of NAudio consists of COM interop to Windows APIs, and that poses some tricky memory management challenges. In particular, COM works by reference counting, but .NET uses a mark-and-sweep approach to garbage collection, so bridging the two worlds can be complicated, and it can be difficult to know when it's safe to dispose of COM objects. I was able to discuss this problem with Claude Code and come up with a consistent strategy for how I wanted memory management to behave. And then I was able to quickly get it to roll that out across all of the WASAPI wrappers in NAudio.&lt;/p&gt;
&lt;p&gt;I was also able to get it to audit the coverage of Windows audio APIs and identify missing capabilities. This has allowed me to fill in a number of key gaps in NAudio. It wasn't all plain sailing though. Some of the capabilities offered by the Windows audio APIs have proved incredibly challenging to successfully wrap in C#. One of the most difficult so far is capturing audio from a specific process, something I'd spent many hours trying to do manually before using AI, failing miserably every time. When I tried with Claude Code I ran into exactly the same problems and have had several failed attempts. I've not completely given up yet - hopefully the next try will be the successful one - but it certainly feels a lot less risky to attempt tasks like this now: a failed attempt costs a few hours rather than days of wasted time.&lt;/p&gt;
&lt;h2 id="challenging-features"&gt;Challenging Features&lt;/h2&gt;
&lt;p&gt;Although NAudio has lots of features, there are many missing capabilities that I would love to offer but have simply been too difficult for me to implement. Often it's a skill issue - I don't have enough deep understanding of digital signal processing or interop. But it's just as often a time constraint problem.&lt;/p&gt;
&lt;p&gt;One of the great things about AI assistants like Claude Code is that (almost) no task is too daunting to attempt. Now I can realistically consider taking on challenges like creating my own synthesiser or VST3 plugin wrapper. So it's been really enjoyable with Claude Code to start tackling some of the more ambitious ideas on my backlog.&lt;/p&gt;
&lt;p&gt;A simple example: something I really struggled with many years back was creating a spectrum analyzer visualization in WPF. This required me to work out which FFT windowing function to use, and to decide things like whether each axis should use a linear or logarithmic scale.&lt;/p&gt;
&lt;p&gt;So it was fascinating to talk through all of my questions with Claude Code and discuss each of the existing design decisions and my concerns about what I'd got wrong and what I wanted to be improved. Within a short period of time it had discovered several mistakes in my original implementation, and created a much better visualization.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://markheath.net/posts/files/naudio-modernization-1.png" alt="NAudio FFT spectrum analyser display" /&gt;&lt;/p&gt;
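&lt;p&gt;The heart of a display like this is only a few lines: window the samples to reduce spectral leakage, take the FFT, and convert magnitudes to decibels for a logarithmic amplitude axis. Here's a rough Python/NumPy sketch of those steps (the real implementation is C#/WPF, and the Hann window is just one reasonable choice):&lt;/p&gt;

```python
import numpy as np

def spectrum_db(samples):
    # Hann window reduces spectral leakage from the rectangular cut
    windowed = samples * np.hanning(len(samples))
    magnitude = np.abs(np.fft.rfft(windowed)) / len(samples)
    # Small epsilon avoids log(0) for silent bins
    return 20 * np.log10(magnitude + 1e-12)

sample_rate, n = 44100, 2048
t = np.arange(n) / sample_rate
db = spectrum_db(np.sin(2 * np.pi * 1000 * t))
# A 1 kHz tone should peak near bin 1000 * 2048 / 44100, i.e. bin 46
assert int(np.argmax(db)) == round(1000 * n / sample_rate)
```

A visualization would then map bin index to a (often logarithmic) frequency axis and the dB values to bar heights.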
&lt;h2 id="bug-and-pr-backlog"&gt;Bug and PR backlog&lt;/h2&gt;
&lt;p&gt;One of my biggest regrets as a maintainer of an open source project like NAudio is that it has simply not been possible for me to keep up with the rate of issues and PRs I've received. Several years ago I reached the point where I wasn't able to reply to every single issue any more. That means there are probably many valid bugs and feature requests that deserve to be looked at, and many excellent contributions sitting idle that are worthy of being merged into the NAudio code base.&lt;/p&gt;
&lt;p&gt;So one of the next tasks I have for Claude Code is to see if it can help me triage all of these legacy issues and pull requests. This will let me close the ones that no longer make sense to keep open, respond to many of the existing issues, and fix the bugs that have been reported. For pull requests, the substantial changes in NAudio 3 mean they won't necessarily merge easily, but it should be possible to take the key ideas and reimplement them in a way that fits with the NAudio 3 design.&lt;/p&gt;
&lt;p&gt;I've made a small start on this recently, so if you're wondering why your bug report from 2017 is suddenly being looked at, you'll know why!&lt;/p&gt;
&lt;h2 id="documentation"&gt;Documentation&lt;/h2&gt;
&lt;p&gt;Documenting a library like NAudio is a major task, and although I've written many blog posts and tutorials, there's certainly a lot of scope for improvement. Again this is something Claude Code is able to help a lot with. I've already asked it to audit all of the existing documentation, check it for mistakes and correct it.&lt;/p&gt;
&lt;p&gt;I've also used it to draft tutorials for the new features, and I'm also eager to use it to generate a migration guide. This will be especially valuable for NAudio 3, as I have decided to allow myself to make a number of strategic breaking changes to the API. A good migration document should allow users to point their own coding assistants at it and get upgraded relatively painlessly.&lt;/p&gt;
&lt;h2 id="can-i-trust-its-output"&gt;Can I trust its output?&lt;/h2&gt;
&lt;p&gt;Perhaps the biggest question in the software development industry at the moment is this - can we &lt;em&gt;really&lt;/em&gt; trust AI coding assistants to create high-quality, production-ready code? Are we in danger of just accepting code that seems superficially correct, while under the hood significant bugs or architectural issues have been introduced?&lt;/p&gt;
&lt;p&gt;Certainly, it's not all been plain sailing, even with Claude Code's most powerful Opus models. In fact, some of my recent work using it to completely rewrite the COM interop has surfaced some extremely challenging access violations that have taken hours to troubleshoot (and as I write this there's still a really nasty one I'm struggling to get to the bottom of).&lt;/p&gt;
&lt;p&gt;AI coding assistants can fail in less spectacular ways as well. They might simply ignore instructions, or accidentally drop an important line while refactoring. They might implement a requirement that I didn't ask for or want. Or everything might seem great, but after &amp;quot;speed-running&amp;quot; your way through several large features you discover you hadn't fully validated an earlier one and now have to go back and unpick the mess.&lt;/p&gt;
&lt;p&gt;I've been trying to be disciplined with thorough manual testing and careful reading of all of the code that Claude has generated, asking it questions and challenging its decisions. Often this leads to a much better implementation. But I must also admit that there have been times when I don't fully understand its changes, because it's doing things that are outside my comfort zone, such as modernizing the COM interop mechanisms.&lt;/p&gt;
&lt;p&gt;This means I've spent a lot of time running manual tests, trying to find edge cases and race conditions. I'm determined that NAudio 3 is not going to just be a bunch of AI slop, but of course I can't guarantee that it will be bug free. For this reason I intend to release some early &amp;quot;alpha&amp;quot; versions of NAudio 3, allowing people to give feedback on the architectural changes as well as report bugs.&lt;/p&gt;
&lt;h2 id="x-speed-up"&gt;10x Speed-up?&lt;/h2&gt;
&lt;p&gt;Without doubt, AI has accelerated my progress way beyond what I could have achieved manually. But ironically, I've probably put in way more hours on NAudio 3 as a result of this speed-up than I was ever likely to have done without AI assistance. There are a few reasons for that - one is that the increased speed often results in me expanding the scope accordingly - attempting much more ambitious tasks than I would have previously dared. Another is that some of the access violations introduced by changes to the interop proved extremely time-consuming to root cause - you end up pulling the AI slot-machine lever repeatedly, hoping that this time it will fix the bug.&lt;/p&gt;
&lt;p&gt;Another factor is that the huge speed increase allows you to try out wide-scale changes and then completely backtrack on them. The cost of prototyping has dropped dramatically, but that doesn't always result in as much of a speed increase as you might imagine, as you often allow yourself to go much further down a dead end before walking it back.&lt;/p&gt;
&lt;h2 id="when-is-naudio-3-coming"&gt;When is NAudio 3 coming?&lt;/h2&gt;
&lt;p&gt;I've been working on this modernization of NAudio for over a month now and have committed a lot of work, which you can find on &lt;a href="https://github.com/naudio/NAudio/tree/naudio3dev"&gt;the &lt;code&gt;naudio3dev&lt;/code&gt; branch on GitHub&lt;/a&gt; if you're interested. There is still a lot to be done, in terms of features to add, design decisions to finalize, and bugs to fix. I'm hoping to be in a place fairly soon where I can publish some pre-release NuGet packages, allowing people to try out the changes and give me feedback. Hopefully people will be understanding of the reasoning behind making a number of breaking changes to the public API, but if there is enough pushback there will be time to reconsider some of the choices.&lt;/p&gt;
&lt;p&gt;I'm also hoping to find some time to blog about a number of the important decisions so watch this space.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>NAudio</category>
  <category>AI</category>
  <category>Open Source</category>
  <category>Claude</category>
  <category>.NET</category>
  <category>Audio</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/5/1/naudio-modernization-claude-code</guid>
  <pubDate>Fri, 01 May 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Refactoring to SOLID in C#</title>
  <link>https://markheath.net/post/2026/4/13/refactor-solid-csharp</link>
  <description>&lt;p&gt;I'm really pleased to announce the publication of my latest Pluralsight course, &lt;a href="https://www.pluralsight.com/courses/c-sharp-14-refactoring-solid"&gt;&amp;quot;Refactoring to SOLID in C# 14&amp;quot;&lt;/a&gt;. It aims to provide C# developers with practical techniques and strategies to tackle the unique challenges of working in legacy codebases, such as dealing with technical debt, modernizing outdated dependencies, fixing code smells, and improving test coverage.&lt;/p&gt;
&lt;h2 id="legacy-code"&gt;Legacy Code&lt;/h2&gt;
&lt;p&gt;Most professional software developers will spend a significant proportion of their careers working on &amp;quot;legacy codebases&amp;quot;. By &amp;quot;legacy&amp;quot;, I simply mean that the codebase is several years old, has had many different developers working on it, and continues to be actively maintained.&lt;/p&gt;
&lt;p&gt;Legacy code isn't necessarily bad, but it's not uncommon for problems to gradually accumulate over time, making the codebase progressively harder to work with as time goes on.&lt;/p&gt;
&lt;h2 id="code-smells"&gt;Code Smells&lt;/h2&gt;
&lt;p&gt;Anyone who has worked on a legacy codebase will be all too familiar with the concept of &amp;quot;&lt;a href="https://en.wikipedia.org/wiki/Code_smell"&gt;code smells&lt;/a&gt;&amp;quot;, made popular by Martin Fowler. It's common to spend hours or even days navigating through various files, trying to understand how something works and wondering why on earth it was implemented this way. And while you maybe can't put your finger on exactly what's wrong with the code - it's clear that in its current state it's difficult to maintain, and difficult to understand.&lt;/p&gt;
&lt;h2 id="test-coverage"&gt;Test Coverage&lt;/h2&gt;
&lt;p&gt;Of course, identifying problems in legacy code is easy enough, but fixing them is risky. And the main reason for this is a lack of confidence that the test coverage we have is sufficient. It's often the time required to thoroughly test our changes that proves the main blocker to addressing code smells.&lt;/p&gt;
&lt;p&gt;One approach that's worth considering is Michael Feathers' concept of &lt;a href="https://michaelfeathers.silvrback.com/characterization-testing"&gt;&amp;quot;Characterization Tests&amp;quot;&lt;/a&gt;. These are tests designed to capture the &lt;em&gt;current&lt;/em&gt; behaviour of the system, rather than the &amp;quot;correct&amp;quot; behaviour. The advantage of these tests is that they can alert you to any regressions introduced by refactoring.&lt;/p&gt;
&lt;p&gt;Of course you can ask AI to generate test coverage for you, although it often doesn't have a great grasp of what the &amp;quot;correct&amp;quot; behaviour is - it simply infers what's supposed to happen from what the code already does. So almost by definition, the tests an AI will generate on your behalf are characterization tests.&lt;/p&gt;
&lt;p&gt;One pitfall to be aware of with characterization tests though is that you can inadvertently &amp;quot;lock in&amp;quot; undesirable behaviour, as future developers (or agents) assume that the tests are protecting some important functionality.&lt;/p&gt;
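&lt;p&gt;To make that concrete, here's a minimal sketch in Python (with an invented function standing in for real legacy code). The tests record what the code does today, boundary quirks and all, so any refactoring that changes behaviour gets flagged:&lt;/p&gt;

```python
def legacy_discount(order_total):
    # Imagine this is poorly-understood legacy code we dare not rewrite blindly
    if order_total > 100:
        return order_total * 0.9
    return order_total

# Characterization tests: they pin down what the code DOES,
# not what the spec says it SHOULD do. Note the boundary case:
# exactly 100 gets no discount. Possibly a bug, but the test
# records the current behaviour faithfully.
assert legacy_discount(50) == 50
assert legacy_discount(100) == 100
assert legacy_discount(200) == 180.0
```

If the boundary behaviour turns out to be a bug, the test should be updated deliberately rather than silently, which is exactly the conversation these tests are designed to force.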
&lt;h2 id="refactoring-strategies"&gt;Refactoring Strategies&lt;/h2&gt;
&lt;p&gt;Refactoring is safest when done in small, incremental steps, testing your work as you go along. Tools like Visual Studio include built-in refactorings such as renaming variables and extracting classes, and these should be used wherever possible as they are deterministic.&lt;/p&gt;
&lt;p&gt;If you are making widespread changes, it's worth familiarizing yourself with techniques such as &lt;a href="https://martinfowler.com/bliki/BranchByAbstraction.html"&gt;&amp;quot;branch by abstraction&amp;quot;&lt;/a&gt;, and the &lt;a href="https://martinfowler.com/bliki/StranglerFigApplication.html"&gt;&amp;quot;Strangler fig&amp;quot; pattern&lt;/a&gt;, which are both designed to help you gradually replace legacy components without having to change everything in one go.&lt;/p&gt;
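&lt;p&gt;Branch by abstraction in outline: introduce an abstraction over the legacy component, point all call sites at it, then grow the replacement behind it until you can switch over. A minimal Python sketch with invented names:&lt;/p&gt;

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """The abstraction both old and new implementations hide behind."""
    @abstractmethod
    def send(self, text: str) -> str: ...

class LegacySmtpSender(MessageSender):
    def send(self, text: str) -> str:
        return f"smtp:{text}"   # stands in for the old code path

class NewApiSender(MessageSender):
    def send(self, text: str) -> str:
        return f"api:{text}"    # the replacement, built up behind the abstraction

def notify(sender: MessageSender, text: str) -> str:
    # Call sites depend only on the abstraction, so implementations
    # can be swapped (or feature-flagged) without touching them
    return sender.send(text)

assert notify(LegacySmtpSender(), "hi") == "smtp:hi"
assert notify(NewApiSender(), "hi") == "api:hi"
```

Because the codebase keeps compiling and passing tests at every step, the migration can be paused or rolled back at any point, which is what makes the pattern safer than a big-bang rewrite.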
&lt;p&gt;There's a very real danger though with any large refactoring initiative that it will stall mid-way through. This can result in an even more convoluted and confusing architecture. So be careful of starting what you can't finish.&lt;/p&gt;
&lt;h2 id="solid-principles"&gt;SOLID Principles&lt;/h2&gt;
&lt;p&gt;In my new course I spend a couple of modules exploring how refactoring code to adhere to the &lt;a href="https://en.wikipedia.org/wiki/SOLID"&gt;SOLID&lt;/a&gt; principles can help a lot with software maintainability, testability and extensibility. The five &amp;quot;SOLID&amp;quot; principles have proved themselves to be very helpful guidelines over the years, but they aren't necessarily the whole picture.&lt;/p&gt;
&lt;p&gt;It's also worth exploring complementary ideas such as &lt;a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself"&gt;DRY&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it"&gt;YAGNI&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/KISS_principle"&gt;KISS&lt;/a&gt;, &lt;a href="https://www.milanjovanovic.tech/blog/clean-architecture-dotnet"&gt;Clean Architecture&lt;/a&gt;, &lt;a href="https://dannorth.net/blog/cupid-for-joyful-coding/"&gt;CUPID&lt;/a&gt;, and &lt;a href="https://www.markheath.net/post/stable-tactics-for-writing-solid-code"&gt;STABLE&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A key benefit that all of these various &amp;quot;principles&amp;quot; provide, is that they give us lenses through which to evaluate our code, and vocabulary to talk about the problems we encounter. They help us move past the vague &amp;quot;code smell&amp;quot; sense that something is not quite right, to being able to articulate what the problem is and formulate a plan to remediate it.&lt;/p&gt;
&lt;h2 id="app-modernization"&gt;App Modernization&lt;/h2&gt;
&lt;p&gt;If you have a large codebase that's more than about five years old, then it's highly likely that it's in need of some modernization. New versions of tools, frameworks and dependencies are constantly coming out, and the programming language itself moves on with new features. Unless you are very disciplined, it's easy to get left behind, and while most tech upgrades are relatively straightforward, every now and then you'll find the migration is non-trivial and you get stuck for some reason.&lt;/p&gt;
&lt;p&gt;The further behind you get, the harder it becomes to upgrade and before you know it you find yourself in a situation where the libraries you depend on have critical security vulnerabilities but are no longer being maintained. You may even find that your hosting platform no longer supports running the framework you're using.&lt;/p&gt;
&lt;p&gt;App modernization is an area that AI agents can be particularly helpful with, especially if you give them access to the official migration guides. That's essentially what the &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/porting/github-copilot-app-modernization/overview"&gt;GitHub Copilot modernization agent&lt;/a&gt; is. If you've not tried asking an AI agent to help you modernize an app, it's something that's definitely worth experimenting with - you might be surprised at the results.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;Legacy codebases can seem daunting to work on, but with the right tools and techniques at your disposal it can be a very rewarding experience to slowly and steadily improve a legacy codebase. If you have access to Pluralsight's excellent library of training courses, then do consider checking out my new &lt;a href="https://www.pluralsight.com/courses/c-sharp-14-refactoring-solid"&gt;Refactoring to SOLID in C#&lt;/a&gt; course in which I go into a lot more detail about all of these topics. And you don't need to wait until your codebase is a mess to start learning about these topics - refactoring should be an ongoing part of day-to-day development, even on a brand new application.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Pluralsight</category>
  <category>C#</category>
  <category>SOLID</category>
  <category>Refactoring</category>
  <category>Technical Debt</category>
  <category>Legacy Code</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/4/13/refactor-solid-csharp</guid>
  <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>The Future of Tech Blogging in the Age of AI</title>
  <link>https://markheath.net/post/2026/4/1/future-of-tech-blogging</link>
  <description>&lt;p&gt;I've been blogging on this site for almost 20 years now, and the majority of my posts are simple coding tutorials, where I share what I've learned as I explore various new technologies (my journey on this blog has taken me through Silverlight, WPF, IronPython, Mercurial, LINQ, F#, Azure, and much more).&lt;/p&gt;
&lt;p&gt;My process has always been quite simple. First, I work through a technical challenge and eventually get something working. And then, I write some instructions for how to do it.&lt;/p&gt;
&lt;h2 id="benefits-of-tech-blogging"&gt;Benefits of tech blogging&lt;/h2&gt;
&lt;p&gt;There are many benefits to sharing your progress like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The process of putting it into writing helps solidify what you learned&lt;/li&gt;
&lt;li&gt;Despite this I still often forget how I achieved something, so my blog functions as a journal I can refer back to later&lt;/li&gt;
&lt;li&gt;You're supporting the wider developer community by sharing proven ways to get something working&lt;/li&gt;
&lt;li&gt;Thanks to &amp;quot;Cunningham's Law&amp;quot; (&lt;em&gt;&amp;quot;the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer.&amp;quot;&lt;/em&gt;), your post may lead you to discover a better way to achieve the same goal, or a fatal flaw in your approach&lt;/li&gt;
&lt;li&gt;And gradually it builds your personal reputation and credibility, as eventually you'll build up visitors (although you may find that your &lt;a href="https://www.markheath.net/post/2016/9/22/customize-radio-button-css"&gt;most popular post of all time&lt;/a&gt; is on the one topic that you most certainly aren't an expert on!)&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="are-llms-going-to-ruin-it-all"&gt;Are LLMs going to ruin it all?&lt;/h2&gt;
&lt;p&gt;But recently I've been wondering - are LLMs going to put an end to coding tutorial blogs like mine? Do they render it all pointless?&lt;/p&gt;
&lt;p&gt;For starters, GitHub Copilot and Claude Code have already dramatically changed the way I go about exploring a new technique or technology. Instead of slogging through Bicep documentation, and endlessly debugging why my template didn't work, I now just ask the AI model to create one for me.&lt;/p&gt;
&lt;p&gt;Refreshingly, I notice that it gets it wrong just as frequently as I do, but it doesn't get frustrated - it just keeps battling away until eventually it gets something working.&lt;/p&gt;
&lt;p&gt;But now it feels like a hollow victory. Is there even any point writing a tutorial about it? If you can simply ask an agent to solve the problem, why would anyone need to read &lt;em&gt;my&lt;/em&gt; tutorial? Are developers even going to bother visiting blogs like mine in the future?&lt;/p&gt;
&lt;p&gt;And then there's the question of who &lt;em&gt;writes&lt;/em&gt; the tutorial. Not only is the agent much quicker than me at solving the technical challenge, it's also significantly faster at writing the tutorial, and undeniably a better writer too. So maybe I should just let it write the article for me? But the internet is already full of AI-generated slop...&lt;/p&gt;
&lt;h2 id="should-you-let-ai-write-your-blog-posts"&gt;Should you let AI write your blog posts?&lt;/h2&gt;
&lt;p&gt;This is a deeply polarizing question. There are a number of possible options:&lt;/p&gt;
&lt;h3 id="level-1-human-only"&gt;Level 1: Human only&lt;/h3&gt;
&lt;p&gt;You could insist on hand-writing everything yourself, with strictly no AI assistance. That's what you're reading right now (if you can't already tell from the decidedly mediocre writing style!)&lt;/p&gt;
&lt;p&gt;This mirrors a big debate going on in the world of music production at the moment. If AI tools like Suno can generate, from a single prompt, an entire song that sounds far more polished than anything I've ever managed to produce, then does that spell the end of real humans writing and recording songs? And should we fight against it, or just embrace it as the future?&lt;/p&gt;
&lt;p&gt;I think tech tutorials do fall into a different category to music though. If I want to learn how to achieve X with technology Y, I just want clear, concise and correct instructions - and I'm not overly bothered whether it came 100% from a human mind or not.&lt;/p&gt;
&lt;p&gt;Having said that, we've already identified a key benefit of writing your own tutorials: it helps solidify what you've learned. Doing your own writing will also improve your own powers of communication. For those reasons alone I have no intention of delegating all my blog writing to LLMs.&lt;/p&gt;
&lt;h3 id="level-2-human-writes-ai-refines"&gt;Level 2: Human writes, AI refines&lt;/h3&gt;
&lt;p&gt;On the other hand, it seems churlish to refuse the benefits of LLMs for proofreading, fact checking, and stylistic improvements. When I recently asked &lt;a href="https://markheath.net/post/2026/3/30/does-code-quality-still-matter"&gt;does code quality still matter&lt;/a&gt;, this is exactly what I did. I wrote the post myself, and then asked Claude Code to help me refine it by critiquing my thoughts and providing counter-arguments.&lt;/p&gt;
&lt;p&gt;To be honest, I ignored most of the feedback, but undoubtedly it improved the final article. This is the approach I've been taking with my Pluralsight course scripts - I first write the whole thing myself, and then ask an LLM to take me to task and tell me all the things I got wrong. (Although they're still ridiculously sycophantic and tell me it's the greatest thing they've ever read on the topic of lazy loading!)&lt;/p&gt;
&lt;h3 id="level-3-ai-writes-human-refines"&gt;Level 3: AI writes, human refines&lt;/h3&gt;
&lt;p&gt;But of course, my time is at a premium. A blog tutorial often takes me well over two hours to write. That's a big time investment for something that will likely barely be read by anyone.&lt;/p&gt;
&lt;p&gt;And if all I'm producing is a tutorial, perhaps it would be better for me to get the LLM to do the leg-work of creating the structure and initial draft, and then I can edit afterwards, adapting the language to sound a bit more in my voice, and deleting some of the most egregious AI-speak.&lt;/p&gt;
&lt;p&gt;That's exactly what I tried with a recent post on &lt;a href="https://markheath.net/post/2026/3/31/securing-backend-appservices-private-endpoints"&gt;private endpoints&lt;/a&gt;. Claude Code not only created the Bicep and test application, but once it was done I got it to write up the instructions and even create a GitHub repo of sample code. The end result was far more thorough than I would have managed myself, and although I read the whole thing carefully and edited it a bit, I have to admit that most of the time I couldn't think of better ways to phrase each sentence, so a lot of it ended up unchanged.&lt;/p&gt;
&lt;p&gt;That left a bad taste in my mouth to be honest. If I do that too often will I lose credibility and scare away readers? And yet I do feel like it was a genuinely valuable article that shows how to solve a problem that I'd been wanting to blog about for a long time.&lt;/p&gt;
&lt;h3 id="level-4-ai-only"&gt;Level 4: AI only&lt;/h3&gt;
&lt;p&gt;Of course, there is a level further, and now we are getting to the dark side. Could I ask Claude or ChatGPT to write me a blog post and just publish it without even reading it myself? I could instruct it to mimic my writing style, and it might even do a good enough job to go unnoticed. Maybe at some point in the future, Claude can dethrone my most popular article with one it wrote entirely itself.&lt;/p&gt;
&lt;p&gt;To be honest, I have no interest in doing that at all - it undermines the &lt;em&gt;purpose&lt;/em&gt; of this blog, which is to be a way for &lt;em&gt;me&lt;/em&gt; to share the things that &lt;em&gt;I&lt;/em&gt; have learned. So I can assure you I have no intention of filling this site up with &amp;quot;slop&amp;quot; articles where the LLM has come up with the idea, written and tested the code, and published the article without me being involved at all.&lt;/p&gt;
&lt;p&gt;But interestingly, this approach might make sense for back-filling the documentation for my open-source project &lt;a href="https://github.com/naudio/NAudio/"&gt;NAudio&lt;/a&gt;. Over the years I've written close to one hundred tutorials but there are still major gaps in the documentation.&lt;/p&gt;
&lt;p&gt;I'm thinking of experimenting with asking Claude Code to write a short tutorial for every public class in the NAudio repo, and then to check its work by following the tutorial and making sure it really works.&lt;/p&gt;
&lt;p&gt;I expect we're going to see an explosion of this approach too, and it could be a genuine positive for the open source community, where documentation is often lacking or outdated. If LLMs are to make a positive contribution to the world of coding tutorials, this is probably one of the best ways they can be utilized.&lt;/p&gt;
&lt;h2 id="why-tech-blogging-still-matters"&gt;Why tech blogging still matters&lt;/h2&gt;
&lt;p&gt;If you're still with me at this point, well done - I know I've gone on too long. Even humans can be as long-winded as LLMs sometimes. But the process of writing down my thoughts on this issue has helped me gain some clarity, and made me realise that it doesn't necessarily matter whether I take an AI-free, AI-assisted or even an AI-first approach to my posts.&lt;/p&gt;
&lt;p&gt;The value of sharing these coding tutorials is that the problems I'm solving are &lt;em&gt;real-world problems&lt;/em&gt;. They are tasks that I genuinely needed to accomplish, and came with unique constraints and requirements that are specific to my circumstances. That gives them an authenticity that an AI can't fake. At best it can guess at what humans might want to achieve, and create tutorials about that.&lt;/p&gt;
&lt;p&gt;So when I'm reading your tech blog (which I hope you'll share a link to), I won't really care whether or not you used ChatGPT to create the sample code, or make you sound like a Pulitzer prize winner. I'll be interested because you're sharing &lt;em&gt;your&lt;/em&gt; experience of how you solved a problem using the tools at your disposal.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/4/1/future-of-tech-blogging</guid>
  <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Securing Back-end App Service Web Apps with Private Endpoints</title>
  <link>https://markheath.net/post/2026/3/31/securing-backend-appservices-private-endpoints</link>
  <description>&lt;p&gt;Back in 2019, I wrote about &lt;a href="https://markheath.net/post/2019/5/24/securing-backend-appservice-webapps"&gt;securing back-end App Service web apps using VNets and Service Endpoints&lt;/a&gt;. That approach worked well at the time, but Azure has moved on significantly since then. In this post, I'll show the modern way to achieve the same thing using &lt;strong&gt;Private Endpoints&lt;/strong&gt; — which is now Microsoft's recommended approach.&lt;/p&gt;
&lt;h2 id="the-problem"&gt;The Problem&lt;/h2&gt;
&lt;p&gt;The scenario is the same as before. You have a front-end web app and a back-end API, both hosted on Azure App Service. End users need to reach the front-end, but the back-end should only be callable from the front-end. No one on the public internet should be able to reach it directly.&lt;/p&gt;
&lt;pre class="mermaid"&gt;flowchart LR
    User([Internet User]) --&gt;|✅ allowed| FE[Frontend Web App]
    FE --&gt;|✅ allowed| BE[Backend API]
    User --&gt;|❌ blocked| BE
&lt;/pre&gt;
&lt;p&gt;This is a standard multi-tier architecture. With VMs or containers in a VNet, you'd simply not expose a public endpoint for the back-end. But App Service web apps have always had public endpoints by default — and until recently, locking them down was either fiddly (Service Endpoints) or expensive (App Service Environments).&lt;/p&gt;
&lt;h2 id="what-changed-since-2019"&gt;What Changed Since 2019?&lt;/h2&gt;
&lt;p&gt;My 2019 approach used three features together:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;VNet Integration&lt;/strong&gt; — route the front-end's outbound traffic through a VNet&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Service Endpoints&lt;/strong&gt; — set up routing so App Service traffic flows through the VNet&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Access Restrictions&lt;/strong&gt; — whitelist the VNet subnet on the back-end&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This worked, but had some limitations. Service Endpoints don't prevent data exfiltration (traffic is scoped to the entire App Service, not your specific app), and the Access Restrictions approach required some tricky Azure CLI commands to set up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Private Endpoints&lt;/strong&gt; are now the recommended replacement. Microsoft's documentation is &lt;a href="https://learn.microsoft.com/en-us/azure/virtual-network/vnet-integration-for-azure-services#compare-private-endpoints-and-service-endpoints"&gt;explicit about this&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Microsoft recommends using Azure Private Link. Private Link offers better capabilities for privately accessing PaaS from on-premises, provides built-in data-exfiltration protection, and maps services to private IPs in your own network.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's what makes Private Endpoints better:&lt;/p&gt;
&lt;table class="md-table"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Service Endpoints (2019)&lt;/th&gt;
&lt;th&gt;Private Endpoints (2026)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scope&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Entire App Service&lt;/td&gt;
&lt;td&gt;Your specific app only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data exfiltration protection&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Public access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Still reachable (blocked by rules)&lt;/td&gt;
&lt;td&gt;Blocked (access restrictions + Private Endpoint)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;On-premises access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (via VPN/ExpressRoute)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Setup complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Moderate (fiddly CLI commands)&lt;/td&gt;
&lt;td&gt;Straightforward (Bicep)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;~$8/month&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The small cost is well worth it for the significantly stronger security posture.&lt;/p&gt;
&lt;h2 id="architecture-overview"&gt;Architecture Overview&lt;/h2&gt;
&lt;p&gt;Here's what we're going to build:&lt;/p&gt;
&lt;pre class="mermaid"&gt;flowchart TB
    subgraph Internet
        User([Internet User])
    end

    subgraph Azure
        subgraph VNet ["Virtual Network (10.0.0.0/16)"]
            subgraph IntSub ["integration-subnet (10.0.0.0/24)"]
            end
            subgraph PeSub ["pe-subnet (10.0.1.0/24)"]
                PE[Private Endpoint&amp;lt;br/&gt;10.0.1.4]
            end
        end

        FE[Frontend Web App&amp;lt;br/&gt;public access]
        BE[Backend API&amp;lt;br/&gt;main site blocked]
        DNS[Private DNS Zone&amp;lt;br/&gt;privatelink.azurewebsites.net]
    end

    User --&gt;|HTTPS| FE
    FE -.-&gt;|VNet Integration| IntSub
    IntSub --&gt;|private network| PE
    PE --&gt;|Private Link| BE
    DNS -.-&gt;|resolves backend&amp;lt;br/&gt;to 10.0.1.4| VNet
    User -.-&gt;|❌ 403 Forbidden| BE

    style BE fill:#f96,stroke:#333
    style FE fill:#6f9,stroke:#333
    style PE fill:#69f,stroke:#333
&lt;/pre&gt;
&lt;p&gt;The key components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VNet&lt;/strong&gt; with two subnets: one for VNet Integration (delegated to App Service), one for the Private Endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Frontend Web App&lt;/strong&gt; — publicly accessible, with VNet Integration so its outbound traffic goes through the VNet&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backend API&lt;/strong&gt; — main site blocked by access restrictions, reachable only via Private Endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Private DNS Zone&lt;/strong&gt; — resolves &lt;code&gt;backend-xxx.azurewebsites.net&lt;/code&gt; to the private IP within the VNet&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When the front-end calls the back-end, DNS resolution within the VNet returns the private IP (e.g. &lt;code&gt;10.0.1.4&lt;/code&gt;), and traffic flows through the Microsoft backbone via Private Link. Anyone on the public internet trying to reach the back-end gets a &lt;code&gt;403 Forbidden&lt;/code&gt;.&lt;/p&gt;
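&lt;p&gt;If you want to sanity-check the DNS split yourself, one option is to compare what the back-end hostname resolves to from different vantage points. Here's a rough sketch using PowerShell's &lt;code&gt;Resolve-DnsName&lt;/code&gt; (the &lt;code&gt;backend-xxx&lt;/code&gt; name is a placeholder for your actual app name):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# From your own machine (outside the VNet): once the Private Endpoint exists,
# public DNS returns a CNAME via privatelink.azurewebsites.net, but it still
# ultimately resolves to a public IP (where any request gets the 403)
Resolve-DnsName backend-xxx.azurewebsites.net

# Only from inside the VNet (e.g. from the front-end app) does the linked
# Private DNS Zone resolve the same name to the private IP, 10.0.1.4
&lt;/code&gt;&lt;/pre&gt;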
&lt;h2 id="the-sample-apps"&gt;The Sample Apps&lt;/h2&gt;
&lt;p&gt;I used Claude Code to help me create two minimal ASP.NET Core (.NET 10) apps to demonstrate this. The back-end is a simple API:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet(&amp;quot;/api/greeting&amp;quot;, () =&amp;gt; new
{
    message = &amp;quot;Hello from the secure backend!&amp;quot;,
    timestamp = DateTime.UtcNow
});

app.MapGet(&amp;quot;/health&amp;quot;, () =&amp;gt; Results.Ok(&amp;quot;Healthy&amp;quot;));

app.Run();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The front-end is a Razor Pages app that calls the back-end. The key part is the &lt;code&gt;HttpClient&lt;/code&gt; setup in &lt;code&gt;Program.cs&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;builder.Services.AddHttpClient(&amp;quot;BackendApi&amp;quot;, client =&amp;gt;
{
    var baseUrl = builder.Configuration[&amp;quot;BackendApi:BaseUrl&amp;quot;]
        ?? &amp;quot;http://localhost:5100&amp;quot;;
    client.BaseAddress = new Uri(baseUrl);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the page model that calls it:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;public async Task OnGetAsync()
{
    try
    {
        var client = _httpClientFactory.CreateClient(&amp;quot;BackendApi&amp;quot;);
        var response = await client.GetAsync(&amp;quot;/api/greeting&amp;quot;);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadFromJsonAsync&amp;lt;JsonElement&amp;gt;();
        GreetingMessage = json.GetProperty(&amp;quot;message&amp;quot;).GetString();
        GreetingTimestamp = json.GetProperty(&amp;quot;timestamp&amp;quot;).GetString();
    }
    catch (Exception ex)
    {
        ErrorMessage = $&amp;quot;Failed to reach backend: {ex.Message}&amp;quot;;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All pretty straightforward. The backend security is handled entirely by the infrastructure.&lt;/p&gt;
&lt;h2 id="the-bicep-template"&gt;The Bicep Template&lt;/h2&gt;
&lt;p&gt;This is where the interesting stuff happens. Let's look at the key resources.&lt;/p&gt;
&lt;h3 id="virtual-network"&gt;Virtual Network&lt;/h3&gt;
&lt;p&gt;We need a VNet with two subnets. The integration subnet is delegated to &lt;code&gt;Microsoft.Web/serverFarms&lt;/code&gt; (required for VNet Integration). The private endpoint subnet has &lt;code&gt;privateEndpointNetworkPolicies&lt;/code&gt; set to &lt;code&gt;Disabled&lt;/code&gt; (required for Private Endpoints).&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;resource vnet 'Microsoft.Network/virtualNetworks@2024-05-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.0.0.0/16']
    }
    subnets: [
      {
        name: 'integration-subnet'
        properties: {
          addressPrefix: '10.0.0.0/24'
          delegations: [{
            name: 'delegation'
            properties: {
              serviceName: 'Microsoft.Web/serverFarms'
            }
          }]
        }
      }
      {
        name: 'pe-subnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
          privateEndpointNetworkPolicies: 'Disabled'
        }
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="app-service-plan-and-web-apps"&gt;App Service Plan and Web Apps&lt;/h3&gt;
&lt;p&gt;Both apps share a single Linux B1 App Service Plan. The front-end has &lt;code&gt;virtualNetworkSubnetId&lt;/code&gt; set to the integration subnet, which routes its outbound traffic through the VNet.&lt;/p&gt;
&lt;p&gt;For the back-end, you might think we'd just set &lt;code&gt;publicNetworkAccess: 'Disabled'&lt;/code&gt;. That does work for blocking internet traffic, but it also blocks the SCM/Kudu deployment endpoint — meaning you can't deploy your code with &lt;code&gt;az webapp deploy&lt;/code&gt; any more. Instead, we use access restrictions: &lt;code&gt;ipSecurityRestrictionsDefaultAction: 'Deny'&lt;/code&gt; blocks all public traffic to the main site, while &lt;code&gt;scmIpSecurityRestrictionsUseMain: false&lt;/code&gt; with &lt;code&gt;scmIpSecurityRestrictionsDefaultAction: 'Allow'&lt;/code&gt; keeps the deployment endpoint accessible. The Private Endpoint ensures the front-end can still reach the back-end over the private network.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;resource appServicePlan 'Microsoft.Web/serverfarms@2024-04-01' = {
  name: appServicePlanName
  location: location
  kind: 'linux'
  sku: { name: 'B1' }
  properties: { reserved: true }
}

resource backendApp 'Microsoft.Web/sites@2024-04-01' = {
  name: backendAppName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    publicNetworkAccess: 'Enabled'
    siteConfig: {
      linuxFxVersion: 'DOTNETCORE|10.0'
      ipSecurityRestrictionsDefaultAction: 'Deny'
      scmIpSecurityRestrictionsUseMain: false
      scmIpSecurityRestrictionsDefaultAction: 'Allow'
    }
  }
}

resource frontendApp 'Microsoft.Web/sites@2024-04-01' = {
  name: frontendAppName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    virtualNetworkSubnetId: vnet.properties.subnets[0].id
    siteConfig: {
      linuxFxVersion: 'DOTNETCORE|10.0'
      appSettings: [{
        name: 'BackendApi__BaseUrl'
        value: 'https://${backendAppName}.azurewebsites.net'
      }]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that our approach leaves the SCM/Kudu deployment endpoint publicly accessible (it's authenticated, so the risk is low). If you want to eliminate that surface area entirely, you could set &lt;code&gt;publicNetworkAccess: 'Disabled'&lt;/code&gt; and use an alternative deployment method that bypasses Kudu — for example, run-from-package with &lt;code&gt;WEBSITE_RUN_FROM_PACKAGE&lt;/code&gt; pointing at a blob storage URL, or containerizing your app and pulling from ACR. Both approaches mean the backend never needs a public endpoint at all, though you may need to add VNet integration to the backend for outbound access to the storage account or registry if those are private too.&lt;/p&gt;
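&lt;p&gt;As a rough, untested sketch of that run-from-package variant (the storage account name is a placeholder, and you'd need a SAS URL or a managed identity so the app can read the blob):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# Run the app directly from a package in blob storage, bypassing Kudu
az webapp config appsettings set `
    -g SecureBackendDemo -n $backendAppName `
    --settings WEBSITE_RUN_FROM_PACKAGE=&amp;quot;https://&amp;lt;storage-account&amp;gt;.blob.core.windows.net/packages/backend.zip&amp;quot;

# With Kudu no longer needed for deployment, public access can be disabled entirely
az webapp update -g SecureBackendDemo -n $backendAppName --set publicNetworkAccess=Disabled
&lt;/code&gt;&lt;/pre&gt;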
&lt;h3 id="private-endpoint-and-dns"&gt;Private Endpoint and DNS&lt;/h3&gt;
&lt;p&gt;The Private Endpoint creates a network interface in the PE subnet that's connected to the back-end app. The Private DNS Zone ensures that &lt;code&gt;backend-xxx.azurewebsites.net&lt;/code&gt; resolves to the private IP when queried from within the VNet.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;resource privateEndpoint 'Microsoft.Network/privateEndpoints@2024-05-01' = {
  name: 'pe-${backendAppName}'
  location: location
  properties: {
    subnet: { id: vnet.properties.subnets[1].id }
    privateLinkServiceConnections: [{
      name: 'pe-${backendAppName}'
      properties: {
        privateLinkServiceId: backendApp.id
        groupIds: ['sites']
      }
    }]
  }
}

resource privateDnsZone 'Microsoft.Network/privateDnsZones@2024-06-01' = {
  name: 'privatelink.azurewebsites.net'
  location: 'global'
}

resource dnsZoneLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2024-06-01' = {
  parent: privateDnsZone
  name: '${vnetName}-link'
  location: 'global'
  properties: {
    virtualNetwork: { id: vnet.id }
    registrationEnabled: false
  }
}

resource dnsZoneGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2024-05-01' = {
  parent: privateEndpoint
  name: 'default'
  properties: {
    privateDnsZoneConfigs: [{
      name: 'privatelink-azurewebsites-net'
      properties: { privateDnsZoneId: privateDnsZone.id }
    }]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="deploying"&gt;Deploying&lt;/h2&gt;
&lt;p&gt;I've created a PowerShell deployment script that uses the Azure CLI. Here are the key steps:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# Create the resource group
az group create -n SecureBackendDemo -l uksouth

# Deploy the Bicep template
az deployment group create `
    -g SecureBackendDemo `
    --template-file ./infra/main.bicep
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Bicep deployment creates all the networking and App Service resources. After that, we publish and deploy both .NET apps:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# Build and publish
dotnet publish src/Backend/Backend.csproj -c Release -o publish/backend
dotnet publish src/Frontend/Frontend.csproj -c Release -o publish/frontend

# Package as zip
Compress-Archive -Path &amp;quot;publish/backend/*&amp;quot; -DestinationPath publish/backend.zip
Compress-Archive -Path &amp;quot;publish/frontend/*&amp;quot; -DestinationPath publish/frontend.zip

# Deploy to App Service
az webapp deploy -g SecureBackendDemo -n $backendAppName --src-path publish/backend.zip --type zip
az webapp deploy -g SecureBackendDemo -n $frontendAppName --src-path publish/frontend.zip --type zip
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The full deployment script is in the &lt;a href="https://github.com/markheath/securing-backend-appservices-private-endpoints/blob/main/deploy/deploy.ps1"&gt;repository&lt;/a&gt; — just run &lt;code&gt;.\deploy\deploy.ps1&lt;/code&gt; and it handles everything.&lt;/p&gt;
&lt;h2 id="testing"&gt;Testing&lt;/h2&gt;
&lt;p&gt;Once deployed, we can verify the security is working:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Test 1: Frontend is accessible and shows the backend greeting&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;$response = Invoke-WebRequest -Uri $frontendUrl -UseBasicParsing
# Should return 200 with &amp;quot;Hello from the secure backend!&amp;quot; in the HTML
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Test 2: Backend is NOT accessible from the internet&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;Invoke-WebRequest -Uri &amp;quot;$backendUrl/api/greeting&amp;quot; -UseBasicParsing
# Should return 403 Forbidden
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The test script (&lt;code&gt;deploy/test.ps1&lt;/code&gt;) automates both checks.&lt;/p&gt;
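&lt;p&gt;If you prefer to check the configuration rather than the behaviour, the az CLI can also show what's in place (a quick sketch, reusing the resource names from the deployment above):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# The main site rules should show a default action of Deny,
# while the SCM site keeps its default of Allow
az webapp config access-restriction show -g SecureBackendDemo -n $backendAppName

# Confirm the Private Endpoint exists and its connection is approved
az network private-endpoint show -g SecureBackendDemo -n &amp;quot;pe-$backendAppName&amp;quot;
&lt;/code&gt;&lt;/pre&gt;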
&lt;h2 id="cost-breakdown"&gt;Cost Breakdown&lt;/h2&gt;
&lt;p&gt;Here's what this setup costs beyond the App Service Plan itself:&lt;/p&gt;
&lt;table class="md-table"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource&lt;/th&gt;
&lt;th&gt;Monthly Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Virtual Network&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VNet Integration&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private Endpoint&lt;/td&gt;
&lt;td&gt;~$7.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private DNS Zone&lt;/td&gt;
&lt;td&gt;~$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS queries&lt;/td&gt;
&lt;td&gt;~$0.40 per million&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total networking overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$8/month&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Compare that to alternatives like Application Gateway (~$200/month), APIM (~$300/month), or an App Service Environment (~$1,000/month). For simple back-end lockdown scenarios, Private Endpoints are by far the most cost-effective option.&lt;/p&gt;
&lt;h2 id="cleaning-up"&gt;Cleaning Up&lt;/h2&gt;
&lt;p&gt;Since everything is in a single resource group, cleanup is one command:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;az group delete -n SecureBackendDemo --yes --no-wait
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;Private Endpoints have replaced Service Endpoints as the recommended way to secure back-end App Services. The setup is more straightforward (especially with Bicep), the security is stronger (true private IP, data exfiltration protection), and the cost is minimal (~$8/month). If you're still using the Service Endpoints approach from my 2019 post, it's worth upgrading.&lt;/p&gt;
&lt;p&gt;The complete source code for this demo — including the .NET apps, Bicep template, and deployment scripts — is &lt;a href="https://github.com/markheath/securing-backend-appservices-private-endpoints"&gt;available on GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Azure</category>
  <category>App Service</category>
  <category>Azure CLI</category>
  <category>Bicep</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/3/31/securing-backend-appservices-private-endpoints</guid>
  <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Does Code Quality Still Matter in the Age of AI-Assisted Coding?</title>
  <link>https://markheath.net/post/2026/3/30/does-code-quality-still-matter</link>
  <description>&lt;p&gt;I'm increasingly hearing the sentiment that now AI models can write code for us, we no longer need to concern ourselves with concepts like &amp;quot;clean code&amp;quot;, eliminating code smells, following SOLID principles etc. All of these concerns, it's argued, are purely an attempt to make the codebase more comprehensible for &lt;em&gt;humans&lt;/em&gt;. But if humans are no longer reading the code, what does it matter? The only thing we should care about is whether the code &lt;em&gt;works&lt;/em&gt; correctly or not.&lt;/p&gt;
&lt;p&gt;I can partially understand this perspective. One great strength of AI agents is that they never tire. You can ask them to work on a &amp;quot;&lt;a href="https://www.geeksforgeeks.org/system-design/big-ball-of-mud-anti-pattern/"&gt;big ball of mud&lt;/a&gt;&amp;quot; and they won't complain. They don't mind if it's a giant convoluted monolith or an over-engineered set of microservices spread across multiple repos. They will just keep searching around in the code until they eventually find the bit they need to change.&lt;/p&gt;
&lt;p&gt;However, I think that this is a mistake - even if we grant that we don't need code to be &amp;quot;human readable&amp;quot; any more (which I'm also not convinced of - I still find it very useful to check in on how an agent is going about tackling a particular problem). Let me give just a few quick reasons why following these &amp;quot;traditional&amp;quot; coding guidelines still matters.&lt;/p&gt;
&lt;h2 id="finding-the-right-place"&gt;Finding the right place&lt;/h2&gt;
&lt;p&gt;The first thing a coding agent needs to do when fixing a bug or adding a new feature, is to determine where in the codebase that change should be made. This involves searching, and if you look at the model's reasoning steps and tool calls you can see what it searches for (spoiler alert: it's mostly just grepping for words it thinks might be relevant).&lt;/p&gt;
&lt;p&gt;This has several implications. First, it means that if our naming is weird or inconsistent, it will require more attempts to find the right place, slowing your agent's progress considerably.&lt;/p&gt;
&lt;p&gt;Second, it means that it is quite possible that it will miss some relevant portions of the codebase. The &amp;quot;&lt;a href="https://en.wikipedia.org/wiki/Shotgun_surgery"&gt;shotgun surgery&lt;/a&gt;&amp;quot; antipattern is where you need to modify many different files to implement a single feature. It's often the result of copy and pasted code, or just poor architectural decisions that don't organize key responsibilities or cross-cutting concerns into a single place. When you have code like this, the chances of your agent successfully finding all the places that need to be modified are greatly diminished.&lt;/p&gt;
&lt;p&gt;Then, there's a context window size problem. In an ideal world, the agent would read the entire codebase in one go and reason about it as a unified whole. But that's simply not how they work at the moment, partly because context windows aren't large enough (despite some recent models having a 1M token context window), and partly because the quality of model output tends to degrade the longer your session grows.&lt;/p&gt;
&lt;p&gt;This means, for example, that following the &amp;quot;Single Responsibility Principle&amp;quot; will greatly help the model. Once it's found the single class that is relevant to the task at hand, it can read it all without polluting the context window with lots of additional, irrelevant code.&lt;/p&gt;
&lt;p&gt;So a well-organized, modular codebase, with well-named functions and classes is going to greatly enhance the effectiveness of an AI agent working on that project, increasing its chances of quickly finding the right place to edit.&lt;/p&gt;
&lt;p&gt;The cost aspect of this should not be underestimated. These agents can quickly burn through very large amounts of tokens, and it does seem that many of the subscription models are unsustainably subsidised at the moment.&lt;/p&gt;
&lt;p&gt;This means that in the (perhaps very near) future, we'll all be thinking a lot harder about how to make our agents read less code and perform fewer tool calls. The fact that each agent session starts out fresh means that it often has to spend time re-learning things it previously discovered. Already we are seeing many projects designed to address this problem (e.g. I just stumbled across &lt;a href="https://github.com/theDakshJaitly/mex"&gt;this one&lt;/a&gt; today).&lt;/p&gt;
&lt;h2 id="its-not-just-the-how-but-the-what-and-why"&gt;It's not just the how but the what and why&lt;/h2&gt;
&lt;p&gt;Code is instructions to the computer about what it should do. It expresses the &amp;quot;how&amp;quot; but not &amp;quot;what&amp;quot; or &amp;quot;why&amp;quot;. That's why good class and method names and code comments are important. They provide valuable additional context to the human reading it so they can understand the &lt;em&gt;intent&lt;/em&gt; of the code. This contextual information is just as relevant to agents who need to make connections between the natural language instructions that you provide them, and the concepts found in the codebase.&lt;/p&gt;
&lt;h2 id="the-best-way-vs-the-quickest-way"&gt;The best way vs the quickest way&lt;/h2&gt;
&lt;p&gt;AI agents are very goal-oriented. Ask them to fix a bug or to add a feature and they will find a way to do it. Unless you explicitly instruct them to, they won't push back on the request, or propose alternative better strategies.&lt;/p&gt;
&lt;p&gt;When a human developer is fixing a bug, they will often take a step back and ask whether this bug is actually an example of a wider category of problems. So we might actually &lt;em&gt;increase&lt;/em&gt; the scope of the task at hand in order to prevent many similar issues in the future.&lt;/p&gt;
&lt;p&gt;I'm increasingly seeing the idea that we could set up an automated process whereby every time an issue is raised on your GitHub repo, an agent triages it, attempts to fix it, creates and merges the PR. This is of course incredibly appealing - imagine if 90% of bugs were just automatically fixed within hours of being reported.&lt;/p&gt;
&lt;p&gt;But unless this &amp;quot;bigger picture&amp;quot; thinking can also be baked into the fixing process, this approach could result in the classic &amp;quot;technical debt&amp;quot; problem where every issue is resolved in the &amp;quot;quickest way&amp;quot; without regard to the longer-term maintainability implications.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;Code quality still matters for any codebase that you plan to improve and maintain long-term. Even if humans don't have to suffer the pain of reading poorly architected codebases, the effectiveness of AI agents can be significantly hindered by allowing structure to degrade. Investing in code quality (even if it's just instructing the agents to do some rounds of cleanup and improvements after each task) will provide a stronger foundation for future development.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <category>SOLID</category>
  <category>Code Smells</category>
  <category>Clean Code</category>
  <category>Technical Debt</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/3/30/does-code-quality-still-matter</guid>
  <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Protecting Against Concurrent Updates in Azure Blob Storage with ETags</title>
  <link>https://markheath.net/post/2026/2/9/azure-blob-storage-etag-concurrency</link>
  <description>&lt;p&gt;I recently had to deal with a situation where there was potential for multiple processes to attempt to modify the same Azure blob at the same time.&lt;/p&gt;
&lt;p&gt;By default, if two processes read the same Azure blob and then both try to write updated content back, one of them will silently overwrite the other's changes. Fortunately, Azure Blob Storage provides a built-in mechanism to prevent this, called ETags. An ETag is simply a version token that changes every time a blob is modified. By passing the ETag you read back as a condition on your write, you can tell Azure to &amp;quot;only accept this update if nobody else has changed the blob since I last read it.&amp;quot; If someone else got there first, Azure returns a &lt;code&gt;412 Precondition Failed&lt;/code&gt; and you can retry with fresh data.&lt;/p&gt;
&lt;p&gt;Let's take a look at how to implement an optimistic concurrency pattern using ETags in C#.&lt;/p&gt;
&lt;h2 id="setting-up-the-clients"&gt;Setting Up the Clients&lt;/h2&gt;
&lt;p&gt;First, let's get connected to the storage account using the convenient &lt;code&gt;DefaultAzureCredential&lt;/code&gt; to avoid hard-coding any keys.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;var serviceUri = new Uri(&amp;quot;https://youraccountname.blob.core.windows.net/&amp;quot;);
var credential = new DefaultAzureCredential();
var blobServiceClient = new BlobServiceClient(serviceUri, credential);
var containerClient = blobServiceClient.GetBlobContainerClient(&amp;quot;your-container&amp;quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You'll need to reference the &lt;code&gt;Azure.Identity&lt;/code&gt; and &lt;code&gt;Azure.Storage.Blobs&lt;/code&gt; NuGet packages.&lt;/p&gt;
&lt;h2 id="fetching-the-blob-and-its-etag"&gt;Fetching the Blob and Its ETag&lt;/h2&gt;
&lt;p&gt;The crucial step is to retrieve the ETag alongside the blob content. Here I've made a simple helper called &lt;code&gt;FetchContentsAsync&lt;/code&gt; that returns both in one call:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;async Task&amp;lt;(string, ETag)&amp;gt; FetchContentsAsync(BlobClient blobClient)
{
    try
    {
        var content = await blobClient.DownloadContentAsync();
        return (content.Value.Content.ToString(), content.Value.Details.ETag);
    }
    catch (RequestFailedException ex) when (ex.Status == 404)
    {
        // Blob doesn't exist yet; return empty content and a default (empty) ETag
        return (string.Empty, default);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="writing-back-with-an-etag-condition"&gt;Writing Back with an ETag Condition&lt;/h2&gt;
&lt;p&gt;Now that we've retrieved the existing blob contents, let's imagine that we've updated them and now we want to re-upload.&lt;/p&gt;
&lt;p&gt;Before uploading, we need to set &lt;code&gt;BlobRequestConditions&lt;/code&gt; on the upload options. There are two cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Blob didn't exist&lt;/strong&gt; (&lt;code&gt;etag == default&lt;/code&gt;): use &lt;code&gt;IfNoneMatch = new ETag(&amp;quot;*&amp;quot;)&lt;/code&gt; so the upload only succeeds if the blob still doesn't exist.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Blob already exists&lt;/strong&gt;: use &lt;code&gt;IfMatch = etag&lt;/code&gt; so the upload only succeeds if the blob's current ETag still matches the one we read.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If the condition fails, Azure returns &lt;code&gt;412 Precondition Failed&lt;/code&gt; and the SDK throws a &lt;code&gt;RequestFailedException&lt;/code&gt;. We catch that and return &lt;code&gt;false&lt;/code&gt; to signal a conflict.&lt;/p&gt;
&lt;p&gt;Again I've created a simple helper method &lt;code&gt;UpdateContentsAsync&lt;/code&gt; to show how we can do this and detect the concurrency issue.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;async Task&amp;lt;bool&amp;gt; UpdateContentsAsync(BlobClient blobClient, string contents, ETag etag)
{
    var uploadOptions = new BlobUploadOptions
    {
        Conditions = etag == default
            // Blob didn't exist: only create if still absent
            ? new BlobRequestConditions { IfNoneMatch = new ETag(&amp;quot;*&amp;quot;) }
            // Blob existed: only overwrite if ETag still matches
            : new BlobRequestConditions { IfMatch = etag }
    };

    try
    {
        await blobClient.UploadAsync(BinaryData.FromString(contents), uploadOptions);
        return true;
    }
    catch (RequestFailedException ex) when (ex.Status == 412 || ex.ErrorCode == BlobErrorCode.ConditionNotMet)
    {
        // Another writer changed the blob between our read and write
        return false;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="retrying-with-exponential-backoff"&gt;Retrying with Exponential Backoff&lt;/h2&gt;
&lt;p&gt;A conflict just means someone else updated the blob first, so we don't need to give up. Instead we can simply fetch the latest version and try again. To avoid many processes all retrying in lockstep, we back off exponentially and add a small random jitter to each delay.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;async Task ModifyBlob(
    BlobContainerClient container,
    string blobName,
    Func&amp;lt;string, Task&amp;lt;string&amp;gt;&amp;gt; transform,
    CancellationToken ct)
{
    ArgumentNullException.ThrowIfNull(transform);

    var maxRetries = 5;
    var attempt = 0;
    var delay = TimeSpan.FromSeconds(2);
    var blobClient = container.GetBlobClient(blobName);

    while (attempt &amp;lt; maxRetries)
    {
        ct.ThrowIfCancellationRequested();
        attempt++;

        var (contents, etag) = await FetchContentsAsync(blobClient);
        var newContents = await transform(contents);

        if (await UpdateContentsAsync(blobClient, newContents, etag))
            return; // success

        // Back off before retrying
        var jitterMs = Random.Shared.Next(0, 100);
        await Task.Delay(delay + TimeSpan.FromMilliseconds(jitterMs), ct);

        // Exponential backoff, capped at 5 seconds
        delay = TimeSpan.FromMilliseconds(Math.Min(delay.TotalMilliseconds * 2, 5_000));
    }

    throw new InvalidOperationException(
        $&amp;quot;Failed to update blob '{blobName}' after {maxRetries} attempts.&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;transform&lt;/code&gt; delegate receives the current blob content and returns the new content. &lt;code&gt;ModifyBlob&lt;/code&gt; handles all the retry logic so callers don't need to think about ETags at all.&lt;/p&gt;
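&lt;p&gt;For example, a caller that appends a line to a shared log blob (reusing the &lt;code&gt;containerClient&lt;/code&gt; from earlier; the blob name here is just for illustration) might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;await ModifyBlob(
    containerClient,
    &amp;quot;activity-log.txt&amp;quot;,
    current =&amp;gt; Task.FromResult($&amp;quot;{current}\r\nProcessed batch at {DateTimeOffset.UtcNow}&amp;quot;),
    CancellationToken.None);
&lt;/code&gt;&lt;/pre&gt;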
&lt;h2 id="seeing-concurrency-conflicts-in-action"&gt;Seeing Concurrency Conflicts in Action&lt;/h2&gt;
&lt;p&gt;To check this actually works, we can use a variant of the helper (here called &lt;code&gt;ModifyBlobTest&lt;/code&gt;, which adds some console logging) to simulate two concurrent updaters. The outer transform, before writing its own change, triggers an inner call to modify the blob that successfully commits first. When control returns to the outer call its ETag is now stale, so the first attempt fails and the retry loop kicks in.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;bool firstTime = true;

await ModifyBlobTest(containerClient, blobName, async currentContent =&amp;gt;
{
    if (firstTime)
    {
        // While the outer call holds its ETag, the inner call commits a change,
        // invalidating the outer ETag.
        await ModifyBlobTest(
            containerClient, blobName,
            c =&amp;gt; Task.FromResult($&amp;quot;{c}\r\nInner update {DateTimeOffset.Now}&amp;quot;),
            CancellationToken.None);
    }
    firstTime = false;
    return $&amp;quot;{currentContent}\r\nOuter conflicting update {DateTimeOffset.Now}&amp;quot;;
}, CancellationToken.None);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On the first pass through the outer loop you'll see output like:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-txt"&gt;Successful update of etag &amp;quot;0x1234...&amp;quot;   ← inner update wins
Concurrency conflict detected. Old ETag: &amp;quot;0x1234...&amp;quot;  ← outer detects stale ETag
Successful update of etag &amp;quot;0x5678...&amp;quot;   ← outer retries and succeeds
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Both updates end up in the blob — neither is lost.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;ETags give you a simple optimistic concurrency mechanism for Azure Blob Storage: read the blob and its ETag together, apply your changes, then write back with a condition that fails if the blob has been modified in the meantime. If you wrap that in a retry loop with exponential backoff and jitter, you have a robust pattern that handles any number of concurrent writers without data loss or locks.&lt;/p&gt;
&lt;p&gt;Obviously in an ideal world you wouldn't be making lots of concurrent updates to blobs, but if you do, you can use the approach demonstrated by the &lt;code&gt;ModifyBlob&lt;/code&gt; helper above.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Azure</category>
  <category>.NET</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/2/9/azure-blob-storage-etag-concurrency</guid>
  <pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>EF Core Lazy Loading Performance Gotcha</title>
  <link>https://markheath.net/post/2026/1/8/efcore-lazy-loader-gotcha</link>
  <description>&lt;p&gt;I was recently using EF Core's &lt;code&gt;ILazyLoader&lt;/code&gt; for &lt;a href="https://learn.microsoft.com/en-us/ef/core/querying/related-data/lazy#lazy-loading-without-proxies"&gt;lazy loading without proxies&lt;/a&gt;, and ran into a performance issue that took me by surprise. When you call &lt;code&gt;DbSet&amp;lt;T&amp;gt;.Add()&lt;/code&gt; to add an entity to the context, EF Core immediately injects the lazy loader into your entity even before you've called &lt;code&gt;SaveChangesAsync()&lt;/code&gt;. This means if you navigate to a lazy-loaded navigation property before persisting, EF Core will try to query the database for related entities that don't exist yet.&lt;/p&gt;
&lt;p&gt;It's an unnecessary performance overhead and the fix is fortunately very simple: don't add entities to the DbContext until right before you're ready to call &lt;code&gt;SaveChangesAsync()&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="the-model"&gt;The Model&lt;/h2&gt;
&lt;p&gt;To understand how it behaves I created a simple example project using a &lt;code&gt;Blog&lt;/code&gt; and &lt;code&gt;Post&lt;/code&gt; relationship with &lt;code&gt;ILazyLoader&lt;/code&gt; injection:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;public class Blog
{
    private ICollection&amp;lt;Post&amp;gt;? _posts;
    private ILazyLoader? _lazyLoader;

    public Blog() {}

    public Blog(ILazyLoader lazyLoader)
    {
        _lazyLoader = lazyLoader;
    }

    public int Id { get; set; }
    public required string Name { get; set; }
    
    public virtual ICollection&amp;lt;Post&amp;gt; Posts
    {
        get =&amp;gt; _lazyLoader?.Load(this, ref _posts) ?? _posts ?? [];
        set =&amp;gt; _posts = value;
    }
}

public class Post
{
    public int Id { get; set; }
    public required string Title { get; set; }
    public required string Content { get; set; }
    public virtual Blog? Blog { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="reproducing-the-problem"&gt;Reproducing The Problem&lt;/h2&gt;
&lt;p&gt;Now let's look at what happens when you add a blog with posts, but navigate into the &lt;code&gt;Posts&lt;/code&gt; collection before persisting to the database:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;using (var context = new BloggingContext())
{
    await context.Database.EnsureCreatedAsync();

    // Create a new Blog with two Posts
    var blog = new Blog
    {
        Name = &amp;quot;Test Blog&amp;quot;,
        Posts =
        [
            new Post { Title = &amp;quot;First Post&amp;quot;, Content = &amp;quot;Hello from EF Core 10!&amp;quot; },
            new Post { Title = &amp;quot;Second Post&amp;quot;, Content = &amp;quot;Another post for testing.&amp;quot; }
        ]
    };

    // This causes EF Core to inject the lazy loader using reflection
    context.Blogs.Add(blog);

    // Accessing blog.Posts triggers the lazy loader to query the database
    // even though this blog hasn't been saved yet!
    Console.WriteLine(&amp;quot;Number of posts: &amp;quot; + blog.Posts.Count);

    await context.SaveChangesAsync();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you call &lt;code&gt;context.Blogs.Add(blog)&lt;/code&gt;, EF Core uses reflection to inject an &lt;code&gt;ILazyLoader&lt;/code&gt; instance into the &lt;code&gt;Blog&lt;/code&gt; object. From that point on, any access to &lt;code&gt;blog.Posts&lt;/code&gt; will trigger the lazy loading mechanism. Since the blog doesn't exist in the database yet (no &lt;code&gt;Id&lt;/code&gt; has been assigned), EF Core will execute a query that looks something like:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;SELECT [p].[Id], [p].[BlogId], [p].[Content], [p].[Title]
FROM [Posts] AS [p]
WHERE [p].[BlogId] = 0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is completely pointless - the blog hasn't been persisted, so there can't possibly be any related posts in the database.&lt;/p&gt;
&lt;h2 id="the-solution"&gt;The Solution&lt;/h2&gt;
&lt;p&gt;The fix is straightforward: only add the entity to the context right before you call &lt;code&gt;SaveChangesAsync()&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;using (var context = new BloggingContext())
{
    await context.Database.EnsureCreatedAsync();

    var blog = new Blog
    {
        Name = &amp;quot;Test Blog&amp;quot;,
        Posts =
        [
            new Post { Title = &amp;quot;First Post&amp;quot;, Content = &amp;quot;Hello from EF Core 10!&amp;quot; },
            new Post { Title = &amp;quot;Second Post&amp;quot;, Content = &amp;quot;Another post for testing.&amp;quot; }
        ]
    };

    // Do all your work with the blog object first
    Console.WriteLine(&amp;quot;Number of posts: &amp;quot; + blog.Posts.Count);

    // Only add to context when you're ready to save
    context.Blogs.Add(blog);
    await context.SaveChangesAsync();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now when you access &lt;code&gt;blog.Posts&lt;/code&gt;, there's no lazy loader injected yet, so it just returns the collection you assigned, with no database query needed.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;If you're using &lt;code&gt;ILazyLoader&lt;/code&gt; injection in EF Core, be mindful of when you add entities to the &lt;code&gt;DbContext&lt;/code&gt;. The lazy loader gets injected as soon as you call &lt;code&gt;Add()&lt;/code&gt;, not when you call &lt;code&gt;SaveChangesAsync()&lt;/code&gt;. So if you need to work with navigation properties before persisting, keep the entity disconnected from the context until you're ready to save. This avoids unnecessary database queries.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Entity Framework Core</category>
  <category>.NET</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/1/8/efcore-lazy-loader-gotcha</guid>
  <pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>2025 Year in Review</title>
  <link>https://markheath.net/post/2025/12/31/2025-year-in-review</link>
  <description>&lt;p&gt;Happy Christmas and happy new year! I know it's been a while since I last posted anything here, but thought I'd revive my tradition of writing another &lt;a href="https://markheath.net/category/year%20in%20review"&gt;year in review&lt;/a&gt; post.&lt;/p&gt;
&lt;h3 id="pluralsight"&gt;Pluralsight&lt;/h3&gt;
&lt;p&gt;Part of the reason for me not having as much time for blogging is that I created three new Pluralsight courses this year, &lt;a href="https://www.pluralsight.com/authors/mark-heath"&gt;bringing my total to 29&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First up was &lt;a href="https://www.pluralsight.com/courses/refactor-optimize-code-github-copilot"&gt;a course about refactoring and optimizing code with GitHub Copilot&lt;/a&gt;. Obviously 2025 has been the year where AI has firmly established itself as a day-to-day part of the developer experience, and it's an extremely fast-moving space. AI-assisted coding can be both incredibly impressive and incredibly frustrating: impressive because it can often write in a few seconds what would have taken hours or even days to write manually, but frustrating because it can often miss the point of what you're asking or make critical mistakes that cost you almost as much time as you saved. In my course I tried to focus on the basics of how to prompt the AI assistant well, to enable you to get as much benefit out of it as possible, without falling into the pitfall of losing control of your codebase and ending up with a vibe-coded mess.&lt;/p&gt;
&lt;p&gt;Next up were two courses about microservices, which essentially replace and update my earlier Pluralsight courses on the same topic. Despite some &lt;a href="https://markheath.net/post/2025/2/24/microservices-pushback"&gt;recent pushback against microservices&lt;/a&gt; in the industry, they remain a valuable and important architectural approach, and their core principles are relevant whenever you're building a distributed application (which for a lot of us, is all the time).&lt;/p&gt;
&lt;p&gt;The first microservice course was &lt;a href="https://www.pluralsight.com/courses/microservices-architectural-strategies-techniques"&gt;Microservices: Architectural Strategies and Techniques&lt;/a&gt;, which covers some of the key principles for designing scalable and modular microservice architectures and explores the value of service meshes and continuous delivery pipelines. And the second was &lt;a href="https://www.pluralsight.com/courses/microservices-building-testing"&gt;Microservices: Building and Testing&lt;/a&gt; which focuses in more detail on topics like implementing the domain logic, as well as how to test and deploy microservices.&lt;/p&gt;
&lt;h3 id="carpal-tunnel-surgery"&gt;Carpal Tunnel Surgery&lt;/h3&gt;
&lt;p&gt;Another reason for my reduced blogging output this year was some health issues. I've been battling back pain for a few years, although this year a strict regime of daily stretches and exercises and much more use of a standing desk seems to have helped a lot, and I'm a lot better than I was. For any younger developers reading this, make sure you look after your back - it's frustratingly slow to recover once you've injured it!&lt;/p&gt;
&lt;p&gt;I've also been having a lot of issues with hand numbness and finally had carpal tunnel surgery on my left hand (which was my worst) midway through the year. I was quite apprehensive about whether it would impact or even eliminate my ability to play guitar but I'm pleased to report that my strength and flexibility returned enough after a couple of months to continue playing as before. Thankfully my right hand isn't as bad, so I'm not in a rush to get that one done yet.&lt;/p&gt;
&lt;h3 id="music-and-audio"&gt;Music and Audio&lt;/h3&gt;
&lt;p&gt;As you may know, one of my favourite hobbies is playing and recording music, and this year, even with a break for carpal tunnel surgery, I managed to play guitar or piano live at 32 events, as well as participating in recording a live album, which was a first for me.&lt;/p&gt;
&lt;p&gt;I also continued my tradition of composing and writing one instrumental song a month (which I occasionally batch up into albums that you can find here on &lt;a href="https://open.spotify.com/artist/4036iD5XfdOJvs4MNVZlSY"&gt;Spotify&lt;/a&gt; or &lt;a href="https://markheath.bandcamp.com/"&gt;Bandcamp&lt;/a&gt; or just listen to them as they come out on &lt;a href="https://www.youtube.com/@mark_heath"&gt;YouTube&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;This Christmas I upgraded my long-serving Yamaha MODX 7 keyboard to the newer &lt;a href="https://usa.yamaha.com/products/music_production/synthesizers/modxm/index.html"&gt;Yamaha MODX M7&lt;/a&gt;, which is a very nice upgrade with the new ANX audio engine, better AWM polyphony and an improved user interface. It's also interesting to me that we are seeing an increasing number of hardware synthesizers providing full software versions of their sounds, meaning that you can much more easily transition between studio and live playing using the same sounds (&lt;a href="https://www.arturia.com/products/hardware-synths/astrolab/astrolab-37"&gt;Arturia's Astrolab series&lt;/a&gt; also does this).&lt;/p&gt;
&lt;p&gt;In terms of guitar tech, I'm still very happy with my &lt;a href="https://line6.com/helix/helix-lt.html"&gt;Line 6 Helix LT&lt;/a&gt; and &lt;a href="https://www.ikmultimedia.com/products/tonexpedal/?pkey=tonex-pedal"&gt;IK Multimedia TONEX&lt;/a&gt;, which between them give me access to a very wide variety of tones and effects. Again it's a very fast-moving space, with many exciting new software and hardware products being released and we're also seeing machine learning take a much more prominent role in music production (a trend I expect to increase in 2026).&lt;/p&gt;
&lt;h3 id="ai.net-and-azure"&gt;AI, .NET and Azure&lt;/h3&gt;
&lt;p&gt;My day job continues to revolve mostly around .NET and Azure, as well as increasingly incorporating various AI technologies (both in the development process and to power new functionality).&lt;/p&gt;
&lt;p&gt;My work with Azure this year has been a lot less on learning about new services, and more on how to deliver excellent resilience, scalability, and performance. I hope to feed a lot of the lessons I've learned into upcoming Pluralsight courses and talks.&lt;/p&gt;
&lt;p&gt;I'm also hoping to find more time this year to go deeper with Azure Container Apps and Dapr which both have a lot to offer to simplify the process of building and deploying microservices and distributed applications.&lt;/p&gt;
&lt;p&gt;It's great to see that each new version of .NET manages to squeeze out more performance improvements, and this has meant I have never regretted choosing .NET as my main development platform. (Still hoping for discriminated unions in C# though!)&lt;/p&gt;
&lt;p&gt;Of course, there was also a lot of AI this year. I am both an AI enthusiast and an AI skeptic - it has potential to be very helpful but also very harmful. A key skill for all developers is knowing when and how to use it effectively.&lt;/p&gt;
&lt;p&gt;I did attempt the &lt;a href="https://adventofcode.com/2025"&gt;Advent of Code&lt;/a&gt; challenges again this year, forcing myself to do them without the help of AI. Sadly I didn't manage to complete all the challenges due to time constraints, so I'd like to cycle back to the two I missed if I get a chance later in the year.&lt;/p&gt;
&lt;h3 id="whats-next"&gt;What's next?&lt;/h3&gt;
&lt;p&gt;As for what's in store for next year, there's a good chance that I'll be creating one or two additional Pluralsight courses, although that's not been confirmed yet.&lt;/p&gt;
&lt;p&gt;I took a break from speaking at conferences when my back was at its worst, and haven't currently got any new talks planned, but maybe if things continue to go well this year I might consider taking that up again.&lt;/p&gt;
&lt;p&gt;And I think this might be my final year as Microsoft MVP as I have not been able to contribute as much as I have in previous years. It's been a great privilege to be part of the MVP program for nearly 10 years now, so I'll take the opportunity to say a big thank you to the MVP organizers and the other MVPs for all they do to ensure that .NET developers get access to great learning resources.&lt;/p&gt;
&lt;p&gt;Once again, a big thank you to everyone who has read this blog or watched my Pluralsight courses. I hope you've found them helpful and thanks for all the encouraging feedback.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Year in review</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/12/31/2025-year-in-review</guid>
  <pubDate>Wed, 31 Dec 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Are Microservices Becoming Easier?</title>
  <link>https://markheath.net/post/2025/7/10/microservices-architectural-strategies-techniques</link>
  <description>&lt;p&gt;I've been a bit quiet on this blog recently mainly because I've been busy working on a new Pluralsight course &lt;a href="https://www.pluralsight.com/courses/microservices-architectural-strategies-techniques"&gt;Microservices: Architectural Strategies and Techniques&lt;/a&gt;, which essentially replaces my previous &lt;a href="https://www.pluralsight.com/courses/microservices-fundamentals"&gt;Microservices Fundamentals&lt;/a&gt; course, although they cover slightly different topics. In this course, I wanted to make sure I addressed some of the &lt;a href="https://markheath.net/post/2025/2/24/microservices-pushback"&gt;&amp;quot;pushback&amp;quot; against microservices&lt;/a&gt;, as it's fair to say that there has been some legitimate questions asked about whether microservices are being applied to problems where they don't actually help.&lt;/p&gt;
&lt;p&gt;However despite their challenges, I do think there are situations in which microservices can make a lot of sense. And that's because when a software product becomes large enough, with many teams of developers, and many user-facing websites or applications, and many APIs, then it's inevitable that it becomes a distributed system.&lt;/p&gt;
&lt;p&gt;In some ways you can think of microservices as simply a more disciplined approach to distributed systems, where you take care to ensure that each service is &lt;strong&gt;independently deployable&lt;/strong&gt;. This helps you avoid the pitfall of building a &amp;quot;distributed monolith&amp;quot; - an architecture famous for combining the worst aspects of both monoliths and distributed systems.&lt;/p&gt;
&lt;p&gt;In fact, the majority of the tools, techniques and strategies I discuss in the course are not strictly specific to microservices. That's because most of the key concerns - observability, security, scalability, testability, and automated deployment - are things you'll need in a distributed system regardless of whether you are explicitly trying to create &amp;quot;microservices&amp;quot;.&lt;/p&gt;
&lt;h3 id="are-microservices-becoming-easier"&gt;Are Microservices Becoming Easier?&lt;/h3&gt;
&lt;p&gt;One of the hopes in the early days of microservices was that over time, we'd develop tooling that helped us overcome many of the challenges of building, testing, and deploying distributed systems.&lt;/p&gt;
&lt;p&gt;In some ways that is true. For example, Kubernetes is incredibly powerful and flexible and has established itself as the de-facto standard for hosting microservices. However, I certainly wouldn't describe it as simple to learn and manage. But we are seeing the emergence of simplified microservices hosting platforms, such as &lt;a href="https://learn.microsoft.com/en-us/azure/container-apps/overview"&gt;Azure Container Apps&lt;/a&gt; which is built on top of Kubernetes, but takes away a lot of the complexity and streamlines the process of hosting your microservices.&lt;/p&gt;
&lt;p&gt;Another favourite toolkit of mine for building microservices is &lt;a href="https://dapr.io/"&gt;Dapr&lt;/a&gt;, which offers a set of &amp;quot;building blocks&amp;quot; to enable you to build secure and reliable microservices. I've actually created a &lt;a href="https://app.pluralsight.com/library/courses/dapr-1-fundamentals"&gt;Dapr Fundamentals&lt;/a&gt; Pluralsight course. Dapr delivers these capabilities by exposing APIs from a sidecar container. This approach makes Dapr both programming-language-agnostic and cloud-agnostic, as each building block can be backed by a variety of services, giving you a lot of freedom to use the languages and services you are familiar with.&lt;/p&gt;
&lt;p&gt;In the .NET world, &lt;a href="https://learn.microsoft.com/en-us/dotnet/aspire/get-started/aspire-overview"&gt;.NET Aspire&lt;/a&gt; aims to improve the experience of building microservices by providing various tools, templates and packages that especially enhance the local development experience. So it does feel like things are moving in the right direction towards simplifying the overall microservices experience.&lt;/p&gt;
&lt;p&gt;And in my Pluralsight course I also wanted to include a brief section exploring the ways in which AI is able to streamline the experience of building, deploying and managing microservice applications. A lot of the pain points of microservices revolve around the complexities of managing a system made up of so many interconnected parts. It's still early days for AI, but I am hopeful that it could make a big difference especially in the area of monitoring and troubleshooting distributed systems.&lt;/p&gt;
&lt;h3 id="summary"&gt;Summary&lt;/h3&gt;
&lt;p&gt;Microservices remain a valuable architectural pattern, despite the potential troubles you can run into with them. Generally, my architectural preference is to keep things as simple as possible, and only reach for more advanced patterns and tools when you have proved that you really need them. So most of the tools and techniques I show in the course are not so much a prescription of what you should do as suggestions for things you might reach for if you're experiencing the problems they're designed to solve. If you're a Pluralsight subscriber, why not check out my &lt;a href="https://www.pluralsight.com/courses/microservices-architectural-strategies-techniques"&gt;Microservices: Architectural Strategies and Techniques&lt;/a&gt; course, and as always I'm very interested in learning from other people's experiences, so do feel free to get in touch via the comments.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Microservices</category>
  <category>Pluralsight</category>
  <category>dapr</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/7/10/microservices-architectural-strategies-techniques</guid>
  <pubDate>Thu, 10 Jul 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Calling MCP Servers in C# with Microsoft.Extensions.AI</title>
  <link>https://markheath.net/post/2025/4/14/calling-mcp-server-microsoft-extensions-ai</link>
  <description>&lt;p&gt;I posted recently about how to allow &lt;a href="https://markheath.net/post/2025/1/18/using-tools-safely-with-llms"&gt;LLMs to call tools&lt;/a&gt; using the Microsoft.Extensions.AI NuGet package in C#.&lt;/p&gt;
&lt;p&gt;Obviously, a common usage scenario would be to expose MCP servers as tools for your LLM to call. Thankfully, the new &lt;a href="https://www.nuget.org/packages/ModelContextProtocol"&gt;ModelContextProtocol NuGet package&lt;/a&gt; makes this straightforward.&lt;/p&gt;
&lt;p&gt;Note: This package is still in pre-release (as is Microsoft.Extensions.AI), so do check the release notes for any breaking changes to the API.&lt;/p&gt;
&lt;p&gt;I've updated my &lt;a href="https://github.com/markheath/open-ai-test1/"&gt;demo application&lt;/a&gt; to support calling MCP tools, following the techniques demonstrated in Microsoft's &lt;a href="https://github.com/modelcontextprotocol/csharp-sdk/blob/main/samples/ChatWithTools"&gt;Chat With Tools&lt;/a&gt; sample.&lt;/p&gt;
&lt;p&gt;The first step is to reference the ModelContextProtocol NuGet package. I also had to update the &lt;a href="https://www.nuget.org/packages/Microsoft.Extensions.AI"&gt;Microsoft.Extensions.AI&lt;/a&gt; packages to matching versions. Here are the versions I used for my test:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-xml"&gt;&amp;lt;PackageReference Include=&amp;quot;Microsoft.Extensions.AI&amp;quot; Version=&amp;quot;9.4.0-preview.1.25207.5&amp;quot; /&amp;gt;
&amp;lt;PackageReference Include=&amp;quot;Microsoft.Extensions.AI.OpenAI&amp;quot; Version=&amp;quot;9.4.0-preview.1.25207.5&amp;quot; /&amp;gt;
&amp;lt;PackageReference Include=&amp;quot;ModelContextProtocol&amp;quot; Version=&amp;quot;0.1.0-preview.8&amp;quot; /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next step is to connect to an MCP server, which you can do with the &lt;code&gt;McpClientFactory&lt;/code&gt;. Here, we're using &lt;code&gt;npx&lt;/code&gt; (which ships with Node.js) to run a simple example MCP server called the &lt;a href="https://www.npmjs.com/package/@modelcontextprotocol/server-everything"&gt;&amp;quot;Everything&amp;quot; server&lt;/a&gt;, as it demonstrates the full range of capabilities of an MCP server.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new()
    {
        Command = &amp;quot;npx&amp;quot;,
        Arguments = [&amp;quot;-y&amp;quot;, &amp;quot;--verbose&amp;quot;, &amp;quot;@modelcontextprotocol/server-everything&amp;quot;],
        Name = &amp;quot;Everything&amp;quot;,
    }));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can then use the MCP client to list the available tools:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var tools = await mcpClient.ListToolsAsync();
Console.WriteLine(&amp;quot;Available tools:&amp;quot;);
foreach (var tool in tools)
{
    Console.WriteLine($&amp;quot;  {tool.Name}: {tool.Description}&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These tools are instances of &lt;code&gt;McpClientTool&lt;/code&gt;, which inherits from &lt;code&gt;AIFunction&lt;/code&gt;, meaning that we can pass them directly in as tools to an instance of &lt;code&gt;ChatOptions&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var chatOptions = new ChatOptions
{
    Tools = [..tools]
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then using the tools is just a matter of passing those options into the call to &lt;code&gt;GetStreamingResponseAsync&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt; await foreach (var item in chatClient.GetStreamingResponseAsync(
        chatHistory, chatOptions))
&lt;/code&gt;&lt;/pre&gt;
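&lt;p&gt;One wiring detail worth mentioning: for the tool calls the model requests to actually be executed, the &lt;code&gt;IChatClient&lt;/code&gt; needs function invocation enabled. In Microsoft.Extensions.AI this is done when building the client. Here's a minimal sketch, where &lt;code&gt;innerClient&lt;/code&gt; stands in for whatever provider-specific &lt;code&gt;IChatClient&lt;/code&gt; you're using:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;// wrap the underlying client so tool calls are executed automatically
IChatClient chatClient = new ChatClientBuilder(innerClient)
    .UseFunctionInvocation()
    .Build();
&lt;/code&gt;&lt;/pre&gt;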
&lt;p&gt;Although it's early days for MCP, it's very pleasing to see how easy it is to get your LLM calling tools provided by an MCP server. For the full code sample, showing how to get this working with the Azure OpenAI service, check out my &lt;a href="https://github.com/markheath/open-ai-test1/"&gt;demo repo&lt;/a&gt;.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <category>C#</category>
  <category>MCP</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/4/14/calling-mcp-server-microsoft-extensions-ai</guid>
  <pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate>
</item></channel>
</rss>