﻿<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel><title>Sound Code - Mark Heath's Blog</title>
<description>Mark Heath's development blog.</description>
<generator>MarkBlog</generator>
<link>https://markheath.net/</link>
<item>
  <title>The Future of Tech Blogging in the Age of AI</title>
  <link>https://markheath.net/post/2026/4/1/future-of-tech-blogging</link>
  <description>&lt;p&gt;I've been blogging on this site for almost 20 years now, and the majority of my posts are simple coding tutorials, where I share what I've learned as I explore various new technologies (my journey on this blog has taken me through Silverlight, WPF, IronPython, Mercurial, LINQ, F#, Azure, and much more).&lt;/p&gt;
&lt;p&gt;My process has always been quite simple. First, I work through a technical challenge and eventually get something working. And then, I write some instructions for how to do it.&lt;/p&gt;
&lt;h2 id="benefits-of-tech-blogging"&gt;Benefits of tech blogging&lt;/h2&gt;
&lt;p&gt;There are many benefits to sharing your progress like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The process of putting it into writing helps solidify what you learned&lt;/li&gt;
&lt;li&gt;Despite this I still often forget how I achieved something, so my blog functions as a journal I can refer back to later&lt;/li&gt;
&lt;li&gt;You're supporting the wider developer community by sharing proven ways to get something working&lt;/li&gt;
&lt;li&gt;Thanks to &amp;quot;Cunningham's Law&amp;quot; (&lt;em&gt;&amp;quot;the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer.&amp;quot;&lt;/em&gt;), your post may lead you to discover a better way to achieve the same goal, or a fatal flaw in your approach&lt;/li&gt;
&lt;li&gt;And gradually it builds your personal reputation and credibility as your readership grows (although you may find that your &lt;a href="https://www.markheath.net/post/2016/9/22/customize-radio-button-css"&gt;most popular post of all time&lt;/a&gt; is on the one topic you're certainly not an expert in!)&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="are-llms-going-to-ruin-it-all"&gt;Are LLMs going to ruin it all?&lt;/h2&gt;
&lt;p&gt;But recently I've been wondering - are LLMs going to put an end to coding tutorial blogs like mine? Do they render it all pointless?&lt;/p&gt;
&lt;p&gt;For starters, GitHub Copilot and Claude Code have already dramatically changed the way I go about exploring a new technique or technology. Instead of slogging through Bicep documentation, and endlessly debugging why my template didn't work, I now just ask the AI model to create one for me.&lt;/p&gt;
&lt;p&gt;Refreshingly, I notice that it gets it wrong just as frequently as I do, but it doesn't get frustrated - it just keeps battling away until eventually it gets something working.&lt;/p&gt;
&lt;p&gt;But now it feels like a hollow victory. Is there even any point writing a tutorial about it? If you can simply ask an agent to solve the problem, why would anyone need to read &lt;em&gt;my&lt;/em&gt; tutorial? Are developers even going to bother visiting blogs like mine in the future?&lt;/p&gt;
&lt;p&gt;And then there's the question of who &lt;em&gt;writes&lt;/em&gt; the tutorial? Not only is the agent much quicker than me at solving the technical challenge, it's also significantly faster at writing the tutorial, and undeniably a better writer than me too. So maybe I should just let it write the article for me? But the internet is already full of AI-generated slop...&lt;/p&gt;
&lt;h2 id="should-you-let-ai-write-your-blog-posts"&gt;Should you let AI write your blog posts?&lt;/h2&gt;
&lt;p&gt;This is a deeply polarizing question. There are a number of possible approaches:&lt;/p&gt;
&lt;h3 id="level-1-human-only"&gt;Level 1: Human only&lt;/h3&gt;
&lt;p&gt;You could insist on hand-writing everything yourself, with strictly no AI assistance. That's what you're reading right now (if you can't already tell from the decidedly mediocre writing style!).&lt;/p&gt;
&lt;p&gt;This mirrors a big debate going on in the world of music production at the moment. If AI tools like Suno can generate, from a single prompt, an entire song that sounds far more polished than anything I've ever managed to produce, does that spell the end of real humans writing and recording songs? And should we fight against it, or just embrace it as the future?&lt;/p&gt;
&lt;p&gt;I think tech tutorials do fall into a different category to music though. If I want to learn how to achieve X with technology Y, I just want clear, concise and correct instructions - and I'm not overly bothered whether it came 100% from a human mind or not.&lt;/p&gt;
&lt;p&gt;Having said that, we've already identified a key benefit of writing your own tutorials: it helps solidify what you've learned. Doing your own writing will also improve your own powers of communication. For those reasons alone I have no intention of delegating all my blog writing to LLMs.&lt;/p&gt;
&lt;h3 id="level-2-human-writes-ai-refines"&gt;Level 2: Human writes, AI refines&lt;/h3&gt;
&lt;p&gt;On the other hand, it seems churlish to refuse the benefits of LLMs for proofreading, fact checking, and stylistic improvements. When I recently posted about whether &lt;a href="https://markheath.net/post/2026/3/30/does-code-quality-still-matter"&gt;code quality still matters&lt;/a&gt;, this is exactly what I did: I wrote the post myself, and then asked Claude Code to help me refine it by critiquing my thoughts and providing counter-arguments.&lt;/p&gt;
&lt;p&gt;To be honest, I ignored most of the feedback, but undoubtedly it improved the final article. This is the approach I've been taking with my Pluralsight course scripts - I first write the whole thing myself, and then ask an LLM to take me to task and tell me all the things I got wrong. (Although they're still ridiculously sycophantic and tell me it's the greatest thing they've ever read on the topic of lazy loading!)&lt;/p&gt;
&lt;h3 id="level-3-ai-writes-human-refines"&gt;Level 3: AI writes, human refines&lt;/h3&gt;
&lt;p&gt;But of course, my time is at a premium. A blog tutorial often takes me well over two hours to write. That's a big time investment for something that will likely barely be read by anyone.&lt;/p&gt;
&lt;p&gt;And if all I'm producing is a tutorial, perhaps it would be better for me to get the LLM to do the leg-work of creating the structure and initial draft, and then I can edit afterwards, adapting the language to sound a bit more in my voice, and deleting some of the most egregious AI-speak.&lt;/p&gt;
&lt;p&gt;That's exactly what I tried with a recent post on &lt;a href="https://markheath.net/post/2026/3/31/securing-backend-appservices-private-endpoints"&gt;private endpoints&lt;/a&gt;. Claude Code not only created the Bicep and test application, but once it was done I got it to write up the instructions and even create a GitHub repo of sample code. The end result was far more thorough than I would have managed myself, and although I read the whole thing carefully and edited it a bit, I have to admit that most of the time I couldn't think of better ways to phrase each sentence, so a lot of it ended up unchanged.&lt;/p&gt;
&lt;p&gt;That left a bad taste in my mouth to be honest. If I do that too often will I lose credibility and scare away readers? And yet I do feel like it was a genuinely valuable article that shows how to solve a problem that I'd been wanting to blog about for a long time.&lt;/p&gt;
&lt;h3 id="level-4-ai-only"&gt;Level 4: AI only&lt;/h3&gt;
&lt;p&gt;Of course, there is a level further, and now we are getting to the dark side. Could I ask Claude or ChatGPT to write me a blog post and just publish it without even reading it myself? I could instruct it to mimic my writing style, and it might even do a good enough job to go unnoticed. Maybe at some point in the future, Claude will dethrone my most popular article with one it wrote entirely itself.&lt;/p&gt;
&lt;p&gt;To be honest, I have no interest in doing that at all - it undermines the &lt;em&gt;purpose&lt;/em&gt; of this blog which is a way for &lt;em&gt;me&lt;/em&gt; to share the things that &lt;em&gt;I&lt;/em&gt; have learned. So I can assure you I have no intention of filling this site up with &amp;quot;slop&amp;quot; articles where the LLM has come up with the idea, written and tested the code, and published the article all without me having to be involved at all.&lt;/p&gt;
&lt;p&gt;But interestingly, this approach might make sense for back-filling the documentation for my open-source project &lt;a href="https://github.com/naudio/NAudio/"&gt;NAudio&lt;/a&gt;. Over the years I've written close to one hundred tutorials but there are still major gaps in the documentation.&lt;/p&gt;
&lt;p&gt;I'm thinking of experimenting with asking Claude Code to write a short tutorial for every public class in the NAudio repo, and to then check its work by following the tutorial and making sure it really works.&lt;/p&gt;
&lt;p&gt;I expect we're going to see an explosion of this approach too, and it could be a genuine positive for the open source community, where documentation is often lacking or outdated. If LLMs are to make a positive contribution to the world of coding tutorials, this is probably one of the best ways they can be used.&lt;/p&gt;
&lt;h2 id="why-tech-blogging-still-matters"&gt;Why tech blogging still matters&lt;/h2&gt;
&lt;p&gt;If you're still with me at this point, well done - I know I've gone on too long. Even humans can be as long-winded as LLMs sometimes. But the process of writing down my thoughts on this issue has helped me gain some clarity, and made me realise that it doesn't necessarily matter whether I take an AI-free, AI-assisted, or even an AI-first approach to my posts.&lt;/p&gt;
&lt;p&gt;The value of sharing these coding tutorials is that the problems I'm solving are &lt;em&gt;real-world problems&lt;/em&gt;. They are tasks that I genuinely needed to accomplish, and came with unique constraints and requirements that are specific to my circumstances. That gives them an authenticity that an AI can't fake. At best it can guess at what humans might want to achieve, and create tutorials about that.&lt;/p&gt;
&lt;p&gt;So when I'm reading your tech blog (which I hope you'll share a link to), I won't really care whether or not you used ChatGPT to create the sample code, or make you sound like a Pulitzer Prize winner. I'll be interested because you're sharing &lt;em&gt;your&lt;/em&gt; experience of how you solved a problem using the tools at your disposal.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/4/1/future-of-tech-blogging</guid>
  <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Securing Back-end App Service Web Apps with Private Endpoints</title>
  <link>https://markheath.net/post/2026/3/31/securing-backend-appservices-private-endpoints</link>
  <description>&lt;p&gt;Back in 2019, I wrote about &lt;a href="https://markheath.net/post/2019/5/24/securing-backend-appservice-webapps"&gt;securing back-end App Service web apps using VNets and Service Endpoints&lt;/a&gt;. That approach worked well at the time, but Azure has moved on significantly since then. In this post, I'll show the modern way to achieve the same thing using &lt;strong&gt;Private Endpoints&lt;/strong&gt; — which is now Microsoft's recommended approach.&lt;/p&gt;
&lt;h2 id="the-problem"&gt;The Problem&lt;/h2&gt;
&lt;p&gt;The scenario is the same as before. You have a front-end web app and a back-end API, both hosted on Azure App Service. End users need to reach the front-end, but the back-end should only be callable from the front-end. No one on the public internet should be able to reach it directly.&lt;/p&gt;
&lt;pre class="mermaid"&gt;flowchart LR
    User([Internet User]) --&gt;|✅ allowed| FE[Frontend Web App]
    FE --&gt;|✅ allowed| BE[Backend API]
    User --&gt;|❌ blocked| BE
&lt;/pre&gt;
&lt;p&gt;This is a standard multi-tier architecture. With VMs or containers in a VNet, you'd simply not expose a public endpoint for the back-end. But App Service web apps have always had public endpoints by default — and until recently, locking them down was either fiddly (Service Endpoints) or expensive (App Service Environments).&lt;/p&gt;
&lt;h2 id="what-changed-since-2019"&gt;What Changed Since 2019?&lt;/h2&gt;
&lt;p&gt;My 2019 approach used three features together:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;VNet Integration&lt;/strong&gt; — route the front-end's outbound traffic through a VNet&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Service Endpoints&lt;/strong&gt; — enable the &lt;code&gt;Microsoft.Web&lt;/code&gt; service endpoint on that subnet, so the back-end can identify traffic coming from it&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Access Restrictions&lt;/strong&gt; — whitelist the VNet subnet on the back-end&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This worked, but had some limitations. Service Endpoints don't prevent data exfiltration (the allowed traffic is scoped to the entire App Service platform, not to your specific app), and the Access Restrictions approach required some tricky Azure CLI commands to set up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Private Endpoints&lt;/strong&gt; are now the recommended replacement. Microsoft's documentation is &lt;a href="https://learn.microsoft.com/en-us/azure/virtual-network/vnet-integration-for-azure-services#compare-private-endpoints-and-service-endpoints"&gt;explicit about this&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Microsoft recommends using Azure Private Link. Private Link offers better capabilities for privately accessing PaaS from on-premises, provides built-in data-exfiltration protection, and maps services to private IPs in your own network.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here's what makes Private Endpoints better:&lt;/p&gt;
&lt;table class="md-table"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Service Endpoints (2019)&lt;/th&gt;
&lt;th&gt;Private Endpoints (2026)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scope&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Entire App Service&lt;/td&gt;
&lt;td&gt;Your specific app only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data exfiltration protection&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Public access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Still reachable (blocked by rules)&lt;/td&gt;
&lt;td&gt;Blocked (access restrictions + Private Endpoint)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;On-premises access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (via VPN/ExpressRoute)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Setup complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Moderate (fiddly CLI commands)&lt;/td&gt;
&lt;td&gt;Straightforward (Bicep)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;~$8/month&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The small cost is well worth it for the significantly stronger security posture.&lt;/p&gt;
&lt;h2 id="architecture-overview"&gt;Architecture Overview&lt;/h2&gt;
&lt;p&gt;Here's what we're going to build:&lt;/p&gt;
&lt;pre class="mermaid"&gt;flowchart TB
    subgraph Internet
        User([Internet User])
    end

    subgraph Azure
        subgraph VNet ["Virtual Network (10.0.0.0/16)"]
            subgraph IntSub ["integration-subnet (10.0.0.0/24)"]
            end
            subgraph PeSub ["pe-subnet (10.0.1.0/24)"]
                PE[Private Endpoint&amp;lt;br/&gt;10.0.1.4]
            end
        end

        FE[Frontend Web App&amp;lt;br/&gt;public access]
        BE[Backend API&amp;lt;br/&gt;main site blocked]
        DNS[Private DNS Zone&amp;lt;br/&gt;privatelink.azurewebsites.net]
    end

    User --&gt;|HTTPS| FE
    FE -.-&gt;|VNet Integration| IntSub
    IntSub --&gt;|private network| PE
    PE --&gt;|Private Link| BE
    DNS -.-&gt;|resolves backend&amp;lt;br/&gt;to 10.0.1.4| VNet
    User -.-&gt;|❌ 403 Forbidden| BE

    style BE fill:#f96,stroke:#333
    style FE fill:#6f9,stroke:#333
    style PE fill:#69f,stroke:#333
&lt;/pre&gt;
&lt;p&gt;The key components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VNet&lt;/strong&gt; with two subnets: one for VNet Integration (delegated to App Service), one for the Private Endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Frontend Web App&lt;/strong&gt; — publicly accessible, with VNet Integration so its outbound traffic goes through the VNet&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backend API&lt;/strong&gt; — main site blocked by access restrictions, reachable only via Private Endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Private DNS Zone&lt;/strong&gt; — resolves &lt;code&gt;backend-xxx.azurewebsites.net&lt;/code&gt; to the private IP within the VNet&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When the front-end calls the back-end, DNS resolution within the VNet returns the private IP (e.g. &lt;code&gt;10.0.1.4&lt;/code&gt;), and traffic flows through the Microsoft backbone via Private Link. Anyone on the public internet trying to reach the back-end gets a &lt;code&gt;403 Forbidden&lt;/code&gt;.&lt;/p&gt;
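&lt;p&gt;If you want to observe this split-horizon DNS behaviour for yourself, one quick (and entirely optional) check is to compare name resolution from inside and outside the VNet: for example, from the front-end app's Kudu/SSH console versus your local machine. Substitute your actual back-end hostname for the &lt;code&gt;backend-xxx&lt;/code&gt; placeholder, and note that the exact output format varies by tool and image (some Linux App Service containers lack &lt;code&gt;nslookup&lt;/code&gt;, in which case &lt;code&gt;getent hosts&lt;/code&gt; does a similar job):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# From the frontend app's Kudu/SSH console (inside the VNet):
nslookup backend-xxx.azurewebsites.net
# expect a CNAME via privatelink.azurewebsites.net resolving to
# the Private Endpoint's private IP (e.g. 10.0.1.4)

# From your local machine (outside the VNet):
nslookup backend-xxx.azurewebsites.net
# expect a public App Service inbound IP instead
&lt;/code&gt;&lt;/pre&gt;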
&lt;h2 id="the-sample-apps"&gt;The Sample Apps&lt;/h2&gt;
&lt;p&gt;I used Claude Code to help me create two minimal ASP.NET Core (.NET 10) apps to demonstrate this. The back-end is a simple API:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet(&amp;quot;/api/greeting&amp;quot;, () =&amp;gt; new
{
    message = &amp;quot;Hello from the secure backend!&amp;quot;,
    timestamp = DateTime.UtcNow
});

app.MapGet(&amp;quot;/health&amp;quot;, () =&amp;gt; Results.Ok(&amp;quot;Healthy&amp;quot;));

app.Run();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The front-end is a Razor Pages app that calls the back-end. The key part is the &lt;code&gt;HttpClient&lt;/code&gt; setup in &lt;code&gt;Program.cs&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;builder.Services.AddHttpClient(&amp;quot;BackendApi&amp;quot;, client =&amp;gt;
{
    var baseUrl = builder.Configuration[&amp;quot;BackendApi:BaseUrl&amp;quot;]
        ?? &amp;quot;http://localhost:5100&amp;quot;;
    client.BaseAddress = new Uri(baseUrl);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And the page model that calls it:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;public async Task OnGetAsync()
{
    try
    {
        var client = _httpClientFactory.CreateClient(&amp;quot;BackendApi&amp;quot;);
        var response = await client.GetAsync(&amp;quot;/api/greeting&amp;quot;);
        response.EnsureSuccessStatusCode();

        var json = await response.Content.ReadFromJsonAsync&amp;lt;JsonElement&amp;gt;();
        GreetingMessage = json.GetProperty(&amp;quot;message&amp;quot;).GetString();
        GreetingTimestamp = json.GetProperty(&amp;quot;timestamp&amp;quot;).GetString();
    }
    catch (Exception ex)
    {
        ErrorMessage = $&amp;quot;Failed to reach backend: {ex.Message}&amp;quot;;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All pretty straightforward. The backend security is all handled by the infrastructure.&lt;/p&gt;
&lt;h2 id="the-bicep-template"&gt;The Bicep Template&lt;/h2&gt;
&lt;p&gt;This is where the interesting stuff happens. Let's look at the key resources.&lt;/p&gt;
&lt;h3 id="virtual-network"&gt;Virtual Network&lt;/h3&gt;
&lt;p&gt;We need a VNet with two subnets. The integration subnet is delegated to &lt;code&gt;Microsoft.Web/serverFarms&lt;/code&gt; (required for VNet Integration). The private endpoint subnet has &lt;code&gt;privateEndpointNetworkPolicies&lt;/code&gt; set to &lt;code&gt;Disabled&lt;/code&gt; (required for Private Endpoints).&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;resource vnet 'Microsoft.Network/virtualNetworks@2024-05-01' = {
  name: vnetName
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.0.0.0/16']
    }
    subnets: [
      {
        name: 'integration-subnet'
        properties: {
          addressPrefix: '10.0.0.0/24'
          delegations: [{
            name: 'delegation'
            properties: {
              serviceName: 'Microsoft.Web/serverFarms'
            }
          }]
        }
      }
      {
        name: 'pe-subnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
          privateEndpointNetworkPolicies: 'Disabled'
        }
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="app-service-plan-and-web-apps"&gt;App Service Plan and Web Apps&lt;/h3&gt;
&lt;p&gt;Both apps share a single Linux B1 App Service Plan. The front-end has &lt;code&gt;virtualNetworkSubnetId&lt;/code&gt; set to the integration subnet, which routes its outbound traffic through the VNet.&lt;/p&gt;
&lt;p&gt;For the back-end, you might think we'd just set &lt;code&gt;publicNetworkAccess: 'Disabled'&lt;/code&gt;. That does work for blocking internet traffic, but it also blocks the SCM/Kudu deployment endpoint — meaning you can't deploy your code with &lt;code&gt;az webapp deploy&lt;/code&gt; any more. Instead, we use access restrictions: &lt;code&gt;ipSecurityRestrictionsDefaultAction: 'Deny'&lt;/code&gt; blocks all public traffic to the main site, while &lt;code&gt;scmIpSecurityRestrictionsUseMain: false&lt;/code&gt; with &lt;code&gt;scmIpSecurityRestrictionsDefaultAction: 'Allow'&lt;/code&gt; keeps the deployment endpoint accessible. The Private Endpoint ensures the front-end can still reach the back-end over the private network.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;resource appServicePlan 'Microsoft.Web/serverfarms@2024-04-01' = {
  name: appServicePlanName
  location: location
  kind: 'linux'
  sku: { name: 'B1' }
  properties: { reserved: true }
}

resource backendApp 'Microsoft.Web/sites@2024-04-01' = {
  name: backendAppName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    publicNetworkAccess: 'Enabled'
    siteConfig: {
      linuxFxVersion: 'DOTNETCORE|10.0'
      ipSecurityRestrictionsDefaultAction: 'Deny'
      scmIpSecurityRestrictionsUseMain: false
      scmIpSecurityRestrictionsDefaultAction: 'Allow'
    }
  }
}

resource frontendApp 'Microsoft.Web/sites@2024-04-01' = {
  name: frontendAppName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    virtualNetworkSubnetId: vnet.properties.subnets[0].id
    siteConfig: {
      linuxFxVersion: 'DOTNETCORE|10.0'
      appSettings: [{
        name: 'BackendApi__BaseUrl'
        value: 'https://${backendAppName}.azurewebsites.net'
      }]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note that our approach leaves the SCM/Kudu deployment endpoint publicly accessible (it's authenticated, so the risk is low). If you want to eliminate that surface area entirely, you could set &lt;code&gt;publicNetworkAccess: 'Disabled'&lt;/code&gt; and use an alternative deployment method that bypasses Kudu — for example, run-from-package with &lt;code&gt;WEBSITE_RUN_FROM_PACKAGE&lt;/code&gt; pointing at a blob storage URL, or containerizing your app and pulling from ACR. Both approaches mean the backend never needs a public endpoint at all, though you may need to add VNet integration to the backend for outbound access to the storage account or registry if those are private too.&lt;/p&gt;
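&lt;p&gt;As a rough sketch of that stricter variant (which I haven't deployed myself, so treat the property names as something to verify, and note that &lt;code&gt;packageBlobUrl&lt;/code&gt; is a hypothetical parameter you'd pass in), the back-end definition might look something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;// Sketch only: disable ALL public access (main site and SCM/Kudu)
// and deploy via run-from-package from blob storage instead
resource backendApp 'Microsoft.Web/sites@2024-04-01' = {
  name: backendAppName
  location: location
  properties: {
    serverFarmId: appServicePlan.id
    publicNetworkAccess: 'Disabled' // blocks the SCM endpoint too
    siteConfig: {
      linuxFxVersion: 'DOTNETCORE|10.0'
      appSettings: [{
        name: 'WEBSITE_RUN_FROM_PACKAGE'
        value: packageBlobUrl // hypothetical param: URL of the zip in blob storage
      }]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The trade-off is that your deployment pipeline now needs to upload the package to storage (and secure that URL with a SAS token or managed identity) rather than pushing through &lt;code&gt;az webapp deploy&lt;/code&gt;.&lt;/p&gt;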
&lt;h3 id="private-endpoint-and-dns"&gt;Private Endpoint and DNS&lt;/h3&gt;
&lt;p&gt;The Private Endpoint creates a network interface in the PE subnet that's connected to the back-end app. The Private DNS Zone ensures that &lt;code&gt;backend-xxx.azurewebsites.net&lt;/code&gt; resolves to the private IP when queried from within the VNet.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bicep"&gt;resource privateEndpoint 'Microsoft.Network/privateEndpoints@2024-05-01' = {
  name: 'pe-${backendAppName}'
  location: location
  properties: {
    subnet: { id: vnet.properties.subnets[1].id }
    privateLinkServiceConnections: [{
      name: 'pe-${backendAppName}'
      properties: {
        privateLinkServiceId: backendApp.id
        groupIds: ['sites']
      }
    }]
  }
}

resource privateDnsZone 'Microsoft.Network/privateDnsZones@2024-06-01' = {
  name: 'privatelink.azurewebsites.net'
  location: 'global'
}

resource dnsZoneLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2024-06-01' = {
  parent: privateDnsZone
  name: '${vnetName}-link'
  location: 'global'
  properties: {
    virtualNetwork: { id: vnet.id }
    registrationEnabled: false
  }
}

resource dnsZoneGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2024-05-01' = {
  parent: privateEndpoint
  name: 'default'
  properties: {
    privateDnsZoneConfigs: [{
      name: 'privatelink-azurewebsites-net'
      properties: { privateDnsZoneId: privateDnsZone.id }
    }]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="deploying"&gt;Deploying&lt;/h2&gt;
&lt;p&gt;I've created a PowerShell deployment script that uses the Azure CLI. Here are the key steps:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# Create the resource group
az group create -n SecureBackendDemo -l uksouth

# Deploy the Bicep template
az deployment group create `
    -g SecureBackendDemo `
    --template-file ./infra/main.bicep
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The Bicep deployment creates all the networking and App Service resources. After that, we publish and deploy both .NET apps:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;# Build and publish
dotnet publish src/Backend/Backend.csproj -c Release -o publish/backend
dotnet publish src/Frontend/Frontend.csproj -c Release -o publish/frontend

# Package as zip
Compress-Archive -Path &amp;quot;publish/backend/*&amp;quot; -DestinationPath publish/backend.zip
Compress-Archive -Path &amp;quot;publish/frontend/*&amp;quot; -DestinationPath publish/frontend.zip

# Deploy to App Service
az webapp deploy -g SecureBackendDemo -n $backendAppName --src-path publish/backend.zip --type zip
az webapp deploy -g SecureBackendDemo -n $frontendAppName --src-path publish/frontend.zip --type zip
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The full deployment script is in the &lt;a href="https://github.com/markheath/securing-backend-appservices-private-endpoints/blob/main/deploy/deploy.ps1"&gt;repository&lt;/a&gt; — just run &lt;code&gt;.\deploy\deploy.ps1&lt;/code&gt; and it handles everything.&lt;/p&gt;
&lt;h2 id="testing"&gt;Testing&lt;/h2&gt;
&lt;p&gt;Once deployed, we can verify the security is working:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Test 1: Frontend is accessible and shows the backend greeting&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;$response = Invoke-WebRequest -Uri $frontendUrl -UseBasicParsing
# Should return 200 with &amp;quot;Hello from the secure backend!&amp;quot; in the HTML
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Test 2: Backend is NOT accessible from the internet&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;Invoke-WebRequest -Uri &amp;quot;$backendUrl/api/greeting&amp;quot; -UseBasicParsing
# Should return 403 Forbidden
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The test script (&lt;code&gt;deploy/test.ps1&lt;/code&gt;) automates both checks.&lt;/p&gt;
&lt;h2 id="cost-breakdown"&gt;Cost Breakdown&lt;/h2&gt;
&lt;p&gt;Here's what this setup costs beyond the App Service Plan itself:&lt;/p&gt;
&lt;table class="md-table"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource&lt;/th&gt;
&lt;th&gt;Monthly Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Virtual Network&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VNet Integration&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private Endpoint&lt;/td&gt;
&lt;td&gt;~$7.30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private DNS Zone&lt;/td&gt;
&lt;td&gt;~$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS queries&lt;/td&gt;
&lt;td&gt;~$0.40 per million&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total networking overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$8/month&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Compare that to alternatives like Application Gateway (~$200/month), APIM (~$300/month), or an App Service Environment (~$1,000/month). For simple back-end lockdown scenarios, Private Endpoints are by far the most cost-effective option.&lt;/p&gt;
&lt;h2 id="cleaning-up"&gt;Cleaning Up&lt;/h2&gt;
&lt;p&gt;Since everything is in a single resource group, cleanup is one command:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-powershell"&gt;az group delete -n SecureBackendDemo --yes --no-wait
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;Private Endpoints have replaced Service Endpoints as the recommended way to secure back-end App Services. The setup is more straightforward (especially with Bicep), the security is stronger (true private IP, data exfiltration protection), and the cost is minimal (~$8/month). If you're still using the Service Endpoints approach from my 2019 post, it's worth upgrading.&lt;/p&gt;
&lt;p&gt;The complete source code for this demo — including the .NET apps, Bicep template, and deployment scripts — is &lt;a href="https://github.com/markheath/securing-backend-appservices-private-endpoints"&gt;available on GitHub&lt;/a&gt;.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Azure</category>
  <category>App Service</category>
  <category>Azure CLI</category>
  <category>Bicep</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/3/31/securing-backend-appservices-private-endpoints</guid>
  <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Does Code Quality Still Matter in the Age of AI-Assisted Coding?</title>
  <link>https://markheath.net/post/2026/3/30/does-code-quality-still-matter</link>
  <description>&lt;p&gt;I'm increasingly hearing the sentiment that now AI models can write code for us, we no longer need to concern ourselves with concepts like &amp;quot;clean code&amp;quot;, eliminating code smells, following SOLID principles etc. All of these concerns, it's argued, are purely an attempt to make the codebase more comprehensible for &lt;em&gt;humans&lt;/em&gt;. But if humans are no longer reading the code, what does it matter? The only thing we should care about is whether the code &lt;em&gt;works&lt;/em&gt; correctly or not.&lt;/p&gt;
&lt;p&gt;I can partially understand this perspective. One great strength of AI agents is that they never tire. You can ask them to work on a &amp;quot;&lt;a href="https://www.geeksforgeeks.org/system-design/big-ball-of-mud-anti-pattern/"&gt;big ball of mud&lt;/a&gt;&amp;quot; and they won't complain. They don't mind if it's a giant convoluted monolith or an over-engineered set of microservices spread across multiple repos. They will just keep searching around in the code until they eventually find the bit they need to change.&lt;/p&gt;
&lt;p&gt;However, I think that this is a mistake - even if we grant that we don't need code to be &amp;quot;human readable&amp;quot; any more (which I'm also not convinced of - I still find it very useful to check in on how an agent is going about tackling a particular problem). Let me give just a few quick reasons why following these &amp;quot;traditional&amp;quot; coding guidelines still matters.&lt;/p&gt;
&lt;h2 id="finding-the-right-place"&gt;Finding the right place&lt;/h2&gt;
&lt;p&gt;The first thing a coding agent needs to do when fixing a bug or adding a new feature is to determine where in the codebase that change should be made. This involves searching, and if you look at the model's reasoning steps and tool calls you can see what it searches for (spoiler alert: it's mostly just grepping for words it thinks might be relevant).&lt;/p&gt;
&lt;p&gt;This has several implications. First, if our naming is weird or inconsistent, the agent will need more attempts to find the right place, slowing its progress considerably.&lt;/p&gt;
&lt;p&gt;Second, the agent may well miss some relevant portions of the codebase. The &amp;quot;&lt;a href="https://en.wikipedia.org/wiki/Shotgun_surgery"&gt;shotgun surgery&lt;/a&gt;&amp;quot; antipattern is where you need to modify many different files to implement a single feature. It's often the result of copy-and-pasted code, or of poor architectural decisions that don't organize key responsibilities or cross-cutting concerns into a single place. When you have code like this, the chances of your agent successfully finding all the places that need to be modified are greatly diminished.&lt;/p&gt;
&lt;p&gt;Then there's the context window size problem. In an ideal world, the agent would read the entire codebase in one go and reason about it as a unified whole. But that's simply not how agents work at the moment, partly because context windows aren't large enough (despite some recent models having a 1M token context window), and partly because the quality of model output tends to degrade as the session grows longer.&lt;/p&gt;
&lt;p&gt;This means, for example, that following the &amp;quot;Single Responsibility Principle&amp;quot; will greatly help the model. Once it's found the single class that is relevant to the task at hand, it can read it all, without polluting the context window with lots of irrelevant code.&lt;/p&gt;
&lt;p&gt;So a well-organized, modular codebase, with well-named functions and classes is going to greatly enhance the effectiveness of an AI agent working on that project, increasing its chances of quickly finding the right place to edit.&lt;/p&gt;
&lt;p&gt;The cost aspect of this should not be underestimated. These agents can quickly burn through enormous numbers of tokens, and it does seem that many of the subscription models are unsustainably subsidised at the moment.&lt;/p&gt;
&lt;p&gt;This means that in the (perhaps very near) future, we'll all be thinking a lot harder about how to make our agents read less code and perform fewer tool calls. The fact that each agent session starts out fresh means that it often has to spend time re-learning things it previously discovered. Already we are seeing many projects designed to address this problem (e.g. I just stumbled across &lt;a href="https://github.com/theDakshJaitly/mex"&gt;this one&lt;/a&gt; today).&lt;/p&gt;
&lt;h2 id="its-not-just-the-how-but-the-what-and-why"&gt;It's not just the how but the what and why&lt;/h2&gt;
&lt;p&gt;Code is instructions to the computer about what it should do. It expresses the &amp;quot;how&amp;quot; but not the &amp;quot;what&amp;quot; or the &amp;quot;why&amp;quot;. That's why good class and method names and code comments are important. They provide valuable additional context to the human reading it, so they can understand the &lt;em&gt;intent&lt;/em&gt; of the code. This contextual information is just as relevant to agents, which need to make connections between the natural language instructions you provide and the concepts found in the codebase.&lt;/p&gt;
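&lt;p&gt;As a tiny (invented) illustration, compare a method whose intent is opaque with one that states it. Both compute the same thing, but only the second gives a human - or an agent grepping the codebase - anything to connect to the word &amp;quot;discount&amp;quot; in a task description:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;// Opaque: the &amp;quot;how&amp;quot; is clear, but the &amp;quot;what&amp;quot; and &amp;quot;why&amp;quot; are not
public static decimal Calc(decimal a, int b) =&amp;gt; b &amp;gt; 10 ? a * 0.9m : a;

// Intent-revealing name, plus a comment capturing the &amp;quot;why&amp;quot;:
// bulk orders of more than 10 items get 10% off
public static decimal ApplyBulkDiscount(decimal orderTotal, int itemCount) =&amp;gt;
    itemCount &amp;gt; 10 ? orderTotal * 0.9m : orderTotal;
&lt;/code&gt;&lt;/pre&gt;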
&lt;h2 id="the-best-way-vs-the-quickest-way"&gt;The best way vs the quickest way&lt;/h2&gt;
&lt;p&gt;AI agents are very goal-oriented. Ask them to fix a bug or to add a feature and they will find a way to do it. Unless you explicitly instruct them to, they won't push back on the request, or propose alternative better strategies.&lt;/p&gt;
&lt;p&gt;When a human developer is fixing a bug, they will often take a step back and ask whether this bug is actually an example of a wider category of problems. So we might actually &lt;em&gt;increase&lt;/em&gt; the scope of the task at hand in order to prevent many similar issues in the future.&lt;/p&gt;
&lt;p&gt;I'm increasingly seeing the idea that we could set up an automated process whereby every time an issue is raised on your GitHub repo, an agent triages it, attempts a fix, and creates and merges a PR. This is of course incredibly appealing - imagine if 90% of bugs were just automatically fixed within hours of being reported.&lt;/p&gt;
&lt;p&gt;But unless this &amp;quot;bigger picture&amp;quot; thinking can also be baked into the fixing process, this approach could result in the classic &amp;quot;technical debt&amp;quot; problem where every issue is resolved in the &amp;quot;quickest way&amp;quot; without regard to the longer-term maintainability implications.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;Code quality still matters for any codebase that you plan to improve and maintain long-term. Even if humans don't have to suffer the pain of reading poorly architected codebases, the effectiveness of AI agents can be significantly hindered by allowing structure to degrade. Investing in code quality (even if it's just instructing the agents to do some rounds of cleanup and improvements after each task) will provide a stronger foundation for future development.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <category>SOLID</category>
  <category>Code Smells</category>
  <category>Clean Code</category>
  <category>Technical Debt</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/3/30/does-code-quality-still-matter</guid>
  <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Protecting Against Concurrent Updates in Azure Blob Storage with ETags</title>
  <link>https://markheath.net/post/2026/2/9/azure-blob-storage-etag-concurrency</link>
  <description>&lt;p&gt;I recently had to deal with a situation where there was potential for multiple processes to attempt to modify the same Azure blob at the same time.&lt;/p&gt;
&lt;p&gt;By default, if two processes read the same Azure blob and then both try to write updated content back, one of them will silently overwrite the other's changes. Fortunately, Azure Blob Storage provides a built-in mechanism to prevent this, called ETags. An ETag is simply a version token that changes every time a blob is modified. By passing the ETag you read back as a condition on your write, you can tell Azure to &amp;quot;only accept this update if nobody else has changed the blob since I last read it.&amp;quot; If someone else got there first, Azure returns a &lt;code&gt;412 Precondition Failed&lt;/code&gt; and you can retry with fresh data.&lt;/p&gt;
&lt;p&gt;Let's take a look at how to implement an optimistic concurrency pattern using ETags in C#.&lt;/p&gt;
&lt;h2 id="setting-up-the-clients"&gt;Setting Up the Clients&lt;/h2&gt;
&lt;p&gt;First, let's get connected to the storage account using the convenient &lt;code&gt;DefaultAzureCredential&lt;/code&gt; to avoid hard-coding any keys.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;var serviceUri = new Uri(&amp;quot;https://youraccountname.blob.core.windows.net/&amp;quot;);
var credential = new DefaultAzureCredential();
var blobServiceClient = new BlobServiceClient(serviceUri, credential);
var containerClient = blobServiceClient.GetBlobContainerClient(&amp;quot;your-container&amp;quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You'll need to reference the &lt;code&gt;Azure.Identity&lt;/code&gt; and &lt;code&gt;Azure.Storage.Blobs&lt;/code&gt; NuGet packages.&lt;/p&gt;
&lt;h2 id="fetching-the-blob-and-its-etag"&gt;Fetching the Blob and Its ETag&lt;/h2&gt;
&lt;p&gt;The crucial step is to retrieve the ETag alongside the blob content. Here I've made a simple helper called &lt;code&gt;FetchContentsAsync&lt;/code&gt; that returns both in one call:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;async Task&amp;lt;(string, ETag)&amp;gt; FetchContentsAsync(BlobClient blobClient)
{
    try
    {
        var content = await blobClient.DownloadContentAsync();
        return (content.Value.Content.ToString(), content.Value.Details.ETag);
    }
    catch (RequestFailedException ex) when (ex.Status == 404)
    {
        // Blob doesn't exist yet; return empty content and a default (empty) ETag
        return (string.Empty, default);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="writing-back-with-an-etag-condition"&gt;Writing Back with an ETag Condition&lt;/h2&gt;
&lt;p&gt;Now that we've retrieved the existing blob contents, let's imagine that we've updated them and now we want to re-upload.&lt;/p&gt;
&lt;p&gt;Before uploading, we need to set &lt;code&gt;BlobRequestConditions&lt;/code&gt; on the upload options. There are two cases:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Blob didn't exist&lt;/strong&gt; (&lt;code&gt;etag == default&lt;/code&gt;): use &lt;code&gt;IfNoneMatch = new ETag(&amp;quot;*&amp;quot;)&lt;/code&gt; so the upload only succeeds if the blob still doesn't exist.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Blob already exists&lt;/strong&gt;: use &lt;code&gt;IfMatch = etag&lt;/code&gt; so the upload only succeeds if the blob's current ETag still matches the one we read.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If the condition fails, Azure returns &lt;code&gt;412 Precondition Failed&lt;/code&gt; and the SDK throws a &lt;code&gt;RequestFailedException&lt;/code&gt;. We catch that and return &lt;code&gt;false&lt;/code&gt; to signal a conflict.&lt;/p&gt;
&lt;p&gt;Again I've created a simple helper method &lt;code&gt;UpdateContentsAsync&lt;/code&gt; to show how we can do this and detect the concurrency issue.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;async Task&amp;lt;bool&amp;gt; UpdateContentsAsync(BlobClient blobClient, string contents, ETag etag)
{
    var uploadOptions = new BlobUploadOptions
    {
        Conditions = etag == default
            // Blob didn't exist: only create if still absent
            ? new BlobRequestConditions { IfNoneMatch = new ETag(&amp;quot;*&amp;quot;) }
            // Blob existed: only overwrite if ETag still matches
            : new BlobRequestConditions { IfMatch = etag }
    };

    try
    {
        var response = await blobClient.UploadAsync(BinaryData.FromString(contents), uploadOptions);
        Console.WriteLine($&amp;quot;Successful update of etag \&amp;quot;{response.Value.ETag}\&amp;quot;&amp;quot;);
        return true;
    }
    catch (RequestFailedException ex) when (ex.Status == 412 || ex.ErrorCode == BlobErrorCode.ConditionNotMet)
    {
        // Another writer changed the blob between our read and write
        Console.WriteLine($&amp;quot;Concurrency conflict detected. Old ETag: \&amp;quot;{etag}\&amp;quot;&amp;quot;);
        return false;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="retrying-with-exponential-backoff"&gt;Retrying with Exponential Backoff&lt;/h2&gt;
&lt;p&gt;A conflict just means someone else updated the blob first, so we don't need to give up. Instead we can fetch the latest version and try again. To avoid lots of processes retrying in lockstep, we back off exponentially and add a small random jitter to each delay.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;async Task ModifyBlob(
    BlobContainerClient container,
    string blobName,
    Func&amp;lt;string, Task&amp;lt;string&amp;gt;&amp;gt; transform,
    CancellationToken ct)
{
    ArgumentNullException.ThrowIfNull(transform);

    var maxRetries = 5;
    var attempt = 0;
    var delay = TimeSpan.FromSeconds(2);
    var blobClient = container.GetBlobClient(blobName);

    while (attempt &amp;lt; maxRetries)
    {
        ct.ThrowIfCancellationRequested();
        attempt++;

        var (contents, etag) = await FetchContentsAsync(blobClient);
        var newContents = await transform(contents);

        if (await UpdateContentsAsync(blobClient, newContents, etag))
            return; // success

        // Back off before retrying
        var jitterMs = Random.Shared.Next(0, 100);
        await Task.Delay(delay + TimeSpan.FromMilliseconds(jitterMs), ct);

        // Exponential backoff, capped at 5 seconds
        delay = TimeSpan.FromMilliseconds(Math.Min(delay.TotalMilliseconds * 2, 5_000));
    }

    throw new InvalidOperationException(
        $&amp;quot;Failed to update blob '{blobName}' after {maxRetries} attempts.&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;transform&lt;/code&gt; delegate receives the current blob content and returns the new content. &lt;code&gt;ModifyBlob&lt;/code&gt; handles all the retry logic so callers don't need to think about ETags at all.&lt;/p&gt;
&lt;h2 id="seeing-concurrency-conflicts-in-action"&gt;Seeing Concurrency Conflicts in Action&lt;/h2&gt;
&lt;p&gt;To check this actually works, we can write a simple test that simulates two concurrent updaters. The outer transform, before writing its own change, triggers an inner call to modify the blob that successfully commits first. When control returns to the outer call its ETag is now stale, so the first attempt fails and the retry loop kicks in.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;bool firstTime = true;

await ModifyBlob(containerClient, blobName, async currentContent =&amp;gt;
{
    if (firstTime)
    {
        // While the outer call holds its ETag, the inner call commits a change,
        // invalidating the outer ETag.
        await ModifyBlob(
            containerClient, blobName,
            c =&amp;gt; Task.FromResult($&amp;quot;{c}\r\nInner update {DateTimeOffset.Now}&amp;quot;),
            CancellationToken.None);
    }
    firstTime = false;
    return $&amp;quot;{currentContent}\r\nOuter conflicting update {DateTimeOffset.Now}&amp;quot;;
}, CancellationToken.None);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;On the first pass through the outer loop you'll see output like:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-txt"&gt;Successful update of etag &amp;quot;0x1234...&amp;quot;   ← inner update wins
Concurrency conflict detected. Old ETag: &amp;quot;0x1234...&amp;quot;  ← outer detects stale ETag
Successful update of etag &amp;quot;0x5678...&amp;quot;   ← outer retries and succeeds
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Both updates end up in the blob — neither is lost.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;ETags give you a simple optimistic concurrency mechanism for Azure Blob Storage: read the blob and its ETag together, apply your changes, then write back with a condition that fails if the blob has been modified in the meantime. If you wrap that in a retry loop with exponential backoff and jitter, you have a robust pattern that handles any number of concurrent writers without data loss or locks.&lt;/p&gt;
&lt;p&gt;Obviously in an ideal world you wouldn't be making lots of concurrent updates to blobs, but if you do, you can use the approach demonstrated in the &lt;code&gt;ModifyBlob&lt;/code&gt; helper above.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Azure</category>
  <category>.NET</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/2/9/azure-blob-storage-etag-concurrency</guid>
  <pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>EF Core Lazy Loading Performance Gotcha</title>
  <link>https://markheath.net/post/2026/1/8/efcore-lazy-loader-gotcha</link>
  <description>&lt;p&gt;I was recently using EF Core's &lt;code&gt;ILazyLoader&lt;/code&gt; for &lt;a href="https://learn.microsoft.com/en-us/ef/core/querying/related-data/lazy#lazy-loading-without-proxies"&gt;lazy loading without proxies&lt;/a&gt;, and ran into a performance issue that took me by surprise. When you call &lt;code&gt;DbSet&amp;lt;T&amp;gt;.Add()&lt;/code&gt; to add an entity to the context, EF Core immediately injects the lazy loader into your entity even before you've called &lt;code&gt;SaveChangesAsync()&lt;/code&gt;. This means if you navigate to a lazy-loaded navigation property before persisting, EF Core will try to query the database for related entities that don't exist yet.&lt;/p&gt;
&lt;p&gt;It's an unnecessary performance overhead and the fix is fortunately very simple: don't add entities to the DbContext until right before you're ready to call &lt;code&gt;SaveChangesAsync()&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id="the-model"&gt;The Model&lt;/h2&gt;
&lt;p&gt;To understand how it behaves I created a simple example project using a &lt;code&gt;Blog&lt;/code&gt; and &lt;code&gt;Post&lt;/code&gt; relationship with &lt;code&gt;ILazyLoader&lt;/code&gt; injection:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;public class Blog
{
    private ICollection&amp;lt;Post&amp;gt;? _posts;
    private ILazyLoader? _lazyLoader;

    public Blog() {}

    public Blog(ILazyLoader lazyLoader)
    {
        _lazyLoader = lazyLoader;
    }

    public int Id { get; set; }
    public required string Name { get; set; }
    
    public virtual ICollection&amp;lt;Post&amp;gt; Posts
    {
        get =&amp;gt; _lazyLoader?.Load(this, ref _posts) ?? _posts ?? [];
        set =&amp;gt; _posts = value;
    }
}

public class Post
{
    public int Id { get; set; }
    public required string Title { get; set; }
    public required string Content { get; set; }
    public virtual Blog? Blog { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;
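&lt;p&gt;The snippets in this post also use a &lt;code&gt;BloggingContext&lt;/code&gt; that isn't shown above; a minimal version might look something like this (I've picked SQLite purely for illustration - any provider will do, and no special configuration is needed for &lt;code&gt;ILazyLoader&lt;/code&gt; constructor injection):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;public class BloggingContext : DbContext
{
    public DbSet&amp;lt;Blog&amp;gt; Blogs =&amp;gt; Set&amp;lt;Blog&amp;gt;();
    public DbSet&amp;lt;Post&amp;gt; Posts =&amp;gt; Set&amp;lt;Post&amp;gt;();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        // Hypothetical connection string - substitute your own provider
        =&amp;gt; options.UseSqlite(&amp;quot;Data Source=blogging.db&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;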
&lt;h2 id="reproducing-the-problem"&gt;Reproducing The Problem&lt;/h2&gt;
&lt;p&gt;Now let's look at what happens when you add a blog with posts, but navigate into the &lt;code&gt;Posts&lt;/code&gt; collection before persisting to the database:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;using (var context = new BloggingContext())
{
    await context.Database.EnsureCreatedAsync();

    // Create a new Blog with two Posts
    var blog = new Blog
    {
        Name = &amp;quot;Test Blog&amp;quot;,
        Posts =
        [
            new Post { Title = &amp;quot;First Post&amp;quot;, Content = &amp;quot;Hello from EF Core 10!&amp;quot; },
            new Post { Title = &amp;quot;Second Post&amp;quot;, Content = &amp;quot;Another post for testing.&amp;quot; }
        ]
    };

    // This causes EF Core to inject the lazy loader using reflection
    context.Blogs.Add(blog);

    // Accessing blog.Posts triggers the lazy loader to query the database
    // even though this blog hasn't been saved yet!
    Console.WriteLine(&amp;quot;Number of posts: &amp;quot; + blog.Posts.Count);

    await context.SaveChangesAsync();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you call &lt;code&gt;context.Blogs.Add(blog)&lt;/code&gt;, EF Core uses reflection to inject an &lt;code&gt;ILazyLoader&lt;/code&gt; instance into the &lt;code&gt;Blog&lt;/code&gt; object. From that point on, any access to &lt;code&gt;blog.Posts&lt;/code&gt; will trigger the lazy loading mechanism. Since the blog doesn't exist in the database yet (no &lt;code&gt;Id&lt;/code&gt; has been assigned), EF Core will execute a query that looks something like:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-sql"&gt;SELECT [p].[Id], [p].[BlogId], [p].[Content], [p].[Title]
FROM [Posts] AS [p]
WHERE [p].[BlogId] = 0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is completely pointless - the blog hasn't been persisted, so there can't possibly be any related posts in the database.&lt;/p&gt;
&lt;h2 id="the-solution"&gt;The Solution&lt;/h2&gt;
&lt;p&gt;The fix is straightforward: only add the entity to the context right before you call &lt;code&gt;SaveChangesAsync()&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-csharp"&gt;using (var context = new BloggingContext())
{
    await context.Database.EnsureCreatedAsync();

    var blog = new Blog
    {
        Name = &amp;quot;Test Blog&amp;quot;,
        Posts =
        [
            new Post { Title = &amp;quot;First Post&amp;quot;, Content = &amp;quot;Hello from EF Core 10!&amp;quot; },
            new Post { Title = &amp;quot;Second Post&amp;quot;, Content = &amp;quot;Another post for testing.&amp;quot; }
        ]
    };

    // Do all your work with the blog object first
    Console.WriteLine(&amp;quot;Number of posts: &amp;quot; + blog.Posts.Count);

    // Only add to context when you're ready to save
    context.Blogs.Add(blog);
    await context.SaveChangesAsync();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now when you access &lt;code&gt;blog.Posts&lt;/code&gt;, there's no lazy loader injected yet, so it just returns the collection you assigned, with no database query needed.&lt;/p&gt;
&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;
&lt;p&gt;If you're using &lt;code&gt;ILazyLoader&lt;/code&gt; injection in EF Core, be mindful of when you add entities to the &lt;code&gt;DbContext&lt;/code&gt;. The lazy loader gets injected as soon as you call &lt;code&gt;Add()&lt;/code&gt;, not when you call &lt;code&gt;SaveChangesAsync()&lt;/code&gt;. So if you need to work with navigation properties before persisting, keep the entity disconnected from the context until you're ready to save. This avoids unnecessary database queries.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Entity Framework Core</category>
  <category>.NET</category>
  <guid isPermaLink="false">https://markheath.net/post/2026/1/8/efcore-lazy-loader-gotcha</guid>
  <pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate>
</item>
<item>
  <title>2025 Year in Review</title>
  <link>https://markheath.net/post/2025/12/31/2025-year-in-review</link>
  <description>&lt;p&gt;Happy Christmas and happy new year! I know it's been a while since I last posted anything here, but thought I'd revive my tradition of writing another &lt;a href="https://markheath.net/category/year%20in%20review"&gt;year in review&lt;/a&gt; post.&lt;/p&gt;
&lt;h3 id="pluralsight"&gt;Pluralsight&lt;/h3&gt;
&lt;p&gt;Part of the reason for me not having as much time for blogging is that I created three new Pluralsight courses this year, &lt;a href="https://www.pluralsight.com/authors/mark-heath"&gt;bringing my total to 29&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First up was &lt;a href="https://www.pluralsight.com/courses/refactor-optimize-code-github-copilot"&gt;a course about refactoring and optimizing code with GitHub Copilot&lt;/a&gt;. Obviously 2025 has been the year in which AI has firmly established itself as a day-to-day part of the developer experience, and it's an extremely fast-moving space. AI-assisted coding can be both incredibly impressive and incredibly frustrating. Impressive as it can often write in a few seconds what would have taken hours or even days to write manually, but frustrating as it can often miss the point of what you're asking or make critical mistakes that cost you almost as much time as you saved. In my course I tried to focus on the basics of how to prompt the AI assistant well, to enable you to get as much benefit out of it as possible, without falling into the pitfall of losing control of your codebase and ending up with a vibe-coded mess.&lt;/p&gt;
&lt;p&gt;Next up was two courses about microservices, which essentially replace and update my earlier Pluralsight courses on the same topic. Despite some &lt;a href="https://markheath.net/post/2025/2/24/microservices-pushback"&gt;recent pushback against microservices&lt;/a&gt; in the industry, it remains a valuable and important architectural approach, and the core principles of microservices are relevant whenever you're building a distributed application (which for a lot of us, is all the time).&lt;/p&gt;
&lt;p&gt;The first microservice course was &lt;a href="https://www.pluralsight.com/courses/microservices-architectural-strategies-techniques"&gt;Microservices: Architectural Strategies and Techniques&lt;/a&gt;, which covers some of the key principles for designing scalable and modular microservice architectures and explores the value of service meshes and continuous delivery pipelines. And the second was &lt;a href="https://www.pluralsight.com/courses/microservices-building-testing"&gt;Microservices: Building and Testing&lt;/a&gt; which focuses in more detail on topics like implementing the domain logic, as well as how to test and deploy microservices.&lt;/p&gt;
&lt;h3 id="carpal-tunnel-surgery"&gt;Carpal Tunnel Surgery&lt;/h3&gt;
&lt;p&gt;Another reason for my reduced blogging output this year was some health issues. I've been battling back pain for a few years, although this year a strict regime of daily stretches and exercises and much more use of a standing desk seems to have helped a lot, and I'm a lot better than I was. For any younger developers reading this, make sure you look after your back - it's frustratingly slow to recover once you've injured it!&lt;/p&gt;
&lt;p&gt;I've also been having a lot of issues with hand numbness and finally had carpal tunnel surgery on my left hand (which was my worst) midway through the year. I was quite apprehensive about whether it would impact or even eliminate my ability to play guitar but I'm pleased to report that my strength and flexibility returned enough after a couple of months to continue playing as before. Thankfully my right hand isn't as bad, so I'm not in a rush to get that one done yet.&lt;/p&gt;
&lt;h3 id="music-and-audio"&gt;Music and Audio&lt;/h3&gt;
&lt;p&gt;As you may know, one of my favourite hobbies is playing and recording music, and this year, even with a break for carpal tunnel surgery, I managed to play guitar or piano live at 32 events, as well as participating in recording a live album, which was a first for me.&lt;/p&gt;
&lt;p&gt;I also continued my tradition of composing and writing one instrumental song a month (which I occasionally batch up into albums that you can find here on &lt;a href="https://open.spotify.com/artist/4036iD5XfdOJvs4MNVZlSY"&gt;Spotify&lt;/a&gt; or &lt;a href="https://markheath.bandcamp.com/"&gt;Bandcamp&lt;/a&gt; or just listen to them as they come out on &lt;a href="https://www.youtube.com/@mark_heath"&gt;YouTube&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;This Christmas I upgraded my long-serving Yamaha MODX 7 keyboard to the newer &lt;a href="https://usa.yamaha.com/products/music_production/synthesizers/modxm/index.html"&gt;Yamaha MODX M7&lt;/a&gt; which is a very nice upgrade with the new ANX audio engine, better AWM polyphony and an improved user interface. It's also interesting to me that we are seeing an increasing number of hardware synthesizers providing fully software versions of their sounds, meaning that you can much more easily transition between studio and live playing using the same sounds (&lt;a href="https://www.arturia.com/products/hardware-synths/astrolab/astrolab-37"&gt;Arturia's Astrolab series&lt;/a&gt; also does this).&lt;/p&gt;
&lt;p&gt;In terms of guitar tech, I'm still very happy with my &lt;a href="https://line6.com/helix/helix-lt.html"&gt;Line 6 Helix LT&lt;/a&gt; and &lt;a href="https://www.ikmultimedia.com/products/tonexpedal/?pkey=tonex-pedal"&gt;IK Multimedia TONEX&lt;/a&gt;, which between them give me access to a very wide variety of tones and effects. Again it's a very fast-moving space, with many exciting new software and hardware products being released and we're also seeing machine learning take a much more prominent role in music production (a trend I expect to increase in 2026).&lt;/p&gt;
&lt;h3 id="ai.net-and-azure"&gt;AI, .NET and Azure&lt;/h3&gt;
&lt;p&gt;My day job continues to revolve mostly around .NET and Azure, as well as increasingly incorporating various AI technologies (both in the development process and to power new functionality).&lt;/p&gt;
&lt;p&gt;My work with Azure this year has been a lot less on learning about new services, and more on how to deliver excellent resilience, scalability, and performance. I hope to feed a lot of the lessons I've learned into upcoming Pluralsight courses and talks.&lt;/p&gt;
&lt;p&gt;I'm also hoping to find more time this year to go deeper with Azure Container Apps and Dapr which both have a lot to offer to simplify the process of building and deploying microservices and distributed applications.&lt;/p&gt;
&lt;p&gt;It's great to see that each new version of .NET manages to squeeze out more performance improvements, and this has meant I have never regretted choosing .NET as my main development platform. (Still hoping for discriminated unions in C# though!)&lt;/p&gt;
&lt;p&gt;Of course, there was also a lot of AI this year. I am both an AI enthusiast and an AI skeptic - it has potential to be very helpful but also very harmful. A key skill for all developers is knowing when and how to use it effectively.&lt;/p&gt;
&lt;p&gt;I did attempt the &lt;a href="https://adventofcode.com/2025"&gt;Advent of Code&lt;/a&gt; challenges again this year, forcing myself to do them without the help of AI. Sadly, I didn't manage to complete all the challenges due to time constraints, so I'd like to circle back to the two I missed if I get a chance later in the year.&lt;/p&gt;
&lt;h3 id="whats-next"&gt;What's next?&lt;/h3&gt;
&lt;p&gt;As for what's in store for next year, there's a good chance that I'll be creating one or two additional Pluralsight courses, although that's not been confirmed yet.&lt;/p&gt;
&lt;p&gt;I took a break from speaking at conferences when my back was at its worst, and haven't currently got any new talks planned, but maybe if things continue to go well this year I might consider taking that up again.&lt;/p&gt;
&lt;p&gt;And I think this might be my final year as Microsoft MVP as I have not been able to contribute as much as I have in previous years. It's been a great privilege to be part of the MVP program for nearly 10 years now, so I'll take the opportunity to say a big thank you to the MVP organizers and the other MVPs for all they do to ensure that .NET developers get access to great learning resources.&lt;/p&gt;
&lt;p&gt;Once again, a big thank you to everyone who has read this blog or watched my Pluralsight courses. I hope you've found them helpful and thanks for all the encouraging feedback.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Year in review</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/12/31/2025-year-in-review</guid>
  <pubDate>Wed, 31 Dec 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Are Microservices Becoming Easier?</title>
  <link>https://markheath.net/post/2025/7/10/microservices-architectural-strategies-techniques</link>
  <description>&lt;p&gt;I've been a bit quiet on this blog recently, mainly because I've been busy working on a new Pluralsight course &lt;a href="https://www.pluralsight.com/courses/microservices-architectural-strategies-techniques"&gt;Microservices: Architectural Strategies and Techniques&lt;/a&gt;, which essentially replaces my previous &lt;a href="https://www.pluralsight.com/courses/microservices-fundamentals"&gt;Microservices Fundamentals&lt;/a&gt; course, although they cover slightly different topics. In this course, I wanted to make sure I addressed some of the &lt;a href="https://markheath.net/post/2025/2/24/microservices-pushback"&gt;&amp;quot;pushback&amp;quot; against microservices&lt;/a&gt;, as it's fair to say that there have been some legitimate questions asked about whether microservices are being applied to problems where they don't actually help.&lt;/p&gt;
&lt;p&gt;However, despite their challenges, I do think there are situations in which microservices can make a lot of sense. And that's because when a software product becomes large enough, with many teams of developers, many user-facing websites or applications, and many APIs, it's inevitable that it becomes a distributed system.&lt;/p&gt;
&lt;p&gt;In some ways you can think of microservices as simply a more disciplined approach to distributed systems, where you take care to ensure that each service is &lt;strong&gt;independently deployable&lt;/strong&gt;. This helps you avoid the pitfall of building a &amp;quot;distributed monolith&amp;quot; - an architecture famous for combining the worst aspects of both monoliths and distributed systems.&lt;/p&gt;
&lt;p&gt;In fact, the majority of the tools, techniques and strategies I discuss in the course are not strictly specific to microservices. That's because most of the key concerns about observability, security, scalability, testability, and automated deployment are things that you'll need in a distributed system regardless of whether you are explicitly trying to create &amp;quot;microservices&amp;quot;.&lt;/p&gt;
&lt;h3 id="are-microservices-becoming-easier"&gt;Are Microservices Becoming Easier?&lt;/h3&gt;
&lt;p&gt;One of the hopes in the early days of microservices was that over time, we'd develop tooling that helped us overcome many of the challenges of building, testing, and deploying distributed systems.&lt;/p&gt;
&lt;p&gt;In some ways that is true. For example, Kubernetes is incredibly powerful and flexible and has established itself as the de-facto standard for hosting microservices. However, I certainly wouldn't describe it as simple to learn and manage. But we are seeing the emergence of simplified microservices hosting platforms, such as &lt;a href="https://learn.microsoft.com/en-us/azure/container-apps/overview"&gt;Azure Container Apps&lt;/a&gt; which is built on top of Kubernetes, but takes away a lot of the complexity and streamlines the process of hosting your microservices.&lt;/p&gt;
&lt;p&gt;Another favourite toolkit of mine for building microservices is &lt;a href="https://dapr.io/"&gt;Dapr&lt;/a&gt;, which offers a set of &amp;quot;building blocks&amp;quot; to enable you to build secure and reliable microservices. I've created a &lt;a href="https://app.pluralsight.com/library/courses/dapr-1-fundamentals"&gt;Dapr Fundamentals&lt;/a&gt; Pluralsight course covering it. Dapr delivers these capabilities by exposing APIs from a sidecar container. This approach makes Dapr programming-language agnostic, and cloud agnostic too, since each building block can be backed by a variety of services, giving you a lot of freedom to use the languages and services you are familiar with.&lt;/p&gt;
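&lt;p&gt;To give a flavour of the sidecar model, here's a minimal sketch of calling Dapr's state management building block from C#, assuming a sidecar listening on the default HTTP port 3500 and a state store component named &lt;code&gt;statestore&lt;/code&gt; (the key and value are just examples):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var http = new HttpClient();
// save state by POSTing key-value pairs to the sidecar's state API
var body = new StringContent(
    &amp;quot;[{ \&amp;quot;key\&amp;quot;: \&amp;quot;order-1\&amp;quot;, \&amp;quot;value\&amp;quot;: \&amp;quot;pending\&amp;quot; }]&amp;quot;,
    Encoding.UTF8, &amp;quot;application/json&amp;quot;);
await http.PostAsync(&amp;quot;http://localhost:3500/v1.0/state/statestore&amp;quot;, body);
// read it back by key
var value = await http.GetStringAsync(
    &amp;quot;http://localhost:3500/v1.0/state/statestore/order-1&amp;quot;);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the sidecar owns the API, swapping the backing store (say Redis for Azure Cosmos DB) is just a component configuration change - the application code above doesn't change at all.&lt;/p&gt;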
&lt;p&gt;In the .NET world, &lt;a href="https://learn.microsoft.com/en-us/dotnet/aspire/get-started/aspire-overview"&gt;.NET Aspire&lt;/a&gt; aims to improve the experience of building microservices by providing various tools, templates and packages that especially enhance the local development experience. So it does feel like things are moving in the right direction towards simplifying the overall microservices experience.&lt;/p&gt;
&lt;p&gt;And in my Pluralsight course I also wanted to include a brief section exploring the ways in which AI is able to streamline the experience of building, deploying and managing microservice applications. A lot of the pain points of microservices revolve around the complexities of managing a system made up of so many interconnected parts. It's still early days for AI, but I am hopeful that it could make a big difference especially in the area of monitoring and troubleshooting distributed systems.&lt;/p&gt;
&lt;h3 id="summary"&gt;Summary&lt;/h3&gt;
&lt;p&gt;Microservices remain a valuable architectural pattern, despite the potential troubles you can run into with them. Generally, my architectural preference is to keep things as simple as possible, and only reach for more advanced patterns and tools when you have proved that you really need them. So most of the tools and techniques I show in the course are not so much a prescription of what you should do as suggestions for things you might reach for if you're experiencing the problems they're designed to solve. If you're a Pluralsight subscriber, why not check out my &lt;a href="https://www.pluralsight.com/courses/microservices-architectural-strategies-techniques"&gt;Microservices: Architectural Strategies and Techniques&lt;/a&gt; course? And as always, I'm very interested in learning from other people's experiences, so do feel free to get in touch via the comments.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>Microservices</category>
  <category>Pluralsight</category>
  <category>dapr</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/7/10/microservices-architectural-strategies-techniques</guid>
  <pubDate>Thu, 10 Jul 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Calling MCP Servers in C# with Microsoft.Extensions.AI</title>
  <link>https://markheath.net/post/2025/4/14/calling-mcp-server-microsoft-extensions-ai</link>
  <description>&lt;p&gt;I posted recently about how to allow &lt;a href="https://markheath.net/post/2025/1/18/using-tools-safely-with-llms"&gt;LLMs to call tools&lt;/a&gt; using the Microsoft.Extensions.AI NuGet package in C#.&lt;/p&gt;
&lt;p&gt;Obviously, a common usage scenario would be to expose MCP servers as tools for your LLM to call. Thankfully, the new &lt;a href="https://www.nuget.org/packages/ModelContextProtocol"&gt;ModelContextProtocol NuGet package&lt;/a&gt; makes this straightforward.&lt;/p&gt;
&lt;p&gt;Note: This package is still in pre-release (as is Microsoft.Extensions.AI), so do check the release notes for any breaking changes to the API.&lt;/p&gt;
&lt;p&gt;I've updated my &lt;a href="https://github.com/markheath/open-ai-test1/"&gt;demo application&lt;/a&gt; to support calling MCP tools, following the techniques demonstrated in Microsoft's &lt;a href="https://github.com/modelcontextprotocol/csharp-sdk/blob/main/samples/ChatWithTools"&gt;Chat With Tools&lt;/a&gt; sample.&lt;/p&gt;
&lt;p&gt;The first step is simply to reference the ModelContextProtocol NuGet package. I also had to update the &lt;a href="https://www.nuget.org/packages/Microsoft.Extensions.AI"&gt;Microsoft.Extensions.AI&lt;/a&gt; packages. Here are the versions I used for my test:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-xml"&gt;&amp;lt;PackageReference Include=&amp;quot;Microsoft.Extensions.AI&amp;quot; Version=&amp;quot;9.4.0-preview.1.25207.5&amp;quot; /&amp;gt;
&amp;lt;PackageReference Include=&amp;quot;Microsoft.Extensions.AI.OpenAI&amp;quot; Version=&amp;quot;9.4.0-preview.1.25207.5&amp;quot; /&amp;gt;
&amp;lt;PackageReference Include=&amp;quot;ModelContextProtocol&amp;quot; Version=&amp;quot;0.1.0-preview.8&amp;quot; /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The next step is simply to connect to an MCP server. You can do this with the &lt;code&gt;McpClientFactory&lt;/code&gt;. Here, we're using &lt;code&gt;npx&lt;/code&gt; (which comes with Node) to run a simple example MCP server called the &lt;a href="https://www.npmjs.com/package/@modelcontextprotocol/server-everything"&gt;&amp;quot;Everything&amp;quot; server&lt;/a&gt; as it demonstrates the range of capabilities of an MCP server.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new()
    {
        Command = &amp;quot;npx&amp;quot;,
        Arguments = [&amp;quot;-y&amp;quot;, &amp;quot;--verbose&amp;quot;, &amp;quot;@modelcontextprotocol/server-everything&amp;quot;],
        Name = &amp;quot;Everything&amp;quot;,
    }));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can then use the MCP client to list the available tools:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var tools = await mcpClient.ListToolsAsync();
Console.WriteLine(&amp;quot;Available tools:&amp;quot;);
foreach (var tool in tools)
{
    Console.WriteLine($&amp;quot;  {tool.Name}: {tool.Description}&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These tools are instances of &lt;code&gt;McpClientTool&lt;/code&gt;, which inherits from &lt;code&gt;AIFunction&lt;/code&gt;, meaning that we can pass them directly in as tools to an instance of &lt;code&gt;ChatOptions&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;var chatOptions = new ChatOptions
{
    Tools = [..tools]
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And then using the tools is simply a case of passing those options into the call to &lt;code&gt;GetStreamingResponseAsync&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt; await foreach (var item in chatClient.GetStreamingResponseAsync(
        chatHistory, chatOptions))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Although it's early days for the MCP protocol, it's very pleasing to see how easy it is to get your LLM calling tools provided by an MCP server. For the full code sample, showing how to get this working with Azure OpenAI service, check my &lt;a href="https://github.com/markheath/open-ai-test1/"&gt;demo repo here&lt;/a&gt;.&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <category>C#</category>
  <category>MCP</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/4/14/calling-mcp-server-microsoft-extensions-ai</guid>
  <pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Using an MCP Server in GitHub Copilot</title>
  <link>https://markheath.net/post/2025/4/10/mcp-playwright</link>
  <description>&lt;p&gt;&lt;a href="https://github.com/features/copilot"&gt;GitHub Copilot&lt;/a&gt; is continuing to evolve very rapidly, with the recent launch of &lt;a href="https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode"&gt;&amp;quot;agent mode&amp;quot;&lt;/a&gt;, and the ability to connect to &lt;a href="https://code.visualstudio.com/docs/copilot/chat/mcp-servers"&gt;&amp;quot;Model Context Protocol&amp;quot; servers&lt;/a&gt; which gives you access to a vast array of tools, essentially allowing your agent to access any data, and perform any actions you like.&lt;/p&gt;
&lt;p&gt;In this post, I'll walk you through the steps to configure Visual Studio Code to connect to an MCP Server and make use of it in agent mode.&lt;/p&gt;
&lt;p&gt;For our demo scenario, we'll use the &lt;a href="https://github.com/microsoft/playwright-mcp"&gt;Playwright MCP server&lt;/a&gt; which exposes the capabilities of the &lt;a href="https://playwright.dev/"&gt;Playwright&lt;/a&gt; end-to-end browser automation testing tool. This essentially gives your &amp;quot;agent&amp;quot; the ability to open a web browser and perform actions in there, which obviously opens up a lot of possibilities.&lt;/p&gt;
&lt;h3 id="configure-the-mcp-server-in-vs-code"&gt;Configure the MCP Server in VS Code&lt;/h3&gt;
&lt;p&gt;First of all, we do need to have &lt;a href="https://nodejs.org/"&gt;node installed&lt;/a&gt; as we need the &lt;code&gt;npx&lt;/code&gt; tool to run the Playwright server.&lt;/p&gt;
&lt;p&gt;Next, we need to configure Visual Studio Code to access the Playwright MCP Server. The &lt;a href="https://github.com/microsoft/playwright-mcp"&gt;README docs&lt;/a&gt; provide several ways to do this, the easiest being to just click on the &amp;quot;Install Server&amp;quot; button.&lt;/p&gt;
&lt;p&gt;Installing it will change your &lt;code&gt;settings.json&lt;/code&gt; file to add something like this. Note that I needed to add the &lt;code&gt;--browser msedge&lt;/code&gt; args because I don't have Chrome, Playwright's default browser, installed on my PC.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;&amp;quot;mcp&amp;quot;: {
    &amp;quot;servers&amp;quot;: {
        &amp;quot;playwright&amp;quot;: {
            &amp;quot;command&amp;quot;: &amp;quot;npx&amp;quot;,
            &amp;quot;args&amp;quot;: [
                &amp;quot;-y&amp;quot;,
                &amp;quot;@playwright/mcp@latest&amp;quot;,
                &amp;quot;--browser&amp;quot;,
                &amp;quot;msedge&amp;quot;
            ]
        }
    }
},
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You also need to ensure that the MCP server starts correctly. In theory VS Code should do this automatically, but you can use the &amp;quot;MCP: List Servers&amp;quot; command in VS Code to pick your server and explicitly start, stop or restart it.&lt;/p&gt;
&lt;h3 id="selecting-agent-mode-and-enabling-tools"&gt;Selecting agent mode and enabling tools&lt;/h3&gt;
&lt;p&gt;To try it out, you simply open the GitHub Copilot Chat window, and in the drop-down in the prompt box, select &amp;quot;Agent&amp;quot; mode. The prompt box will now include a button to let you select the tools that you want to allow the agent to use.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://markheath.net/posts/files/mcp-playwright-1.png" alt="Agent mode prompt" /&gt;&lt;/p&gt;
&lt;p&gt;The Playwright MCP server actually offers multiple tool actions such as navigating, clicking, typing, etc. So if you want to, you can restrict the agent to only be allowed to call specific tools.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://markheath.net/posts/files/mcp-playwright-2.png" alt="Playwright tool selection" /&gt;&lt;/p&gt;
&lt;h3 id="trying-it-out"&gt;Trying it out&lt;/h3&gt;
&lt;p&gt;Next, we simply need to ask the agent to do something that requires the use of the tool. For example, I prompted it with &lt;em&gt;&amp;quot;visit the hacker news home page and find all articles relating to audio or music&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If the AI model decides it wants to run a tool, it will pause, asking you for permission. This is good from a security perspective, although I can imagine people quickly getting tired of granting permission at a granular level, and just enabling their agents to do whatever they think is best.&lt;/p&gt;
&lt;p&gt;In this example, it's likely to ask to run the &lt;code&gt;browser_navigate&lt;/code&gt; command to go to the Hacker News homepage. In my case, it found two articles and decided to run &lt;code&gt;browser_navigate&lt;/code&gt; again on both of them so it could read these articles and summarise them.&lt;/p&gt;
&lt;p&gt;Of course, the agent might not do exactly what you wanted. I found that with this particular prompt, sometimes it assumed I wanted it to create an application that fetched the Hacker News homepage and printed out the titles of the articles about audio.&lt;/p&gt;
&lt;p&gt;So don't forget the retry button at the bottom of each response. You can always ask the model to try again (or try again with a different model) if you're not happy with the initial response.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://markheath.net/posts/files/mcp-playwright-3.png" alt="Retry button" /&gt;&lt;/p&gt;
&lt;h3 id="what-else-can-i-do-with-mcp"&gt;What else can I do with MCP?&lt;/h3&gt;
&lt;p&gt;The great thing about MCP is that it is extremely flexible. Already, hundreds of servers have been created (you can find a good &lt;a href="https://modelcontextprotocol.io/examples"&gt;list here&lt;/a&gt;). An obvious use-case would be to give the LLM access to data stored in various locations - such as your company's internal documentation.&lt;/p&gt;
&lt;p&gt;And you can also use them to trigger actions in other systems, whether that's posting an update onto a Slack channel, or by calling your own internal API to perform a custom business process.&lt;/p&gt;
&lt;p&gt;Already it seems like MCP has been accepted as the de facto standard for granting AI agents access to a wide variety of tools, and the ease with which an MCP server can be built means that just about any kind of integration you can think of is achievable.&lt;/p&gt;
&lt;p&gt;I suspect the next challenges in this area will be ensuring that the LLM is good at picking the right tool for the job (especially if you have many dozens of MCP servers, each with multiple commands), and providing adequate security mechanisms so that these tools (whether deliberately or not) don't cause significant damage (e.g. leaking sensitive data, spending all your money in the cloud, deleting all your stuff, etc).&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <category>Playwright</category>
  <category>GitHub Copilot</category>
  <category>MCP</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/4/10/mcp-playwright</guid>
  <pubDate>Thu, 10 Apr 2025 00:00:00 GMT</pubDate>
</item>
<item>
  <title>Refactoring and Optimizing Code with GitHub Copilot</title>
  <link>https://markheath.net/post/2025/4/1/refactor-optimize-code-github-copilot</link>
  <description>&lt;p&gt;I'm pleased to announce my latest Pluralsight course, &lt;a href="http://www.pluralsight.com/courses/refactor-optimize-code-github-copilot"&gt;Refactor and Optimize Code with GitHub Copilot&lt;/a&gt;, has just gone live. It will be part of a series of courses covering various aspects of using GitHub Copilot, and my course focuses specifically on using Copilot to refactor, modernize and optimize code.&lt;/p&gt;
&lt;p&gt;This is an important topic as many of us are spending a lot of time working on existing legacy codebases, and learning to use GenAI effectively with these projects is a great investment of your time.&lt;/p&gt;
&lt;p&gt;Of course, it’s a somewhat daunting topic to teach on because things are changing so rapidly, and best practices are still emerging. However, as I tried out different challenges, I was impressed with the breadth of scenarios in which LLMs can be of assistance in the coding environment.&lt;/p&gt;
&lt;p&gt;Let me quickly share a few of the key lessons I learned while preparing for this course and that I tried to teach during it.&lt;/p&gt;
&lt;h3 id="context-matters"&gt;Context matters&lt;/h3&gt;
&lt;p&gt;LLMs know a lot of stuff - they've read pretty much the entire internet. But they don't know about &lt;em&gt;your&lt;/em&gt; application and your goals unless you tell them. So make use of the ability to drag in additional files to the context window, or use &lt;a href="https://code.visualstudio.com/docs/copilot/workspace-context"&gt;@workspace&lt;/a&gt; or &lt;a href="https://code.visualstudio.com/docs/copilot/copilot-chat-context#_let-copilot-find-the-right-files-automatically"&gt;#codebase&lt;/a&gt;. And you can give it more than code - make use of &lt;a href="https://code.visualstudio.com/docs/copilot/copilot-customization"&gt;custom instructions&lt;/a&gt; or drag in additional documentation that's relevant.&lt;/p&gt;
&lt;p&gt;Try to think of Copilot like a new starter at your company. Even if they are very intelligent, they will need a lot of guidance to understand exactly how you want them to work, the big picture of what the application does, and the reasons behind the tasks you are giving them.&lt;/p&gt;
&lt;h3 id="review-and-verify"&gt;Review and verify!&lt;/h3&gt;
&lt;p&gt;Of course I hope it goes without saying that you should review and verify the changes that Copilot makes. It can be tempting to get lazy especially if Copilot is on a roll, getting things right more often than not. But remember it is &lt;em&gt;your&lt;/em&gt; code, and Copilot is your &lt;em&gt;assistant&lt;/em&gt;, not your replacement, so take responsibility for what you commit and ship.&lt;/p&gt;
&lt;p&gt;Copilot can and does make mistakes, so checkpoint your code frequently so you can roll back, and ensure that you have great unit test coverage (which of course &lt;a href="https://docs.github.com/en/copilot/using-github-copilot/guides-on-using-github-copilot/writing-tests-with-github-copilot"&gt;Copilot can help you generate&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id="experiment-with-prompts-and-models"&gt;Experiment with prompts and models&lt;/h3&gt;
&lt;p&gt;I love the fact that GitHub Copilot gives you access to a &lt;a href="https://docs.github.com/en/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-chat#ai-models-for-copilot-chat-1"&gt;variety of models from OpenAI, Anthropic and Google&lt;/a&gt;. They are unfortunately confusingly named, making it hard to know which one you're &amp;quot;supposed&amp;quot; to use, but I'd recommend simply experimenting. Switch between models frequently and see which ones give you the best results.&lt;/p&gt;
&lt;p&gt;Also, if you're not getting the results you want, consider whether it might be your prompt that is the problem. Don't simply give up after the first attempt. Instead look for ways to be clearer about what you want, and provide additional context to guide it better.&lt;/p&gt;
&lt;h3 id="using-copilot-for-code-review"&gt;Using Copilot for code review&lt;/h3&gt;
&lt;p&gt;You can (and should) ask Copilot to review your code, and it will really help if you are explicit about the kinds of thing you are looking for. Do you want suggestions for modernizing, or improving performance? Do you want it to suggest architectural patterns? Not every suggestion it comes back with will be worth acting on, but in my experience it can often come up with some really good ideas.&lt;/p&gt;
&lt;h3 id="using-copilot-for-planning"&gt;Using Copilot for planning&lt;/h3&gt;
&lt;p&gt;If your application uses a legacy technology and you want to modernize it, you might be tempted to see if Copilot can do the whole thing in one shot. But often you'll find that is too ambitious (although maybe that will change with &lt;a href="https://github.blog/news-insights/product-news/github-copilot-the-agent-awakens/"&gt;agents&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;A better approach is to ask Copilot to come up with a plan. This will give you step by step instructions, and can often include things that you might have overlooked. You can then use Copilot again to help you implement each step of the plan. I was quite impressed with the detail of some of the migration plans it came up with while I was preparing for the course.&lt;/p&gt;
&lt;h3 id="using-llms-for-characterization-tests"&gt;Using LLMs for characterization tests&lt;/h3&gt;
&lt;p&gt;The goal of refactoring is of course to improve the structure of the code, without modifying its functionality. Refactoring is risky because it can introduce bugs, so having thorough unit test coverage is really valuable.&lt;/p&gt;
&lt;p&gt;One great thing about Copilot is its willingness to generate tests. A &amp;quot;&lt;a href="https://en.wikipedia.org/wiki/Characterization_test"&gt;characterization test&lt;/a&gt;&amp;quot; is a test that simply discovers how the code currently behaves, and puts tests in place to ensure that it doesn't change. Copilot can generate these tests, and this gives you a very quick and easy way to roll back or fix a refactoring that introduces regressions.&lt;/p&gt;
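&lt;p&gt;As a quick illustration of the idea (the &lt;code&gt;PriceCalculator&lt;/code&gt; class and the expected value here are hypothetical), a characterization test in xUnit might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;[Fact]
public void CalculateTotal_MatchesCurrentBehaviour()
{
    // the expected value was captured by running the existing code,
    // not derived from a spec - it simply locks in today's behaviour
    var calculator = new PriceCalculator();
    var total = calculator.CalculateTotal(quantity: 3, unitPrice: 9.99m);
    Assert.Equal(29.97m, total);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If a refactoring breaks one of these tests, you know immediately that behaviour changed, even if nobody can say whether the original behaviour was &amp;quot;correct&amp;quot;.&lt;/p&gt;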
&lt;h3 id="using-llms-for-performance-improvements"&gt;Using LLMs for performance improvements&lt;/h3&gt;
&lt;p&gt;One of the topics I covered in the course was using GitHub Copilot for performance enhancements. You can of course just ask it to review your code and ask for performance-related suggestions, and sometimes it will come up with good ideas. But remember that it needs context - it won't automatically know which are the methods in your codebase that get called the most frequently, so supplying that information can help a lot.&lt;/p&gt;
&lt;p&gt;In one of my test scenarios, I wanted to validate the performance improvement by using &lt;a href="https://benchmarkdotnet.org/"&gt;BenchmarkDotNet&lt;/a&gt;. Copilot was very quickly able to generate me a new project for performance profiling, and even copied the old implementation into that project so I could get a side-by-side comparison of the improvement with minimal effort.&lt;/p&gt;
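&lt;p&gt;The side-by-side benchmark followed the usual BenchmarkDotNet shape, something like this sketch (the two joiner classes are illustrative names standing in for the old and new implementations):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-cs"&gt;[MemoryDiagnoser]
public class JoinBenchmarks
{
    private readonly string[] items =
        Enumerable.Range(0, 1000).Select(i =&amp;gt; i.ToString()).ToArray();

    [Benchmark(Baseline = true)]
    public string Legacy() =&amp;gt; LegacyJoiner.Join(items);

    [Benchmark]
    public string Optimized() =&amp;gt; OptimizedJoiner.Join(items);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Marking the old implementation as the &lt;code&gt;Baseline&lt;/code&gt; means the results table reports the new version's speed as a ratio, which makes the improvement easy to read at a glance.&lt;/p&gt;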
&lt;h3 id="summary"&gt;Summary&lt;/h3&gt;
&lt;p&gt;Although lots of GitHub Copilot demos focus on the amazing speed with which it can help you bootstrap a new application, that doesn't mean it can't help you on existing legacy codebases (which, let's be honest, form a large part of many of our daily jobs). If you're willing to put in a bit of time to &lt;a href="http://www.pluralsight.com/courses/refactor-optimize-code-github-copilot"&gt;learn how to use these AI tools effectively&lt;/a&gt;, you may just find that working on technical-debt-filled codebases isn't quite as painful as it used to be!&lt;/p&gt;
</description>
  <author>test@example.com</author>
  <category>AI</category>
  <category>C#</category>
  <category>GitHub Copilot</category>
  <category>Pluralsight</category>
  <guid isPermaLink="false">https://markheath.net/post/2025/4/1/refactor-optimize-code-github-copilot</guid>
  <pubDate>Tue, 01 Apr 2025 00:00:00 GMT</pubDate>
</item></channel>
</rss>