The Future of Tech Blogging in the Age of AI
I've been blogging on this site for almost 20 years now, and the majority of my posts are simple coding tutorials, where I share what I've learned as I explore various new technologies (my journey on this blog has taken me through Silverlight, WPF, IronPython, Mercurial, LINQ, F#, Azure, and much more).
My process has always been quite simple. First, I work through a technical challenge and eventually get something working. And then, I write some instructions for how to do it.
Benefits of tech blogging
There are many benefits to sharing your progress like this:
- The process of putting it into writing helps solidify what you learned
- Despite this, I still often forget how I achieved something, so my blog functions as a journal I can refer back to later
- You're supporting the wider developer community by sharing proven ways to get something working
- Thanks to "Cunningham's Law" ("the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."), your post may lead you to discover a better way to achieve the same goal, or a fatal flaw in your approach
- And it gradually builds your personal reputation and credibility as your readership grows (although you may find that your most popular post of all time is on the one topic you most certainly aren't an expert on!)
Are LLMs going to ruin it all?
But recently I've been wondering - are LLMs going to put an end to coding tutorial blogs like mine? Do they render it all pointless?
For starters, GitHub Copilot and Claude Code have already dramatically changed the way I go about exploring a new technique or technology. Instead of slogging through Bicep documentation, and endlessly debugging why my template didn't work, I now just ask the AI model to create one for me.
Refreshingly, I notice that it gets it wrong just as frequently as I do, but it doesn't get frustrated - it just keeps battling away until eventually it gets something working.
But now it feels like a hollow victory. Is there even any point writing a tutorial about it? If you can simply ask an agent to solve the problem, why would anyone need to read my tutorial? Are developers even going to bother visiting blogs like mine in the future?
And then there's the question of who writes the tutorial. Not only is the agent much quicker than me at solving the technical challenge, it's also significantly faster at writing the tutorial, and undeniably a better writer than me too. So maybe I should just let it write the article for me? But the internet is already full of AI-generated slop...
Should you let AI write your blog posts?
This is a deeply polarizing question. There are a number of possible options:
Level 1: Human only
You could insist on hand-writing everything yourself, with strictly no AI assistance. That's what you're reading right now (if you can't already tell from the decidedly mediocre writing style!)
This mirrors a big debate going on in the world of music production at the moment. If AI tools like Suno can generate an entire song from a single prompt that sounds far more polished than anything I've ever managed to produce, does that spell the end of real humans writing and recording songs? And should we fight against it, or just embrace it as the future?
I think tech tutorials do fall into a different category to music though. If I want to learn how to achieve X with technology Y, I just want clear, concise and correct instructions - and I'm not overly bothered whether it came 100% from a human mind or not.
Having said that, we've already identified a key benefit of writing your own tutorials: it helps solidify what you've learned. Doing your own writing will also improve your own powers of communication. For those reasons alone I have no intention of delegating all my blog writing to LLMs.
Level 2: Human writes, AI refines
On the other hand, it seems churlish to refuse to take advantage of LLMs for proofreading, fact checking, and stylistic improvements. When I recently posted about whether code quality still matters, this is exactly what I did. I wrote the post myself, and then asked Claude Code to help me refine it by critiquing my thoughts and providing counter-arguments.
To be honest, I ignored most of the feedback, but it undoubtedly improved the final article. This is the approach I've been taking with my Pluralsight course scripts - I first write the whole thing myself, and then ask an LLM to take me to task and tell me all the things I got wrong. (Although LLMs are still ridiculously sycophantic, and tell me it's the greatest thing they've ever read on the topic of lazy loading!)
Level 3: AI writes, human refines
But of course, my time is at a premium. A blog tutorial often takes me well over two hours to write. That's a big time investment for something that will likely barely be read by anyone.
And if all I'm producing is a tutorial, perhaps it would be better for me to get the LLM to do the leg-work of creating the structure and initial draft, and then I can edit afterwards, adapting the language to sound a bit more in my voice, and deleting some of the most egregious AI-speak.
That's exactly what I tried with a recent post on private endpoints. Claude Code not only created the Bicep and test application, but once it was done I got it to write up the instructions and even create a GitHub repo of sample code. The end result was far more thorough than I would have managed myself, and although I read the whole thing carefully and edited it a bit, I have to admit that most of the time I couldn't think of better ways to phrase each sentence, so a lot of it ended up unchanged.
That left a bad taste in my mouth to be honest. If I do that too often will I lose credibility and scare away readers? And yet I do feel like it was a genuinely valuable article that shows how to solve a problem that I'd been wanting to blog about for a long time.
Level 4: AI only
Of course, there is a level further, and now we are getting to the dark side. Could I ask Claude or ChatGPT to write me a blog post and publish it without even reading it myself? I could instruct it to mimic my writing style, and it might even do a good enough job to go unnoticed. Maybe at some point in the future, Claude will dethrone my most popular article with one it wrote entirely itself.
To be honest, I have no interest in doing that at all - it undermines the purpose of this blog, which is a way for me to share the things that I have learned. So I can assure you I have no intention of filling this site up with "slop" articles where the LLM has come up with the idea, written and tested the code, and published the article without me being involved at all.
But interestingly, this approach might make sense for back-filling the documentation for my open-source project NAudio. Over the years I've written close to one hundred tutorials but there are still major gaps in the documentation.
I'm thinking of experimenting with asking Claude Code to write a short tutorial for every public class in the NAudio repo, and to then check its work by following the tutorial and making sure it really works.
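The first step of that experiment - enumerating every public class so each one gets a tutorial prompt - could be sketched roughly like this. This is just an illustrative sketch, not how I'd necessarily do it for real: it uses a simple regex rather than a proper C# parser, and the prompt wording is purely hypothetical.

```python
import re
from pathlib import Path

# Rough heuristic for public class declarations in C# source files.
# A real C# parser would be more reliable, but this illustrates the idea.
PUBLIC_CLASS_RE = re.compile(
    r"\bpublic\s+(?:sealed\s+|abstract\s+|static\s+|partial\s+)*class\s+(\w+)"
)

def find_public_classes(repo_root):
    """Return sorted (class_name, file_path) pairs for public classes in the repo."""
    results = set()
    for cs_file in Path(repo_root).rglob("*.cs"):
        text = cs_file.read_text(encoding="utf-8", errors="ignore")
        for match in PUBLIC_CLASS_RE.finditer(text):
            results.add((match.group(1), str(cs_file)))
    return sorted(results)

def tutorial_prompt(class_name, file_path):
    """Build the (hypothetical) prompt that would be handed to the LLM for one class."""
    return (
        f"Write a short tutorial for the {class_name} class "
        f"(defined in {file_path}), including a complete working code sample."
    )
```

The output of `find_public_classes` would then drive a loop that asks the agent to write and verify one tutorial per class, which is exactly where the "check its work by following the tutorial" step comes in.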
I expect we're going to see an explosion of this approach too, and it could be a genuine positive for the open-source community, where documentation is often lacking and outdated. If LLMs are to make a positive contribution to the world of coding tutorials, this is probably one of the best ways they can be utilized.
Why tech blogging still matters
If you're still with me at this point, well done - I know I've gone on too long. Even humans can be as long-winded as LLMs sometimes. But the process of writing down my thoughts on this issue has helped me gain some clarity, and made me realise that it doesn't necessarily matter whether I take an AI-free, AI-assisted, or even an AI-first approach to my posts.
The value of sharing these coding tutorials is that the problems I'm solving are real-world problems. They are tasks that I genuinely needed to accomplish, and came with unique constraints and requirements specific to my circumstances. That gives them an authenticity that an AI can't fake. At best it can guess at what humans might want to achieve, and create tutorials about that.
So when I'm reading your tech blog (which I hope you'll share a link to), I won't really care whether or not you used ChatGPT to create the sample code, or to make you sound like a Pulitzer Prize winner. I'll be interested because you're sharing your experience of how you solved a problem using the tools at your disposal.