AI Coding - What is it good for?
If you're like me, 2024 was the year in which using LLMs for code generation went from a curiosity I occasionally tried when all other avenues had failed, to an essential part of my day-to-day development work.
Availability and pricing
When the year began, access to these tools was mostly limited to paying customers. But with the majority of LLM providers now offering free tiers that are perfectly capable of generating code, and with tools like GitHub Copilot (which now also has a free tier) integrating directly with your IDE, it seems inevitable that in 2025 they will become an integral part of how most developers write code.
Trust issues
Another barrier to adoption was whether it is safe to use the code these tools generate. How can I trust that the model wasn't trained on code whose license is incompatible with my project? How can I be sure that there aren't subtle security vulnerabilities? And assuming the code compiles, does it actually work correctly?
It feels like some good progress has been made on these fronts this year. Tools like GitHub Copilot can scan the generated code to ensure it complies with open source licenses and avoids infringing on copyrights. And many companies are already integrating vulnerability scanning tools directly into their development process.
As for the correctness of the generated code, many developers I spoke to at the start of the year said they intended to manually review all LLM-generated code to scrutinize it for errors. However, the speed and simplicity of code generation means we are already seeing people skip this step and jump straight to compiling and running the code.
Learn to use it effectively
There are of course still many coding tasks that LLMs are not good at. Either they fail to understand the requirements, they are unaware of the intricacies of the APIs needed to fulfil the task correctly, or you've simply asked for something too complicated for a single prompt.
I think one of the most valuable skills for developers to learn in 2025 will be how to prompt LLMs more effectively to help solve coding tasks. It's essential to have a good grasp not only of what they are capable of, but of how to guide them to do exactly what you want. This often means going back several times and asking the LLM to fix its previous answer, rather than simply giving up if the initially generated code doesn't work.
I'll briefly share a few simple examples of the places I've found LLMs have accelerated my development in the last year.
Help with a second language
C# is my primary coding language, and so I often don't feel the need to ask LLMs for help with that. But I often find myself needing to use various scripting languages that I am less fluent in.
For example, I recently needed to write a PowerShell script that took an MP4 file, ran it through FFMPEG to convert it to a different format, and then calculated the SHA256 hash of the output file. It's not super-complicated, but it would normally require a bunch of web searches to remember the various bits of PowerShell syntax, FFMPEG command-line flags, and hashing cmdlets needed to complete the task. ChatGPT was able to generate the script correctly on the first attempt.
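For illustration, a minimal sketch of that kind of script might look like this (the file names and target format are placeholders, and it assumes ffmpeg is on your PATH):

```powershell
# Convert an MP4 to another format with ffmpeg, then hash the result
$inputFile  = "input.mp4"
$outputFile = "output.webm"   # ffmpeg infers the target format from the extension

ffmpeg -i $inputFile $outputFile

# Get-FileHash defaults to SHA256, but being explicit doesn't hurt
$hash = Get-FileHash -Path $outputFile -Algorithm SHA256
Write-Output $hash.Hash
```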
Another example is SQL. I can write simple queries, but if I want a quick script to check whether a row exists and create it if it doesn't, LLMs can write it more quickly than I can. It recently surprised me by managing to perfectly generate the SQL to create all of the ASP.NET Core Identity tables.
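To give a flavour of that exists-then-create pattern, here's a sketch that runs the check from PowerShell via the SqlServer module; the server, database, table, and column names are all hypothetical:

```powershell
# Insert a row only if it doesn't already exist
$query = @"
IF NOT EXISTS (SELECT 1 FROM dbo.Settings WHERE [Key] = 'RetentionDays')
BEGIN
    INSERT INTO dbo.Settings ([Key], [Value]) VALUES ('RetentionDays', '30');
END
"@

# Requires the SqlServer module (Install-Module SqlServer)
Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyAppDb" -Query $query
```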
Automate everything
I'm also often spinning up resources in Azure for test purposes. Although I'm perfectly capable of using the Azure CLI to script the creation of App Service plans, SQL databases, and so on, writing those scripts by hand still takes longer than clicking through the portal. But now I can ask an LLM to generate an Azure CLI script that creates the resources I need in a fraction of the time. I'm hoping this means I can start to automate almost everything, rather than lazily doing some things manually.
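As a rough sketch of the kind of script I mean (the resource names, SKU, and location are placeholders, and it assumes you've already run az login):

```powershell
# Create a resource group, an App Service plan, a web app, and a SQL database
$rg       = "rg-demo-test"
$location = "westeurope"

az group create --name $rg --location $location
az appservice plan create --name "plan-demo" --resource-group $rg --sku B1
az webapp create --name "web-demo-12345" --resource-group $rg --plan "plan-demo"

az sql server create --name "sql-demo-12345" --resource-group $rg `
    --location $location --admin-user "sqladmin" --admin-password "<strong-password>"
az sql db create --name "db-demo" --resource-group $rg `
    --server "sql-demo-12345" --service-objective Basic
```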
Small, focused tasks
One of the most crucial skills a software developer (or architect) needs is the ability to "divide and conquer". That is, to take a large problem and break it up into manageable chunks that can be solved individually. It turns out that this skill is also very useful for getting good results from LLMs. For example, I might first ask an LLM to write the code that connects to an Azure Blob Storage account using managed identities, and then make a separate request to implement a chunked upload of a large file.
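Here's a sketch of what those two sub-tasks might produce, staying in PowerShell for consistency (it assumes the Az.Accounts and Az.Storage modules, and the account, container, and file names are placeholders):

```powershell
# Sub-task 1: authenticate with the machine's managed identity and
# build a storage context for the target account
Connect-AzAccount -Identity
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount

# Sub-task 2: upload a large file. Set-AzStorageBlobContent splits large
# files into blocks behind the scenes, covering the chunked-upload requirement
Set-AzStorageBlobContent -File ".\large-backup.bin" -Container "uploads" `
    -Blob "large-backup.bin" -Context $ctx
```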
I've found that by breaking things up into smaller tasks, the generated code has a higher chance of being right first time, and if it isn't, it's a lot easier to fix up with follow-on requests.
Code generation in 2025
It should be obvious to anyone paying attention that LLM-generated code is not a fad. We're seeing rapid improvements not only in the quality of the generated code, but also in its integration with tooling. And if you throw "agents" into the mix, capable of autonomously tackling much larger and more complex projects, this coming year could see even more dramatic shifts in the way we approach software development.
Let me know in the comments the most impressive thing you've used LLMs for this year, and how you hope to use them in the coming year. Have you paid for any of the AI coding tools yet?