I'm pleased to announce my latest Pluralsight course, Refactor and Optimize Code with GitHub Copilot, has just gone live. It will be part of a series of courses covering various aspects of using GitHub Copilot, and my course focuses specifically on using Copilot to refactor, modernize and optimize code.

This is an important topic: many of us spend a lot of our time working in legacy codebases, and learning to use GenAI effectively on these projects is a great investment of your time.

Of course, it's a somewhat daunting topic to teach, because things are changing so rapidly and best practices are still emerging. However, as I worked through different challenges, I was impressed by the breadth of coding scenarios in which LLMs can assist.

Let me quickly share a few of the key lessons I learned while preparing this course, and which I tried to teach in it.

Context matters

LLMs know a lot of stuff - they've read pretty much the entire internet. But they don't know about your application or your goals unless you tell them. So make use of the ability to drag additional files into the context window, or use @workspace or #codebase. And you can give Copilot more than code - make use of custom instructions, or drag in any additional documentation that's relevant.
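
As an illustration, here's a minimal sketch of a repository custom instructions file (Copilot picks up .github/copilot-instructions.md from the root of your repo; the project details below are entirely hypothetical):

```markdown
# Instructions for GitHub Copilot

This repository contains an order-processing system built on .NET 8 and ASP.NET Core.

- Prefer modern C# 12 language features over legacy patterns.
- All database access goes through the repository classes in src/Data.
- Write unit tests with xUnit.
- Follow the naming conventions already used in the codebase.
```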

Try to think of Copilot like a new starter at your company. Even if they are very intelligent, they will need a lot of guidance to understand exactly how you want them to work, the big picture of what the application does, and the reasons behind the tasks you are giving them.

Review and verify!

Of course, I hope it goes without saying that you should review and verify the changes that Copilot makes. It can be tempting to get lazy, especially when Copilot is on a roll and getting things right more often than not. But remember: it is your code, and Copilot is your assistant, not your replacement, so take responsibility for what you commit and ship.

Copilot can and does make mistakes, so checkpoint your code frequently so you can roll back, and ensure that you have great unit test coverage (which of course Copilot can help you generate).

Experiment with prompts and models

I love the fact that GitHub Copilot gives you access to a variety of models from OpenAI, Anthropic and Google. Unfortunately they are confusingly named, making it hard to know which one you're "supposed" to use, but I'd recommend simply experimenting. Switch between models frequently and see which ones give you the best results.

Also, if you're not getting the results you want, consider whether it might be your prompt that is the problem. Don't simply give up after the first attempt. Instead look for ways to be clearer about what you want, and provide additional context to guide it better.

Using Copilot for code review

You can (and should) ask Copilot to review your code, and it really helps if you are explicit about the kinds of things you are looking for. Do you want suggestions for modernizing the code, or for improving performance? Do you want it to suggest architectural patterns? Not every suggestion it comes back with will be worth acting on, but in my experience it can often come up with some really good ideas.
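
For example, being explicit might look something like this (the file name and the goals listed are just placeholders):

```
Review OrderService.cs and suggest improvements. Focus on:
1. Modernizing the code to idiomatic C# 12 / .NET 8
2. Performance of the methods that process large order batches
3. Anything that would make the class hard to unit test
Don't suggest purely stylistic changes.
```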

Using Copilot for planning

If your application uses a legacy technology and you want to modernize it, you might be tempted to see if Copilot can do the whole thing in one shot. But often you'll find that is too ambitious (although maybe that will change with agents).

A better approach is to ask Copilot to come up with a plan. This will give you step-by-step instructions, and will often include things that you might have overlooked. You can then use Copilot again to help you implement each step of the plan. I was quite impressed by the detail of some of the migration plans it came up with while I was preparing the course.
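
As an illustration, a planning prompt might look something like this (the technologies named here are just an example):

```
This solution is an ASP.NET MVC 5 application targeting .NET Framework 4.8.
Create a step-by-step plan for migrating it to ASP.NET Core on .NET 8.
List the steps in the order they should be tackled, call out any breaking
changes to expect, and suggest how to verify each step before moving on.
```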

Using LLMs for characterization tests

The goal of refactoring is of course to improve the structure of the code without modifying its functionality. Refactoring is risky because it can introduce bugs, so having thorough unit test coverage is really valuable.

One great thing about Copilot is how readily it will generate tests. A "characterization test" simply captures how the code currently behaves, putting tests in place to ensure that behavior doesn't change. Copilot can generate these tests for you, giving you a very quick way to catch a refactoring that introduces regressions, so you can roll it back or fix it.
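
To make that concrete, here's a minimal sketch of what such a test might look like, assuming xUnit; the ShippingCalculator here is a hypothetical stand-in for real legacy code:

```csharp
using Xunit;

// Hypothetical legacy class standing in for the real code under test.
public class ShippingCalculator
{
    public double CalculateCost(double weightKg, string country)
    {
        var baseRate = country == "US" ? 12.99 : 1.99;
        return baseRate + weightKg * 0.5 + (weightKg > 5 ? 2.0 : 0.0);
    }
}

public class ShippingCalculatorCharacterizationTests
{
    // Expected values were captured by running the existing code, not
    // derived from a spec: the aim is to pin down current behavior so
    // that a refactoring which changes it makes a test fail.
    [Theory]
    [InlineData(0.5, "UK", 2.24)]
    [InlineData(10.0, "UK", 8.99)]
    [InlineData(10.0, "US", 19.99)]
    public void CalculateCost_MatchesCurrentBehavior(
        double weightKg, string country, double expected)
    {
        var calculator = new ShippingCalculator();
        var actual = calculator.CalculateCost(weightKg, country);
        Assert.Equal(expected, actual, precision: 2);
    }
}
```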

Using LLMs for performance improvements

One of the topics I covered in the course was using GitHub Copilot for performance enhancements. You can of course just ask it to review your code and make performance-related suggestions, and sometimes it will come up with good ideas. But remember that it needs context - it won't automatically know which methods in your codebase are called most frequently, so supplying that information can help a lot.

In one of my test scenarios, I wanted to validate a performance improvement using BenchmarkDotNet. Copilot very quickly generated a new project for performance profiling, and even copied the old implementation into that project so I could get a side-by-side comparison of the improvement with minimal effort.
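
As a sketch of the shape this takes (the CSV-building code here is a stand-in, not the actual example from the course), a side-by-side BenchmarkDotNet comparison looks something like this:

```csharp
using System.Linq;
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class CsvBuilderBenchmarks
{
    private readonly string[] values =
        Enumerable.Range(0, 1000).Select(i => i.ToString()).ToArray();

    // The old implementation, copied into the benchmark project so the
    // two versions can be measured side by side.
    [Benchmark(Baseline = true)]
    public string OldImplementation()
    {
        var result = "";
        foreach (var value in values)
        {
            result += value + ",";
        }
        return result;
    }

    // The optimized version, using StringBuilder to avoid repeated
    // string allocations in the loop.
    [Benchmark]
    public string NewImplementation()
    {
        var sb = new StringBuilder();
        foreach (var value in values)
        {
            sb.Append(value).Append(',');
        }
        return sb.ToString();
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<CsvBuilderBenchmarks>();
}
```

Running this in Release mode produces a results table, and the Baseline attribute adds a ratio column so the size of the improvement is easy to read off.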

Summary

Although lots of GitHub Copilot demos focus on the amazing speed with which it can help you bootstrap a new application, that doesn't mean it can't help you with existing legacy codebases (which, let's be honest, form a large part of many of our daily jobs). If you're willing to put in a bit of time to learn how to use these AI tools effectively, you may just find that working on codebases full of technical debt isn't quite as painful as it used to be!