Thanks to the Azure Functions CLI, it’s possible to debug your Azure Functions running locally, which is a great way to troubleshoot your functions before sending them live.

But did you know it’s also possible to debug them remotely? This works because Azure App Service, which Azure Functions is built on top of, already has remote debugging support built-in.

Setting Up

First of all, there are a few things you’ll want installed. You need Visual Studio 2015 Update 3 (currently it seems VS 2017 is not supported, but I suspect that’s only a matter of time). Also install the Visual Studio Tools for Azure Functions (I’m not 100% sure this is necessary, but it’s useful anyway if you’re going to be using Azure Functions with Visual Studio), and make sure you have the Cloud Explorer extension installed (I think you get this automatically when you install the Azure SDK for VS).

Finally, there are a couple of debug settings you need to change. In the Debug | Options dialog, deselect “Enable Just My Code” and “Require source files to exactly match the original version”.

Attaching the Debugger

Now in the Cloud Explorer window, navigate to the function app that you want to debug, and in the Files node, find the source code for the function you want to debug. Please note that at the moment, I understand that you can only remote debug C# functions. Hopefully this will change in the near future. I haven’t tried remote debugging other languages myself.

Now double-click on the run.csx file to download it from Azure and set your breakpoints.

Now we need to attach the debugger. This is done in the Cloud Explorer by right-clicking on the app service and selecting “Attach Debugger”.

This will take a while, and will open a web page that you don’t need (the home page for your app service), but the important thing is that the debug symbols get loaded. You should see a series of dialogs indicating that symbols are loading.

If all is well, the breakpoint you set will appear as a solid red circle indicating that it can be hit. If not (which sadly seems to happen quite regularly), I have found that stopping and restarting the app service before attaching the debugger usually helps.

Debugging your Function

Finally, you need to trigger your function, for example with an HTTP request or by posting a message to a queue. When the breakpoint is hit, you can do all the usual things you’d expect in the debugger, including hovering over variables to see their values, stepping through the code, and changing the values of variables in the Immediate window.

The whole remote debugging experience for functions is a little on the flaky side at the moment. Hopefully as the tooling matures it will become more reliable. In many cases you shouldn’t need to use this at all, but it’s nice to know that, should the need arise, you can still remotely attach to the servers your “serverless” code is running on and find out exactly what is going on.

A while ago I blogged about how you can use Azure Functions to handle payment webhooks from a third-party payment provider. In that example I showed how I could receive a webhook from Paddle for sales of my Skype Voice Changer app, and post a message to a queue.

But that was just the first step in an order processing pipeline. The next step was for that message to cause a license file to get created and stored in blob storage. And then the step after that was to email that license out to the customer. And that can be done quite easily with the Azure Functions SendGrid binding.

Here’s how I set up my license file emailing Azure function.

First of all, every Azure Function needs a trigger. My function was going to be triggered by a license file appearing in blob storage.

So in my function.json file, the following blobTrigger binding was set up, watching the licenses container in my blob storage account (whose connection string can be found in the App Setting named “MyStorageAccount”) for files with the .lic extension.

{
  "name": "myBlob",
  "type": "blobTrigger",
  "direction": "in",
  "path": "licenses/{filename}.lic",
  "connection": "MyStorageAccount"
},

My function also needed to be able to send emails, so I added a SendGrid output binding. This binding has lots of configurable properties, but I chose only to specify the SendGrid API key. Note that the key itself doesn’t go in the function.json file. Instead you put the name of an App Setting that contains the key, just as you do with connection strings. This helps keep secrets out of source control. To get your own SendGrid API key you’ll need to sign up for a SendGrid account, and then you can create a key from your account settings page on their site.

{
  "type": "sendGrid",
  "name": "message",
  "apiKey": "SendGridApiKey",
  "direction": "out"
},
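Not shown in the post, but for completeness: when running locally with the Azure Functions CLI, the two App Settings referenced by these bindings need corresponding entries in your local settings file. A sketch of what that might look like (the exact file name has varied between CLI versions, and both values here are just placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "MyStorageAccount": "<storage account connection string>",
    "SendGridApiKey": "<your SendGrid API key>"
  }
}
```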

My function also needed to know who to send the email to. That information was available in Table Storage, as I’d updated my webhook processing function to also write details of each incoming order into an Azure Table Storage table called orders. So I used a Table Storage input binding to automatically look up the matching row in the table for the license file that had triggered the function. I could do that because the license file name was simply the order number (e.g. 1247518.lic), so I could use {filename} as the rowKey for the table storage binding:

{
  "type": "table",
  "name": "ordersRow",
  "tableName": "orders",
  "partitionKey": "Orders",
  "rowKey": "{filename}",
  "take": 50,
  "connection": "MyStorageAccount",
  "direction": "in"
}

With these bindings in place, I could define my function with a string parameter to take the contents of the license file in blob storage (which was a text file), an Order parameter to contain the matching row in Table Storage, and an out Mail parameter which can be used to send the SendGrid mail. You can also see that we need to reference the SendGrid assembly with a #r statement, and open the SendGrid.Helpers.Mail namespace.

#r "SendGrid"

using SendGrid.Helpers.Mail;

public class Order
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    
    public string OrderId { get; set;}
    public string ProductId { get; set;}
    public string Email { get; set;}
    public decimal Price { get; set; }
}

public static void Run(string myBlob, string filename, 
    Order ordersRow, TraceWriter log, out Mail message)
{
   ...
}

Now we’re ready to send the email. We create a Mail object, and use the Personalization class to specify who it is sent to. Adding an attachment requires creating an Attachment object and using Base64 encoding to add the contents of the license file. Then we use the Content class to set the content of the message, which can be HTML if we want. And finally we need to specify a subject and sender. Many of these settings can also be set up in the binding if you don’t want to specify them in code every time.

var email = ordersRow.Email;
log.Info($"Got order from {email}\n License file Name: {filename}");

message = new Mail();
var personalization = new Personalization();
personalization.AddTo(new Email(email));
message.AddPersonalization(personalization);

Attachment attachment = new Attachment();
var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(myBlob);
attachment.Content =  System.Convert.ToBase64String(plainTextBytes);
attachment.Type = "text/plain";
attachment.Filename = "license.lic";
attachment.Disposition = "attachment";
attachment.ContentId = "License File";
message.AddAttachment(attachment);

var messageContent = new Content("text/html", "Your license file is attached");
message.AddContent(messageContent);
message.Subject = "Thanks for your order";
message.From = new Email("mark@soundcode.org");

And that’s all there is to it. A very easy way to send emails, and the table storage binding makes it trivial to look up the related order information for the license we want to send.

Once again Azure Functions makes it trivially easy to set up backend processing tasks like this for your applications.

It’s been way too long since I last released a version of NAudio, but finally just before the end of last year I managed to release NAudio 1.8.0 which contains lots of new features and bugfixes. It’s available on NuGet which is the best way to get hold of it.

The release notes on GitHub contain a fairly detailed breakdown of what’s new so I won’t repeat that all here. There are a few Windows 10 / UWP related changes, but I’m still personally preferring AudioGraph as my first choice if writing audio apps on UWP, so NAudio remains primarily focused on regular (classic?) Windows application development.

I’ve added several extension methods for ISampleProvider such as FollowedBy, Skip, ToMono, Take and ToStereo, which allows a more fluent interface style of programming.
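To illustrate the fluent style these extension methods enable, here’s a toy sketch of the underlying pattern. This is not NAudio’s actual implementation (the real ISampleProvider reads into buffers rather than returning arrays); the simplified interface and names here are stand-ins purely to show how extension methods compose into a chain:

```csharp
using System;
using System.Linq;

// Toy stand-in for ISampleProvider: just yields all its samples at once.
public interface IToySampleProvider { float[] Read(); }

public class ArrayProvider : IToySampleProvider
{
    private readonly float[] samples;
    public ArrayProvider(float[] s) { samples = s; }
    public float[] Read() => samples;
}

public static class ToyExtensions
{
    // Concatenate two providers, in the spirit of FollowedBy.
    public static IToySampleProvider FollowedBy(this IToySampleProvider a, IToySampleProvider b)
        => new ArrayProvider(a.Read().Concat(b.Read()).ToArray());

    // Drop the first n samples, in the spirit of Skip.
    public static IToySampleProvider Skip(this IToySampleProvider a, int n)
        => new ArrayProvider(a.Read().Skip(n).ToArray());
}

public class Demo
{
    public static void Main()
    {
        var first = new ArrayProvider(new float[] { 1f, 2f });
        var second = new ArrayProvider(new float[] { 3f, 4f });
        // Fluent chaining, in the style of NAudio's ISampleProvider extensions
        var result = first.FollowedBy(second).Skip(1).Read();
        Console.WriteLine(string.Join(",", result)); // 2,3,4
    }
}
```

The point is simply that because each extension method takes and returns the same interface, operations can be chained left to right, which reads much more naturally than nesting constructor calls.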

There have been a number of great community contributions. One which many people have wanted for a long time is a version of MediaFoundationReader that supports a Stream input, and this is now available with StreamMediaFoundationReader.

As I said in my end of year review, I don’t have the same amount of time these days to progress the NAudio project, so it is mostly getting bugfixes and minor new features, but I do try to stay on top of answering questions on GitHub and StackOverflow, and it’s great to see that the library continues to prove useful to many people.

Thanks to everyone who has contributed and supported the development of NAudio so far.

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.

It’s that time of year again where I reflect on what I’ve accomplished in the last year and what I want to prioritise in the next.

Launched LINQ and UWP Pluralsight Courses

This year I achieved the milestone of having 10 published Pluralsight courses after releasing two new courses. The first of these was on More Effective LINQ, a topic I love teaching about, as I’m a huge fan of LINQ.

The second is a much more niche topic – I looked in depth at the new AudioGraph API on the Universal Windows Platform. It’s great to see that finally Windows has a nice easy to use audio programming API. Whether the whole UWP platform really takes off remains to be seen, but I enjoyed the chance to get familiar with it as part of the preparation for this course.

Going Deeper with Azure

As I said in last year’s review, I am now a “cloud architect”, and you may have noticed a lot more blogging about Azure in the last year. I certainly still have a lot to learn about building cloud deployed systems that are secure and resilient, but after a pretty intense year of learning, I don’t feel quite so out of my depth any more.

The product I’ve been building over the last year is now live and in use by our first customer, and hopefully several more by the end of the year. You can expect plenty more Azure-related blogging in the coming year.

Language Learning

I’m also starting to feel a lot more confident using F#, which has become one of my most blogged-about topics. I even dared to submit an entry to this year’s F# advent calendar. There are still some parts of F# I need to get more confident with, such as type providers and asynchronous coding, so hopefully I’ll make further progress next year. I’ve really been enjoying the early releases of Isaac Abraham’s Learn F# book recently.

Also, in a non-programming related goal, I managed to complete my Duolingo French tree after maintaining my “streak” for the entire year. I can even just about follow French dev podcasts like Visual Studio Talk Show now.

Abandoning Windows Phone

More by coincidence than choice, my first ever smartphone was a Windows Phone, and I’ve faithfully stuck with the much maligned OS for several years. But this Christmas I needed a new phone, and there was literally no option available in the price range I was looking to spend (£150-200). So I took the plunge and got my first Android phone, and finally it means I can get to use many of the apps that I was previously locked out of. Who knows, it may be the catalyst for me to finally learn some Xamarin, which I have mostly ignored until now.

Retiring Skype Voice Changer

My Creating and Selling a Digital Product Pluralsight course tells the story of how I created Skype Voice Changer and sold it online. It was never a runaway success, partly because a change to how Skype works meant that over the last year, fewer and fewer people are able to successfully use it. I finally took the decision to retire the product by withdrawing it from sale a couple of months ago, having made a few hundred sales over the course of two years.

Although it’s disappointing to have to shut this down (especially since traffic to the site continues to grow), it was a good learning experience, and I also think that supporting users with problems had become a time-consuming distraction from other things more deserving of my time. So I think it’s good that I’m finally done with this project.

Maintaining NAudio

Although audio programming and NAudio in particular have been something of a specialist subject for me, I have had to accept that audio programming is no longer a significant part of my day job. So although I’ve worked hard this year to answer hundreds of questions, accept pull requests (another 16 this year) and make improvements (I made 40 commits this year), the truth is that the project is really in maintenance mode at the moment. I did however release a long overdue new version (1.8) just before the year end which I intend to blog about soon.

What’s on the Radar for 2017?

I’ve currently got a new Pluralsight course in the works (hoping to announce something in the not too distant future), and I expect my focus will be increasingly on cloud related topics such as Azure or serverless architecture in the next year.

In terms of new technologies I want to learn, Docker stands out as the one that continues to gain momentum and it looks like it is well on the way to becoming a solid option for Windows and .NET developers.

So finally, let me say Happy New Year to you, and if any of you happen to be at the NDC conference in London in January, I’ll be there on the Friday (and at PubConf afterwards) so do say hello.

So we came to the end of the Advent of Code puzzles, and I was hoping the final one would be a quick one for Christmas Day, but sadly that proved not to be the case.

The problem built on the assembly language of day 23, so much of the code could be reused, but there was a new “out” instruction that emitted a signal, and the hard-coded loop optimisation I created for day 23 needed to be re-created for today’s input. So the code as I had it was clearly not in the best state to be reused.

A bigger issue was the challenge itself. We had to find the lowest starting value for the “a” register that caused the program to output a sequence 0,1,0,1 infinitely. This obviously raised some performance issues – how many values of “a” will we need to test and how long will it take to test each one? How will we know if the program is going to emit 0,1,0,1 infinitely?

So my first inclination was to try to manually decompile the program into C#. Commands like inc, dec and cpy were obviously easy to turn into assignment statements, but the jnz commands were harder to get my head around. They turned into if or break statements when jumping forwards, and while loops to let us jump backwards.
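As a concrete illustration of that last mapping (using a made-up snippet, not my actual puzzle input), a jnz with a negative offset is just a loop whose condition is the jnz test:

```csharp
using System;

class JnzDemo
{
    static void Main()
    {
        // Hypothetical assembunny source:
        //   cpy 5 a
        //   inc b      <-+  loop body
        //   dec a        |
        //   jnz a -2   --+  jump back 2 instructions while a is non-zero
        //
        // Decompiled: the backwards jnz becomes a while loop.
        int a = 5, b = 0;
        while (a != 0)
        {
            b++;
            a--;
        }
        Console.WriteLine(b); // prints 5
    }
}
```

A forwards jnz works the other way round: it skips over a block of instructions, which is why it turns into an if (or a break when it jumps out of an enclosing loop).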

The idea was that by decompiling into a language I am more familiar with, I would understand what the program did and what input would be required to generate the desired output sequence. However, at some point in the decompilation process I realised I’d made a mistake, so I abandoned the approach. Later however, I discovered that some people on the reddit group had indeed successfully decompiled their program, and it’s worth looking at their solutions.

So I fell back to testing every possible value of ‘a’. I decided that if I got a sequence of length 20, that would probably be good enough. 

I started by making the refactoring I should have done on day 23, moving the program, the instruction index, and now the list of values emitted by the out command into the state object along with the registers. This gives us a more functional solution, instead of mutating the program with the toggle command.

type State = { Registers:Map<string,int>; Program:Instruction[]; Index:int; Output:int list}

Now to be honest I still didn’t implement all the refactoring this problem deserved. When I found a solution, I just used failwith to break out and relied on the fact that my console output would contain the last value of register ‘a’ that I’d tried. So my ugly implementation of the new out instruction looks like this:

let out source state =
    let emit = (read state.Registers source)
    let expect = match state.Output with
                    | [] -> 0
                    | head :: _ -> if head = 0 then 1 else 0
    let next = if emit=expect then state.Index + 1 else 1001
    if List.length state.Output = 20 then
        printfn "SUCCESS"
        failwith "get out of here" 
    else                
        { state with Output = emit :: state.Output; Index = next }

The apply function’s signature becomes simpler as a result of our State type. It just takes the input state and returns the output state. And it includes a hard-coded optimization of a multiplication loop just like we had on day 23.

let apply (st:State) : State = 
    if st.Index = 3 then // multiplication loops
        { st with 
            Registers= (st.Registers.Add("d",st.Registers.["d"] + (abs st.Registers.["c"]) * (abs st.Registers.["b"]))); 
            Index = st.Index + 5 }
    else
        match st.Program.[st.Index] with
        | Copy (source, register) -> { st with Registers = copy source register st.Registers; Index =st.Index + 1 } 
        | Inc register -> { st with Registers = inc register st.Registers; Index = st.Index + 1 }
        | Dec register -> { st with Registers = dec register st.Registers; Index = st.Index + 1 }
        | Jnz (source,offset) -> { st with Index = st.Index + jnz source offset st.Registers }
        | Toggle source -> failwith "not supported" // toggle st state.Registers prog index
        | Out source -> out source st

Now we could just try to solve with an incrementing start value for register a:

for a in [0..10000] do
    printfn "Trying %d" a
    let x = solve { Registers = startState.Add("a",a); Program=program; Index = 0; Output = [] } 
    printfn "Fail"

This got me my answer pretty quickly, and with hindsight I wish I’d started with this approach rather than going for decompilation. I did at least get my two stars for day 25 before the end of the day.

But the code I ended up with to solve day 25 is a horrible mess. It needs a real clean-up that I don’t have time for, so I’m hoping I won’t end up needing to reuse this code for Advent of Code 2017! This is a good example of the “technical debt” problem I created a Pluralsight course about. We have code that does actually work, but is in a far from ideal state. Should we spend time refactoring it and improving its design? If we don’t, then next time we need to work on the code we will find ourselves struggling and going much slower. But if we do, we’ve potentially wasted our time if that code turns out not to need any future modifications. It takes good judgment to know when you need to keep refactoring, and when it’s OK to just lay your code to one side and move on to the next task.