In my last post, I talked about handling the FileCompleted event from an AudioFileInputNode in a UWP application. Since this event is raised from a background thread, it’s important to get back onto the Dispatcher thread before making any updates to the overall AudioGraph, or changing the UI.

In UWP, this can be done by calling RunAsync on the Dispatcher. (By the way, if you're in a ViewModel, you can get at the Dispatcher via the rather cumbersome CoreApplication.MainView.CoreWindow.Dispatcher static property.)

private async void FileInputNodeOnFileCompleted(AudioFileInputNode sender, object args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        // code here runs on dispatcher thread
    });
}

Now this works fine, but there is a nice alternative you can use if you’re working with Reactive Extensions. Rx makes it really easy for us to say that we want to observe an event on the Dispatcher thread. We do of course need to turn the event into an Observable first, which can be done with a call to Observable.FromEventPattern (which admittedly is a little bit cumbersome itself), but then we can say we want to observe it on the current SynchronizationContext (which is the Dispatcher thread in UWP).

Here’s a simple example:

Observable.FromEventPattern&lt;EventHandler, EventArgs&gt;(
        h => audioEngine.FileCompleted += h,
        h => audioEngine.FileCompleted -= h)
    .ObserveOn(SynchronizationContext.Current)
    .Subscribe(f => OnPlaybackCompleted());

Now our event handler method OnPlaybackCompleted will run on the correct thread. On its own this may not seem a great improvement over using Dispatcher.RunAsync, but now that the event is an Observable, we can also make use of all the powerful composability that Rx brings.
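For instance, suppose we only wanted to react to the first completion, and guard against bursts of events. Rx operators compose straight onto the same pipeline. This is just a sketch using the same assumed audioEngine object and FileCompleted event as above:

```csharp
// Sketch only: Throttle ignores rapid-fire repeats within one second,
// and Take(1) auto-unsubscribes after the first event comes through.
// The handler still runs on the dispatcher thread via ObserveOn.
Observable.FromEventPattern<EventHandler, EventArgs>(
        h => audioEngine.FileCompleted += h,
        h => audioEngine.FileCompleted -= h)
    .Throttle(TimeSpan.FromSeconds(1))
    .ObserveOn(SynchronizationContext.Current)
    .Take(1)
    .Subscribe(f => OnPlaybackCompleted());
```

The nice thing is that each concern (debouncing, unsubscription, thread marshalling) is a single chained call rather than hand-written state in the event handler.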


I’ve been playing with the new UWP AudioGraph API recently and one of the things I wanted to try was fire and forget audio playback (for fire and forget with NAudio see my post here). This is where you want to play lots of individual pre-recorded sounds, such as in a game, and just want to trigger the beginning of playback in a single line of code and have cleanup handled for you automatically.

Let’s start by creating our AudioGraph, and an AudioDeviceOutputNode, and then we’ll actually start the graph, despite not having anything to play yet. The audio graph is quite happy to play silence.

private AudioGraph audioGraph;
private AudioDeviceOutputNode outputNode;

public MainPage()
{
    this.InitializeComponent();
    this.Loaded += OnLoaded;
}

private async void OnLoaded(object sender, RoutedEventArgs e)
{
    var result = await AudioGraph.CreateAsync(new AudioGraphSettings(AudioRenderCategory.Media));
    if (result.Status != AudioGraphCreationStatus.Success) return;
    audioGraph = result.Graph;
    var outputResult = await audioGraph.CreateDeviceOutputNodeAsync();
    if (outputResult.Status != AudioDeviceNodeCreationStatus.Success) return;
    outputNode = outputResult.DeviceOutputNode;
    audioGraph.Start();
}

Now we’ll create a helper method that will load a sound file bundled with the application, create an AudioFileInputNode from it, connect it to the output device node so it starts playing, and then subscribe to its FileCompleted event. In the FileCompleted handler, we can remove the AudioFileInputNode from the graph and dispose of it. Note that we have to do this on the UI thread, which we can ensure by using Dispatcher.RunAsync.

private async Task PlaySound(string file)
{
    var soundFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri($"ms-appx:///Assets/{file}"));
    var fileInputNodeResult = await audioGraph.CreateFileInputNodeAsync(soundFile);
    if (fileInputNodeResult.Status != AudioFileNodeCreationStatus.Success) return;
    var fileInputNode = fileInputNodeResult.FileInputNode;
    fileInputNode.FileCompleted += FileInputNodeOnFileCompleted;

    fileInputNode.AddOutgoingConnection(outputNode);
}

private async void FileInputNodeOnFileCompleted(AudioFileInputNode sender, object args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        sender.RemoveOutgoingConnection(outputNode);
        sender.FileCompleted -= FileInputNodeOnFileCompleted;
        sender.Dispose();
    });
}

Now we can trigger sounds very easily:

private async void buttonBass_Click(object sender, RoutedEventArgs e)
{
    await PlaySound("bass.wav");
}

private async void buttonBrass_Click(object sender, RoutedEventArgs e)
{
    await PlaySound("brass.wav");
}

This design allows multiple instances of the same sound to be playing at once. If you don’t need that, you might be able to come up with a more efficient model that keeps a single AudioFileInputNode instance for each sound and resets its position back to zero when you need to replay it. But this technique seems to perform just fine in the simple tests I’ve run.
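If you did want to explore that single-instance approach, a minimal sketch might cache one AudioFileInputNode per file and rewind it with Seek instead of recreating it each time. The cachedNodes dictionary and PlayCachedSound method are my own names, not from the post, and I believe seeking back to zero restarts playback while the graph is running, though I haven't tested edge cases:

```csharp
// Sketch: one cached input node per file name, rewound on replay.
// Assumes audioGraph and outputNode were created as shown earlier.
// Trade-off: only one instance of each sound can play at a time.
private readonly Dictionary<string, AudioFileInputNode> cachedNodes =
    new Dictionary<string, AudioFileInputNode>();

private async Task PlayCachedSound(string file)
{
    AudioFileInputNode node;
    if (!cachedNodes.TryGetValue(file, out node))
    {
        var soundFile = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri($"ms-appx:///Assets/{file}"));
        var result = await audioGraph.CreateFileInputNodeAsync(soundFile);
        if (result.Status != AudioFileNodeCreationStatus.Success) return;
        node = result.FileInputNode;
        node.AddOutgoingConnection(outputNode);
        cachedNodes[file] = node;
    }
    node.Seek(TimeSpan.Zero); // rewind to the start and play again
}
```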


The way you access files on disk in UWP applications is through the StorageFile API. This lets you read files, but only from folders you have the rights to read from, which is pretty locked down in UWP. That is of course a good thing, but what if you want to ship a static data file with your application and access it as a StorageFile from inside the application?

It took me a while to work out how you can do this, so I thought I’d share the solution I found.

First of all, add your bundled data file into your project, for example in the Assets folder. Here, I’ve added in an MP3 file:

[image: the MP3 file added to the Assets folder]

Now, make sure that the build action is set to Content. The Copy to Output Directory flag does not need to be set to copy.

[image: file properties showing Build Action set to Content]

Now, when we want to access this as a stored file, we use the StorageFile.GetFileFromApplicationUriAsync method and pass in a specially formatted Uri with the ms-appx:// protocol, like this:

var storageFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/GuitarTest1.mp3"));

And that’s all there is to it. The StorageFile you get is of course read-only, but it can be passed to any API that expects a StorageFile.
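For instance, if you wanted to inspect the bundled file yourself rather than hand it to another API, you can open it as a read-only stream. A quick sketch:

```csharp
// Open the bundled file read-only and look at its contents.
var storageFile = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///Assets/GuitarTest1.mp3"));
using (var stream = await storageFile.OpenAsync(FileAccessMode.Read))
{
    // IRandomAccessStream gives us the size and seekable access to the bytes
    Debug.WriteLine($"Bundled file is {stream.Size} bytes");
}
```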

Hope someone else finds this helpful, and do let me know in the comments if there is a better way to ship bundled content with your UWP applications.


Sometimes when you’re handling a message from a message queue, you realise that you can’t currently process it, but might be able to at some time in the future. What would be nice is to delay or defer processing of the message for a set amount of time.

Unfortunately, with brokered messages in Azure Service Bus there is no simple built-in way to do this, but there are a few workarounds. In this post, we’ll look at four separate techniques: let the lock time out, sleep and abandon, defer the message, and resubmit the message.

Let the Lock Time Out

The simplest option is to do nothing. When you get your BrokeredMessage, don’t call Complete or Abandon. The lock on the message will eventually time out, and once that happens it will become available for processing again. By default, the lock duration for a message is 1 minute, but this can be configured per queue via the QueueDescription.LockDuration property.

The advantage is that this is a very simple way of deferring re-processing of the message for about a minute. The main disadvantage is that the time is not easy to control, since the lock duration is a property of the queue, not of the message being received.

In the following simple example, we create a queue with a lock duration of 30 seconds, send a message, but then never actually complete or abandon it in the handler. This results in the same message being retried with an incrementing DeliveryCount until it is eventually dead-lettered automatically on the 10th attempt.

string connectionString = "..."; // your Service Bus connection string
const string queueName = "TestQueue";

// PART 1 - CREATE THE QUEUE
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// ensure it is empty
if (namespaceManager.QueueExists(queueName))
{
    namespaceManager.DeleteQueue(queueName);
}
var queueDescription = new QueueDescription(queueName);
queueDescription.LockDuration = TimeSpan.FromSeconds(30);
namespaceManager.CreateQueue(queueDescription);

// PART 2 - SEND A MESSAGE
var body = "Hello World";
var message = new BrokeredMessage(body);
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);
client.Send(message);

// PART 3 - RECEIVE MESSAGES
// Configure the callback options.
var options = new OnMessageOptions();
options.AutoComplete = false; // we will call Complete ourselves
options.AutoRenewTimeout = TimeSpan.FromMinutes(1); 

// Callback to handle received messages.
client.OnMessage(m =>
{
    // Process message from queue.
    Console.WriteLine("-----------------------------------");
    Console.WriteLine($"RX: {DateTime.UtcNow.TimeOfDay} - {m.MessageId} - '{m.GetBody<string>()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}");

    // Don't abandon, don't complete - let the lock timeout
    // m.Abandon();

}, options);

Sleep and Abandon

If we want greater control over how long we wait before the message is retried, we can explicitly call Abandon after sleeping for the required duration. Sadly there is no AbandonAfter method on BrokeredMessage, but it’s very easy to wait and then call Abandon. Here we wait for two minutes before abandoning the message:

client.OnMessage(m =>
{
    Console.WriteLine("-----------------------------------");
    Console.WriteLine($"RX: {DateTime.UtcNow.TimeOfDay} - {m.MessageId} - '{m.GetBody<string>()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}");

    // optional - sleep until we want to retry
    Thread.Sleep(TimeSpan.FromMinutes(2));

    Console.WriteLine("Abandoning...");
    m.Abandon();

}, options);

Interestingly, I thought I might need to periodically call RenewLock on the brokered message during the two-minute sleep, but it appears the Azure SDK’s OnMessage function is doing this automatically for us. The downside of this approach is that our handler is now in charge of marking time: if we wanted to hold off for an hour or longer, this would tie up resources in the handling process, and it wouldn’t work at all if the computer running the handler were to fail. So this is not ideal.

Defer the Message

It turns out that BrokeredMessage has a Defer method whose name suggests it does exactly what we want: put this message aside for processing later. But we can’t specify how long to defer it for, and once deferred, the message will not be retrieved again by the OnMessage callback we’ve been using in our demos.

So how do you get a deferred message back? Well, you must remember its sequence number, and then use a special overload of QueueClient.Receive that retrieves a message by sequence number.

This ends up getting a little complicated, as now we need to remember the sequence number somehow. One option is to post another message to yourself, with ScheduledEnqueueTimeUtc set to the appropriate time, that simply contains the sequence number of the deferred message. When that message arrives, you call Receive with the stored sequence number and try to process the original message again.
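Sketched in code, that pointer-message approach might look something like this. The "DeferredSequenceNumber" property name and the CanProcessYet check are my own inventions for illustration:

```csharp
client.OnMessage(m =>
{
    if (m.Properties.ContainsKey("DeferredSequenceNumber"))
    {
        // This is the scheduled pointer message: fetch the deferred
        // original back by its sequence number and retry it
        var sequenceNumber = (long)m.Properties["DeferredSequenceNumber"];
        var deferred = client.Receive(sequenceNumber);
        // ... attempt to process the deferred message again ...
        deferred.Complete();
        m.Complete();
    }
    else if (!CanProcessYet(m)) // hypothetical "can we handle this now?" check
    {
        // Put the message aside, and schedule a pointer to it for later
        var pointer = new BrokeredMessage();
        pointer.Properties["DeferredSequenceNumber"] = m.SequenceNumber;
        pointer.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
        client.Send(pointer);
        m.Defer();
    }
}, options);
```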

This approach does work, but as I said, it seems over-complicated, so let’s look at one final approach.

Resubmit Message

The final approach is simply to Complete the original message and resubmit a clone of it, scheduled to be handled at a set time in the future. The Clone method on BrokeredMessage makes this easy to do. Let’s look at an example:

client.OnMessage(m =>
{

    Console.WriteLine("----------------------------------------------------");
    Console.WriteLine($"RX: {m.MessageId} - '{m.GetBody<string>()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}");

    // Send a clone with a deferred wait of 5 seconds
    var clone = m.Clone();
    clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(5);
    client.Send(clone);

    // Remove original message from queue.
    m.Complete();
}, options);

Here we simply clone the original message, set up the scheduled enqueue time, send the clone and complete the original. Are there any downsides here?

Well, it’s a shame that sending the clone and completing the original are not an atomic operation, so there is a very slim chance of us seeing the original again should the handling process crash at just the wrong moment.

And the other issue is that DeliveryCount on the clone will always be 1, because this is a brand new message. So we could infinitely resubmit and never get round to dead-lettering this message.

Fortunately, that can be fixed by adding our own resubmit count as a property of the message:

client.OnMessage(m =>
{
    int resubmitCount = m.Properties.ContainsKey("ResubmitCount") ? (int)m.Properties["ResubmitCount"] : 0;

    Console.WriteLine("----------------------------------------------------");
    Console.WriteLine($"RX: {m.MessageId} - '{m.GetBody<string>()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}, ResubmitCount: {resubmitCount}");

    if (resubmitCount > 5)
    {
        Console.WriteLine("DEAD-LETTERING");
        m.DeadLetter("Too many retries", $"ResubmitCount is {resubmitCount}");
    }
    else
    {
        // Send a clone with a deferred wait of 5 seconds
        var clone = m.Clone();
        clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(5);
        clone.Properties["ResubmitCount"] = resubmitCount + 1;
        client.Send(clone);

        // Remove message from queue.
        m.Complete();
    }
}, options);

Summary

It is a shame that there isn’t an overload of Abandon that specifies a time to wait before re-attempting processing of the message. But there are several ways you can work around this limitation as we’ve seen in this post. Of course you may know of a better way to tackle this problem. If so, please let me know in the comments.


Here’s a code sample I’ve been meaning to share on my blog for years. NAudio already has a built-in SignalGenerator class which can generate sine waves as well as various other waveforms. But what if you want to implement “portamento” to glide smoothly between frequencies?

One simple way to do this is to make use of a “wavetable”. Basically we store one cycle of a sine wave in a memory buffer, and then to work out what the next sample to be played is, we move forward a certain number of slots in that buffer and read out the value. If we go past the end of the buffer we simply start again.

Call the current position in the waveform the “phase”, and the number of slots in the wavetable we move forward each sample the “phase step”. When the target frequency changes, instead of immediately recalculating a new phase step, we gradually adjust the current phase step each sample until it reaches the target.
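To make the numbers concrete, here’s a tiny worked example of the phase step calculation:

```csharp
// With one sine cycle stored in a 44100-entry table, played back at
// 44.1kHz, a 440Hz tone must advance 440 table slots per output sample
// (440 complete cycles per second at 44100 samples per second).
int tableLength = 44100;
int sampleRate = 44100;
double frequency = 440.0;
double phaseStep = tableLength * (frequency / sampleRate);
Console.WriteLine(phaseStep); // 440
```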

It’s quite a simple technique, and the great thing is that it works for any waveform, so you could quite easily do the same with square or sawtooth waveforms.

Here’s a really basic implementation of this, which allows you to customise the portamento time. I meant for this setting to be in seconds, but I think I’ve got it slightly wrong, as when you set it to 1.0 it seems to take longer than a second to reach the target frequency.

class SineWaveProvider : ISampleProvider
{
    private float[] waveTable;
    private double phase;
    private double currentPhaseStep;
    private double targetPhaseStep;
    private double frequency;
    private double phaseStepDelta;
    private bool seekFreq;

    public SineWaveProvider(int sampleRate = 44100)
    {
        WaveFormat = WaveFormat.CreateIeeeFloatWaveFormat(sampleRate, 1);
        waveTable = new float[sampleRate];
        for (int index = 0; index < sampleRate; ++index)
        {
            waveTable[index] = (float)Math.Sin(2 * Math.PI * (double)index / sampleRate);
            // For a sawtooth instead of a sine: waveTable[index] = (float)index / sampleRate;
        }
        Frequency = 1000f;
        Volume = 0.25f;
        PortamentoTime = 0.2; // thought this was in seconds, but glide seems to take a bit longer
    }

    public double PortamentoTime { get; set; }

    public double Frequency
    {
        get
        {
            return frequency;
        }
        set
        {
            frequency = value;
            seekFreq = true;
        }
    }

    public float Volume { get; set; }

    public WaveFormat WaveFormat { get; private set; }

    public int Read(float[] buffer, int offset, int count)
    {
        if (seekFreq) // process frequency change only once per call to Read
        {
            targetPhaseStep = waveTable.Length * (frequency / WaveFormat.SampleRate);

            phaseStepDelta = (targetPhaseStep - currentPhaseStep) / (WaveFormat.SampleRate * PortamentoTime);
            seekFreq = false;
        }
        var vol = Volume; // process volume change only once per call to Read
        for (int n = 0; n < count; ++n)
        {
            int waveTableIndex = (int)phase % waveTable.Length;
            buffer[n + offset] = waveTable[waveTableIndex] * vol;
            phase += currentPhaseStep;
            if (phase > waveTable.Length)
                phase -= waveTable.Length;
            if (currentPhaseStep != targetPhaseStep)
            {
                currentPhaseStep += phaseStepDelta;
                if (phaseStepDelta > 0.0 && currentPhaseStep > targetPhaseStep)
                    currentPhaseStep = targetPhaseStep;
                else if (phaseStepDelta < 0.0 && currentPhaseStep < targetPhaseStep)
                    currentPhaseStep = targetPhaseStep;
            }
        }
        return count;
    }
}

I’ve packaged this up into a very simple WPF application, available on GitHub, for you to try out. One word of warning: sine waves can be ear-piercing, so start off with the volume set very low before trying this out.

[image: screenshot of the sample WPF application]

Please note, my wavetable code is extremely rudimentary. If you want to go into the science behind doing this properly, check out Nigel Redmon’s excellent wavetable series at Ear Level Engineering, which covers creating multiple wavetables for different frequency ranges and using linear interpolation (I just truncate the phase to the nearest array entry).

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.