NAudio and the PlaybackStopped Problem
The IWavePlayer interface in NAudio features an event called PlaybackStopped. The original idea behind this event was simple: you start playing a file (e.g. pass a WaveFileReader into WaveOut.Init and then call Play), and when it reaches its end you are informed, allowing you to dispose the input stream and playback device if you want to, or perhaps start playing something else.
The reality was not quite so simple, and I have ended up trying to discourage users of the NAudio library from making use of the PlaybackStopped event. In this post I will explain some of the reasons behind that, and how I hope to restore the usefulness of PlaybackStopped in future versions of NAudio.
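To make the original intent concrete, here is a rough sketch of the usage pattern described above (the file name is a placeholder, and disposing inside the handler is exactly the kind of thing discussed later in this post):

using NAudio.Wave;

var reader = new WaveFileReader("example.wav"); // placeholder path
var waveOut = new WaveOut();
waveOut.Init(reader);
waveOut.PlaybackStopped += (sender, args) =>
{
    // tidy up now the file has finished, or start playing something else
    waveOut.Dispose();
    reader.Dispose();
};
waveOut.Play();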
The Never-Ending Stream Problem
Each implementor of IWavePlayer determines whether it should automatically stop playback by checking whether the Read method on the source IWaveProvider has returned 0 (in fact, Read should always return the count parameter unless the end of the stream has been reached).
However, there are some WaveStreams / IWaveProviders in NAudio that never stop returning audio. This isn't a bug – it is quite normal behaviour for some scenarios. Perhaps BufferedWaveProvider is the best example – it will return a zero-filled buffer if there is not enough queued data for playback. This is useful for streaming from the network, where data might not be arriving fast enough but you don't want playback to stop. Similarly, WaveChannel32 has a property called PadWithZeroes allowing you to turn this behaviour on or off, which can be useful for scenarios where you are feeding into a mixer.
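As a rough sketch of that streaming scenario (the wave format and variable names here are just for illustration), BufferedWaveProvider keeps playback alive by zero-filling whenever nothing is queued:

using NAudio.Wave;

// set up a buffered provider and start playing it straight away;
// because it pads with silence when empty, Read never returns 0
// and PlaybackStopped will not fire automatically
var bufferedProvider = new BufferedWaveProvider(new WaveFormat(44100, 16, 2));
var waveOut = new WaveOut();
waveOut.Init(bufferedProvider);
waveOut.Play();

// elsewhere, as audio arrives from the network (networkBuffer and
// bytesReceived are placeholders for your own receiving code):
// bufferedProvider.AddSamples(networkBuffer, 0, bytesReceived);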
The Threading Problem
This is one of the trickiest problems surrounding the PlaybackStopped event. If you are using WaveOut, the chances are you are using windowed callbacks, which means that the PlaybackStopped event is guaranteed to fire on the GUI thread. Since only one thread makes calls into the waveOut APIs, it is also completely safe for the event handler to make other calls into waveOut, such as calling Dispose or starting a new playback.
However, with DirectSound and WASAPI, the PlaybackStopped event is fired from a background thread that we create. Even more problematic are WaveOut function callbacks and ASIO, where the event is raised from a thread deep within the OS or the soundcard device driver. If you make any calls back into the driver in the handler for the PlaybackStopped event, you run the risk of deadlocks or errors. You also don't want to give the user a chance to do anything time-consuming in that context.
This problem almost caused me to remove PlaybackStopped from the IWavePlayer interface altogether. But I have decided to see if I can give it one last lease of life by using the .NET SynchronizationContext class. SynchronizationContext makes it easy, in both WPF and WinForms, to invoke the PlaybackStopped event on the GUI thread. This greatly reduces the chance of something you do in the handler causing a problem.
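For illustration, here is a minimal sketch of the approach (ExamplePlayer and RaisePlaybackStopped are made-up names, not NAudio's internals): capture the SynchronizationContext of the thread that constructs the player, then post the event back through it.

using System;
using System.Threading;

public class ExamplePlayer
{
    public event EventHandler PlaybackStopped;
    private readonly SynchronizationContext syncContext;

    public ExamplePlayer()
    {
        // on a WinForms or WPF GUI thread this captures the GUI context;
        // on a plain worker thread it will be null
        syncContext = SynchronizationContext.Current;
    }

    // called from whatever thread the playback loop or driver callback uses
    protected void RaisePlaybackStopped()
    {
        var handler = PlaybackStopped;
        if (handler == null) return;
        if (syncContext == null)
        {
            handler(this, EventArgs.Empty);
        }
        else
        {
            // marshal the event onto the thread that created the player
            syncContext.Post(state => handler(this, EventArgs.Empty), null);
        }
    }
}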
The Has it Really Finished Problem
The final problem with PlaybackStopped is another tricky one. How do you know when playback has stopped? You know when you have reached the end of the source file, since the Read method returns 0. And you know when you have given the last block of audio to the soundcard. But the audio may not have finished playing yet, particularly if you are working at high latency. The old WaveOut implementation in particular was guilty of raising PlaybackStopped too early.
One workaround, requiring no changes to the existing IWavePlayer implementations, would be to create a LeadOutWaveProvider class implementing IWaveProvider. This would do nothing more than append a specified amount of silence onto the end of your source stream, ensuring that it plays to completion. Here's a quick sketch of how its Read method could be implemented:
private int silencePos = 0;
private int silenceBytes = 8000;

public int Read(byte[] buffer, int offset, int count)
{
    int bytesRead = source.Read(buffer, offset, count);
    if (bytesRead < count)
    {
        // pad out the rest of the requested buffer with silence,
        // up to a total of silenceBytes of lead-out
        int silenceToAdd = Math.Min(silenceBytes - silencePos, count - bytesRead);
        Array.Clear(buffer, offset + bytesRead, silenceToAdd);
        bytesRead += silenceToAdd;
        silencePos += silenceToAdd;
    }
    return bytesRead;
}
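Assuming the snippet above were wrapped up in a LeadOutWaveProvider class with a constructor taking the source provider (neither of which is shown here), usage might look like this:

var source = new WaveFileReader("example.wav"); // placeholder file
var withLeadOut = new LeadOutWaveProvider(source); // hypothetical constructor
waveOut.Init(withLeadOut);
waveOut.Play();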
Goals for NAudio 1.5
Fixing all the problems with PlaybackStopped may not be fully possible in the next version, but my goals are as follows:
- Every implementor of IWavePlayer should raise PlaybackStopped (this is done)
- PlaybackStopped should be automatically invoked on the GUI thread if at all possible, using SynchronizationContext (this is done)
- It should be safe to call Dispose on the IWavePlayer in the PlaybackStopped handler (currently testing and bugfixing)
- PlaybackStopped should not be raised until all audio from the source stream has finished playing (done for WaveOut – we now raise it when there are no queued buffers; I will need to code review the other classes to decide whether this is the case for them)
- Keep the number of never-ending streams/providers in NAudio to a minimum, and try to make it very clear which ones have this behaviour.
Comments
crokusek:
Hi, I really like this library. I ended up using the PlaybackStopped event to restart after dropouts. I'd appreciate a comment on a better way to handle dropouts in general. I was getting them with WASAPI playback in event mode from 50ms - 200ms. In non-event mode I get fewer.

void player_PlaybackStopped(object sender, StoppedEventArgs e)
{
    IWavePlayer player = sender as IWavePlayer;
    if (_waveStream.Position != 0 &&
        _waveStream.Position < _waveStream.Length - 1)
    {
        _dropOutsSinceLastPlay++;
        Dbg.WriteLine("Restarting after dropout #" + _dropOutsSinceLastPlay + "...");
        player.Play();
        return;
    }
    player.Dispose();
}
UPDATE: If I use a latency of 200ms and add similar LeadOutSamples of 200ms, it actually works perfectly :)
Frank Jensen:
Hi there. I like it too :) Really nice work. I started out with winlib, but that gave me a lot of challenges.
I am having similar problems. I play really short audio snippets, sometimes less than 0.5 seconds in length. Setting the latency to e.g. 200 cuts off the message. One thing is that PlaybackStopped is too early by around 200ms; it also seems to cut 200ms off the sound. If I increase the initial latency to 1000, the message disappears. I have tried with and without Dispose in the PlaybackStopped event, and all the tricks I can think of, with no luck.
Setting the latency really low solves the problem. But that is a balance where I risk stuttering sound if the computer is a little slow.
Then I could add silence. I would like to avoid this, since I actually need to know when it has finished playing. And what if this problem is solved in later versions? Then I would need to rewrite my software to have exact knowledge of when it's finished.
One sound could be "RUN", and then PlaybackStopped starts a timer.
Mark Heath:
Yes, I probably should revisit the calculation of when to raise PlaybackStopped to avoid this issue. Out of interest, what output device are you using? Is it WaveOutEvent?
Frank Jensen:
I am using DirectSoundOut. Format is MP3 or WAV. Would WaveOutEvent perform differently?
Frank Jensen:
Currently I am facing a different problem. Maybe I should put this in a different thread... Anyway, I am using WaveInEvent to capture some sound and redirect it to a WaveOutEvent. I do this only to be able to control balance and volume. I start recording once, and it keeps running. Then I play different things on the WaveOutEvent, using Play and Stop. But if I stop a sound and then play a different sound, I get a snippet of the first sound, like it's stored in some buffer. I tried ClearBuffer, with no luck. Maybe the DataAvailable event is not called if it's not full, and starting the next sound fills it up, including the old snippet. But how do I clear that? Would appreciate it if you could help with a workaround :)