NAudio Output Devices
NAudio supplies wrappers for four different audio output APIs. In addition, some of them support several different modes of operation. This can be confusing for those new to NAudio and the various Windows audio APIs, so in this post I will explain what the four main options are and when you should use them.
IWavePlayer
We’ll start off by discussing the common interface for all output devices. In NAudio, each output device implements IWavePlayer, which has an Init method into which you pass the wave provider that will be supplying the audio data. Then you can call Play, Pause and Stop, which are pretty self-explanatory, except that you need to know that Play only begins playback. You should only call Init once on a given instance of an IWavePlayer. If you need to play something else, you should Dispose of your output device and create a new one.
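Putting that lifecycle together, a minimal sketch might look like this (the file name is a placeholder, and WaveOut stands in for any IWavePlayer implementation):

```csharp
using System;
using NAudio.Wave;

// Sketch of the IWavePlayer lifecycle: Init once, Play, then Dispose
// when you want to play something else. "test.wav" is a placeholder path.
public class PlaybackExample
{
    public static void Main()
    {
        using (IWavePlayer player = new WaveOut())
        using (var reader = new WaveFileReader("test.wav"))
        {
            player.PlaybackStopped += (s, e) => Console.WriteLine("playback stopped");
            player.Init(reader);   // call Init exactly once per device instance
            player.Play();         // Play only *begins* playback...
            Console.ReadLine();    // ...so keep the device alive while it plays
            player.Stop();
        }
    }
}
```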
You will notice there is no capability to get or set the playback position. That is because the output devices have no concept of position – they just read audio from the WaveProvider supplied until it reaches an end, at which point the PlaybackStopped event is fired (see this post for more details). Alternatively, you can ignore PlaybackStopped and just call Stop whenever you decide that playback is finished.
You may notice a Volume property on the interface that is marked as [Obsolete]. Don’t use it. There are better ways of setting the volume in NAudio. For example, look at the WaveChannel32 class or, in NAudio 1.5 onwards, the SampleChannel class.
Finally, there is a PlaybackState property that can report Stopped, Playing or Paused. Be careful with Stopped though: if you call the Stop method, the PlaybackState will immediately go to Stopped, but it may be a few milliseconds before any background playback threads have actually exited.
WaveOut
WaveOut should be thought of as the default audio output device in NAudio. If you don’t know what to use, choose WaveOut. It essentially wraps the Windows waveOut APIs, and is the most universally supported of all the APIs.
The WaveOut object allows you to configure several things before you get round to calling Init. Most common would be to change the DeviceNumber property. -1 indicates the default output device, while 0 is the first output device (usually the same device in my experience). To find out how many WaveOut output devices are available, query the static WaveOut.DeviceCount property.
You can also set DesiredLatency, which is measured in milliseconds. This figure actually sets the total duration of all the buffers, so you could argue that the real latency is shorter. In a future version of NAudio, I might reduce confusion by replacing this with a BufferDuration property. By default the DesiredLatency is 300ms, which should ensure a smooth playback experience on most computers. You can also set NumberOfBuffers to something other than its default of 2, although 3 is the only other value that is really worth using.
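As a sketch, device enumeration and buffer configuration might look like this (WaveOut.GetCapabilities is used here to read each device's name; the values chosen are just examples):

```csharp
using System;
using NAudio.Wave;

// Enumerate the waveOut devices, then configure a WaveOut before Init.
public class WaveOutConfiguration
{
    public static WaveOut CreateConfiguredDevice()
    {
        // list devices by number and name; -1 would be the default device
        for (int n = 0; n < WaveOut.DeviceCount; n++)
        {
            Console.WriteLine("{0}: {1}", n, WaveOut.GetCapabilities(n).ProductName);
        }

        var waveOut = new WaveOut();
        waveOut.DeviceNumber = 0;     // first output device
        waveOut.DesiredLatency = 300; // total duration of all buffers, in ms
        waveOut.NumberOfBuffers = 3;  // 2 (the default) or 3 are the sensible values
        return waveOut;               // call Init and Play after this
    }
}
```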
One complication with WaveOut is that there are several different “callback models” available, and understanding which one to use is important. Callbacks are used whenever WaveOut has finished playing one of its buffers and wants more data. In the callback we read from the source wave provider, fill a new buffer with audio, and queue it up for playback, assuming there is still more data to play. As with all output audio driver models, it is imperative that this happens as quickly as possible, or your output sound will stutter.
New Window
This is the default and recommended approach if you are creating a WaveOut object from the GUI thread of a Windows Forms or WPF application. Whenever WaveOut wants more data, it posts a message that is handled by the Windows message pump of an invisible new window. You get this callback model by default when you call the empty WaveOut constructor. However, it will not work on a background thread, since there is no message pump.
One of the big benefits of using this model (or the Existing Window model) is that everything happens on the same thread. This protects you from threading race conditions where a reposition happens at the same time as a read.
note: The reason for using a new window instead of an existing window is to eliminate bugs that can happen if you start one playback before a previous one has finished. It can result in WaveOut picking up messages it shouldn’t.
Existing Window
Existing Window is essentially the same callback mechanism as New Window, but you have to pass in the handle of an existing window. This is passed in as an IntPtr to make it compatible with WPF as well as WinForms. The only thing to be careful of with this model is using multiple concurrent instances of WaveOut, as they will intercept each other’s messages (I may fix this in a future version of NAudio).
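A sketch of selecting the callback model explicitly, assuming the WaveCallbackInfo factory methods from the NAudio 1.5-era API:

```csharp
using System;
using NAudio.Wave;

// Choosing a WaveOut callback model via the constructor overload that
// takes a WaveCallbackInfo (assumed from the NAudio 1.5-era API).
public class CallbackModels
{
    public static void CreateDevices(IntPtr existingWindowHandle)
    {
        // default: new (invisible) window callbacks, GUI thread only
        var newWindow = new WaveOut(WaveCallbackInfo.NewWindow());

        // existing window: pass the handle of a window you own
        var existing = new WaveOut(WaveCallbackInfo.ExistingWindow(existingWindowHandle));

        // function callbacks: work off the GUI thread, but note the
        // deadlock caveats discussed in the Function Callback section
        var function = new WaveOut(WaveCallbackInfo.FunctionCallback());
    }
}
```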
note: with both New and Existing Window callback methods, audio playback will deteriorate if the Windows message pump on the GUI thread has too much other work to do.
Function Callback
Function callback was the first callback method I attempted to implement for NAudio, and it has proved the most problematic of all the callback methods. Essentially you give it a function to call back, which seems very convenient, but these callbacks come from a thread within the operating system.
To complicate matters, some soundcards really don’t like two threads calling waveOut functions at the same time (particularly one calling waveOutWrite while another calls waveOutReset). In theory this would be easily fixed with locks around all waveOut calls, but some audio drivers call the callbacks from another thread while you are calling waveOutReset, resulting in deadlocks.
I am pretty sure that in NAudio 1.5 all the potential deadlocks have been chased out. Until recently, function callbacks have been the only way to play audio through WaveOut on a non-GUI thread in NAudio. But with NAudio 1.5 there is now also the event callback, which, once it has had an appropriate amount of testing, will become the recommended alternative to windowed callbacks for those wanting to play audio on background threads.
Event Callback
New to NAudio 1.5, this is currently implemented in the WaveOutEvent class, although I may try to think of a way of making event callbacks an option within the WaveOut class. This is implemented similarly to WASAPI and DirectSound: a background thread simply sits around filling up buffers when they become empty. To help it respond at the right time, an event handle is set to signal the background thread that a buffer has been returned by the soundcard and is in need of filling again.
note: WaveOut also supports a No Callback mode, which currently isn’t implemented in NAudio, but it would be very easy to change WaveOutEvent to support it. Essentially, instead of waiting on an event handle, you sleep for a bit (half the buffer duration would be sensible) and then wake up to see if any of the buffers need filling.
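A minimal background-thread playback sketch with WaveOutEvent (the file name is a placeholder):

```csharp
using System;
using System.Threading;
using NAudio.Wave;

// WaveOutEvent needs no window handle, so it can be used from a
// non-GUI thread such as this one. "test.wav" is a placeholder path.
public class BackgroundPlayback
{
    public static void Main()
    {
        var thread = new Thread(() =>
        {
            using (var player = new WaveOutEvent())
            using (var reader = new WaveFileReader("test.wav"))
            {
                player.Init(reader);
                player.Play();
                // keep the thread alive until playback finishes
                while (player.PlaybackState == PlaybackState.Playing)
                {
                    Thread.Sleep(100);
                }
            }
        });
        thread.Start();
        thread.Join();
    }
}
```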
note 2: there is one final WaveOut callback model I haven’t mentioned, and that is thread callback. I don’t think it offers anything the other models don’t have, but you give it a thread ID and it posts messages to that thread. The thread would need to call Application.Run to instantiate a message pump, and so, if implemented, this method could also enable the window callback methods to work on a background thread.
DirectSoundOut
DirectSound is a good alternative if for some reason you don’t want to use WaveOut, since it is simple and widely supported.
To select a specific device with DirectSound, you can query the static DirectSoundOut.Devices property, which will let you get at the GUID for each device, which you can pass into the DirectSoundOut constructor. Like WaveOut, you can also adjust the latency (the overall buffer size).
DirectSoundOut uses a background thread that waits to fill buffers (the same as WaveOut with event callbacks). This is a reliable and uncomplicated mechanism, but as with any callback mechanism that uses a background thread, you must take responsibility yourself for ensuring that repositions do not happen at the same time as reads (although some of NAudio’s built-in WaveStreams can protect you from getting this wrong).
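A device-selection sketch for DirectSound (the constructor overload taking a GUID and a latency figure is assumed here, and the latency value is just an example):

```csharp
using System;
using System.Linq;
using NAudio.Wave;

// Enumerate the DirectSound devices, then open one by its GUID.
public class DirectSoundExample
{
    public static void Main()
    {
        foreach (var device in DirectSoundOut.Devices)
        {
            Console.WriteLine("{0} {1}", device.Guid, device.Description);
        }

        var firstDevice = DirectSoundOut.Devices.First();
        using (var player = new DirectSoundOut(firstDevice.Guid, 40)) // 40ms latency
        using (var reader = new WaveFileReader("test.wav"))           // placeholder path
        {
            player.Init(reader);
            player.Play();
        }
    }
}
```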
WasapiOut
WASAPI is the latest and greatest Windows audio API, introduced with Windows Vista. But just because it is newer doesn’t mean you should use it. In fact, it can be a real pain to use, since it is much more fussy about the format of the WaveProvider passed to its Init function and will not perform resampling for you.
To select a specific output device, you need to make use of the MMDeviceEnumerator class, which can report the available audio “endpoints” in the system.
WASAPI out offers you a couple of configuration options. The main one is whether you open in shared or exclusive mode. In exclusive mode, your application requests exclusive access to the soundcard. This is only recommended if you need to work at very low latencies. You can also choose whether event callbacks are used. I recommend you do so, since it enables the background thread to get on with filling a new buffer as soon as one is needed.
Why would you use WASAPI? I would only recommend it if you want to work at low latencies or are wanting exclusive use of the soundcard. Remember that WASAPI is not supported on Windows XP. However, in situations where WASAPI would be a good choice, ASIO out is often a better one…
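Putting endpoint selection and the shared-mode/event-callback options together, a sketch might look like this (the 200ms latency figure and the file name are just examples):

```csharp
using System;
using NAudio.CoreAudioApi;
using NAudio.Wave;

// Pick a render endpoint with MMDeviceEnumerator and open it in shared
// mode with event callbacks enabled.
public class WasapiExample
{
    public static void Main()
    {
        var enumerator = new MMDeviceEnumerator();
        foreach (var endpoint in enumerator.EnumerateAudioEndPoints(
            DataFlow.Render, DeviceState.Active))
        {
            Console.WriteLine(endpoint.FriendlyName);
        }

        var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
        using (var player = new WasapiOut(device, AudioClientShareMode.Shared, true, 200))
        using (var reader = new WaveFileReader("test.wav")) // placeholder path
        {
            // remember: WASAPI will not resample for you, so the reader's
            // format must already be acceptable to the device
            player.Init(reader);
            player.Play();
        }
    }
}
```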
AsioOut
ASIO is the de-facto standard for audio interface drivers for recording studios. All professional audio interfaces for Windows will come with ASIO drivers that are designed to operate at the lowest latencies possible. ASIO is probably a better choice than WASAPI for low latency applications since it is more widely supported (you can use ASIO on XP for example).
ASIO output devices are selected by name. Use AsioOut.GetDriverNames() to see what devices are available on your system. Note that this will return all installed ASIO drivers; it does not necessarily mean that the soundcard is currently connected (in the case of an external audio interface), so Init can fail for that reason.
ASIO drivers support their own customised settings GUI. You can access this by calling ShowControlPanel(). Latencies are usually set within the control panel and are typically specified in samples. Remember that if you try to work at a really low latency, your input WaveProvider’s Read function needs to be really fast.
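A driver-selection sketch for ASIO (picking the first installed driver is just for illustration):

```csharp
using System;
using NAudio.Wave;

// Select an ASIO driver by name and show its settings GUI.
public class AsioExample
{
    public static void Main()
    {
        foreach (var name in AsioOut.GetDriverNames())
        {
            Console.WriteLine(name);
        }

        var driverName = AsioOut.GetDriverNames()[0]; // first driver, for illustration
        using (var player = new AsioOut(driverName))
        using (var reader = new WaveFileReader("test.wav")) // placeholder path
        {
            player.ShowControlPanel(); // the driver's own latency/settings dialog
            player.Init(reader);       // can fail if the interface is unplugged
            player.Play();
        }
    }
}
```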
ASIO drivers can process data in a whole host of native WAV formats (e.g. big endian vs little endian, 16, 24, 32 bit ints, IEEE floats etc), not all of which are currently supported by NAudio. If ASIO Out doesn’t work with your soundcard, post a message on the NAudio discussion groups, as it is fairly easy to add support for another format.
Comments
Anonymous: Hi mark! Kudos on the great post!! I am trying to use NAudio to play sound in both speakers and headsets (when plugged in) at the same time, by giving the same wave file to different output devices. Do you think that will work? Appreciate your help!

Mark H: yes, but create a separate file reader for each device, otherwise the sound will end up being sliced up and played partly out of each device

JasonB: I've used NAudio on several projects and it's awesome. I'm currently struggling with a requirement that I use the default WPF MediaElement control to play a video, but allow the user to choose the playback device from any active device. Most of the examples I see assume NAudio will be handling both device selection and playback. It seems from most of my research that MediaElement only wants to play on the default device, and that setting the default device programmatically is kept intentionally difficult (understandably so). Is it possible to use NAudio to select the device and have the MediaElement do the playback?
Anonymous: Trying to play sound on both speakers and headset. I think I am a step away. Here is the sample code:

public void detectDevices()
{
    int waveOutDevices = WaveOut.DeviceCount;
    switch (waveOutDevices)
    {
        case 1:
            var wave1 = new WaveOut();
            wave1.DeviceNumber = 0;
            playSound(0);
            break;
        case 2:
            var wave2 = new WaveOut();
            wave2.DeviceNumber = 0;
            playSound(0);
            var wave3 = new WaveOut();
            wave3.DeviceNumber = 1;
            playSound(1);
            break;
    }
}

public void playSound(int deviceNumber)
{
    disposeWave(); // stop previous sounds before starting
    waveReader = new NAudio.Wave.WaveFileReader(fileName);
    var waveOut = new NAudio.Wave.WaveOut();
    waveOut.DeviceNumber = deviceNumber;
    output = waveOut;
    output.Init(waveReader);
    output.Play();
}

public void disposeWave()
{
    if (output != null)
    {
        if (output.PlaybackState == NAudio.Wave.PlaybackState.Playing)
        {
            output.Stop();
            output.Dispose();
            output = null;
        }
    }
    if (wave != null)
    {
        wave.Dispose();
        wave = null;
    }
}

case eSelector.startIncomingRinging:
    fileName = ("Ring.wav");
    detectDevices();
Ketan Joshi Muntazeer: What is the use of the Read(float[] buffer, int offset, int count) method of the OffsetSampleProvider class? What values of offset and count should I use to read the entire data?
Mark Heath: The usual approach is to repeatedly call Read until it returns 0, at which point you've got all the data. The count is how many samples you want copied into the buffer. Usually offset is 0, meaning start copying into the first element in buffer, but if for some reason you want to write part-way through, you can use offset.
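That read-until-zero loop might be sketched like this (the source could be an OffsetSampleProvider or any other ISampleProvider; the buffer size is arbitrary):

```csharp
using NAudio.Wave;

// Drain an ISampleProvider by calling Read until it returns 0.
public class ReadAllSamples
{
    public static int CountSamples(ISampleProvider source)
    {
        // one second's worth of samples per Read call, as an arbitrary choice
        var buffer = new float[source.WaveFormat.SampleRate * source.WaveFormat.Channels];
        int total = 0;
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            total += read; // process buffer[0..read] here, e.g. for a waveform
        }
        return total;
    }
}
```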
Ketan Joshi Muntazeer: I want to read 10 seconds of data to generate a waveform of only the selected 10 seconds. I am using sample.SkipOver = TimeSpan.FromSeconds(0) and sample.Take = TimeSpan.FromSeconds(10). To play this sample I am using a WaveOut object. If I read the entire data using the Read() method and generate a waveform from it, will it be exactly what I want?
yes, you'll get just that 10 seconds of audio
Mark HeathIm unable to change the device number (ive done -0.2 and -1 and 0 and nothgin has changed)
Filip D. Williams: If I read well here, one can play the same sound file in the speakers and the headphones, so my guess is I could play mp3-1 through the soundcard and another mp3-2 through my headphones? This way I can pre-listen/preview the mp3-2 file as DJs do?

Is Line-Out also a device which can be used to play an MP3?