
There’s no such thing as a “best practice”. At least not in software development. I’ve read countless articles on the “best practices” for database design, software architecture, deployment, API design, security, etc., and it’s pretty clear that (a) no one can agree on what the best practices actually are, and (b) last year’s “best practice” frequently turns into this year’s “antipattern”.

I can, however, understand the motivation for wanting to define “best practices”. We know that there are a lot of pitfalls in programming. It’s frighteningly easy to shoot yourself in the foot. It makes sense to know, before starting out, how best to avoid those mistakes. We’re also in such a rapidly moving industry that we’re frequently stepping out into uncharted territory. Many of the tools, frameworks and technologies I’m using today I knew nothing about just five years ago. How can I be expected to know the best way to use them? I frequently find myself googling for “technology X best practices”.

So-called “best practices” emerge not by being definitively proved to be the “best” way of doing something, but simply by virtue of being better than another way. We tried approach A and it went horribly wrong after two weeks, so we tried approach B and got further. Now approach B is the “best practice”. But before long we’re in a real mess again, and now we’re declaring that approach C is the “best practice”.

A better name for “best practices” would be “better practices”. They emerge as a way of avoiding a particular pitfall. And because of this, it’s very unhelpful to present a set of “best practices” without also explaining what problem each practice is intended to protect us from.

When we understand what problem a particular best practice is attempting to save us from, we can make an informed decision about whether that “best practice” is relevant in our case. Maybe the problem it protects us from is a performance issue at massive scale. That may not need to concern us on a project that will only ever deal with small amounts of data.

You might declare that a best practice is “create a NuGet package for every shared assembly”. Or “only use immutable classes”. Or “no code must be written without writing a failing test for it first”. These might be excellent pieces of guidance that can greatly improve your software. But blindly followed, without understanding the reasoning behind them, they could actually make your codebase worse.

Most “best practices” are effective in saving you from a particular type of problem. But often they simply trade off one type of problem for another. Consider a monolithic architecture versus a distributed architecture. Both present very different problems and challenges to overcome. You need to decide which problems you can live with, and which you want to avoid at all costs.

In summary, even though I once created a Pluralsight course with “best practices” in the title, I don’t really think “best practices” exist. At best they are a way of helping you avoid some common pitfalls. But don’t blindly apply them all. Understand what they are protecting you from, and you will be able to make an informed decision about whether they apply to your project. And you may even be able to come up with even “better practices” that meet the specific needs and constraints of your own project.


When you play audio with NAudio, you pass an audio stream (which is an implementation of IWaveProvider) to the output device. The output device calls the Read method many times a second, asking for new buffers of audio to play. When the Read method returns 0, that means we’ve reached the end of the stream, and playback will stop.
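
For reference, the IWaveProvider interface in NAudio is very simple: a wave format, and a Read method that fills a buffer with the next block of audio:

public interface IWaveProvider
{
    WaveFormat WaveFormat { get; }
    int Read(byte[] buffer, int offset, int count);
}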

So, for example, the audio file reader classes in NAudio such as WaveFileReader, Mp3FileReader and AudioFileReader all implement IWaveProvider, and their Read method returns the number of bytes asked for until the end is reached, after which it returns 0 and playback will stop. Because these classes also inherit from WaveStream they support repositioning, so if you repositioned back to the start just before reaching the end, you’d be able to keep playback going for longer than the duration of the file.

But some classes in NAudio produce never-ending streams of audio. For example the SignalGenerator class is an ISampleProvider which continuously produces a signal such as a sine wave. If you pass this to a playback device (you can pass either IWaveProvider or ISampleProvider to an output device in NAudio), playback will continue indefinitely because you’ve given it a never-ending stream.

There are also some classes in NAudio whose behaviour is configurable. The MixingSampleProvider and BufferedWaveProvider are like this. If their ReadFully property is set to true, they will always return the number of bytes/samples asked for in the Read method. This is off by default with MixingSampleProvider, meaning that once you’ve reached the end of all the inputs to the mixer, playback will end. But if you turn it on, then it means you’ll continue to play silence even though the mixer has no more inputs. This can be useful if you want to dynamically add more inputs to the mixer later on.
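
For example, here’s a minimal sketch of setting up a mixer for “fire and forget” playback (the wave format and file name are just assumptions for the example; any input you add must match the mixer’s sample rate and channel count):

var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));
mixer.ReadFully = true; // keep producing silence when there are no inputs
var output = new WaveOutEvent();
output.Init(mixer);
output.Play(); // plays silence until we give it something to mix
// later on, whenever we want a sound to play:
mixer.AddMixerInput(new AudioFileReader("ding.wav")); // hypothetical 44.1kHz stereo file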

With BufferedWaveProvider, ReadFully is set to true by default. That’s because it’s designed to help you play audio you receive over the network. If there’s audio in the buffer, it gets played, but if there’s no audio in the buffer (maybe because of poor network connectivity), we don’t want to stop playback, we just want to play silence until we’ve received some audio to play.
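
Here’s a minimal sketch of that pattern (the wave format is an assumption, and actually receiving the audio from the network is left out):

var bufferedWaveProvider = new BufferedWaveProvider(new WaveFormat(16000, 16, 1));
var output = new WaveOutEvent();
output.Init(bufferedWaveProvider);
output.Play(); // ReadFully defaults to true, so this plays silence until audio arrives
// ... then, each time a packet of PCM audio is received from the network:
bufferedWaveProvider.AddSamples(packet, 0, packet.Length); // packet is a hypothetical byte[]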

It’s possible to take a never-ending stream and give it a finite duration. A good example of this is the OffsetSampleProvider, which can “take” a set duration of audio from an input sample provider. There are some extension methods in NAudio to make this easy to use. So, for example, to get 5 seconds of a 500Hz sine wave you can do this:

var sine5Seconds = new SignalGenerator() { Gain = 0.2, Frequency = 500 }.Take(TimeSpan.FromSeconds(5));

If you play this, it will stop after 5 seconds:

using (var wo = new WaveOutEvent())
{
    wo.Init(sine5Seconds);
    wo.Play();
    while (wo.PlaybackState == PlaybackState.Playing)
    {
        Thread.Sleep(500);
    }
}

You can also go the other way, and make a regular wave provider endless, either by extending it with silence or by looping it. I’ve written before on how to implement looping in NAudio.
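
As a rough sketch of the looping approach from that post: wrap the source stream, and whenever it runs out of data, reposition it back to the start:

class LoopStream : WaveStream
{
    private readonly WaveStream source;
    public LoopStream(WaveStream source) { this.source = source; }
    public override WaveFormat WaveFormat => source.WaveFormat;
    public override long Length => source.Length;
    public override long Position
    {
        get { return source.Position; }
        set { source.Position = value; }
    }
    public override int Read(byte[] buffer, int offset, int count)
    {
        int totalBytesRead = 0;
        while (totalBytesRead < count)
        {
            int bytesRead = source.Read(buffer, offset + totalBytesRead, count - totalBytesRead);
            if (bytesRead == 0)
            {
                if (source.Position == 0) break; // the source is empty, so give up
                source.Position = 0; // reached the end, so rewind and keep reading
            }
            totalBytesRead += bytesRead;
        }
        return totalBytesRead;
    }
}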

Hopefully this has helped clarify why playback in NAudio sometimes doesn’t stop when you wanted it to (you accidentally created an endless stream), and how you can keep a single playback session running continuously without needing to keep opening and closing the output device. This lets you easily implement a fire-and-forget audio playback engine, where you play sounds by adding them to a mixer that just produces a never-ending stream of silence when no sounds are currently active. So never-ending streams can be a good thing.

“But let justice roll on like a river, righteousness like a never-ending stream!” Amos 5:24

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.


Azure offers a number of “databases” as a service. There’s SQL Azure, which is a great choice when you want a relational database that you can use with Entity Framework or query with your own custom SQL.

There’s also Azure DocumentDb, which is Microsoft’s “NoSQL” offering, giving you the benefits of a schemaless document database approach while retaining powerful querying capabilities. If you’re familiar with NoSQL databases like MongoDB, then it’s an ideal choice.

And then there’s Azure Table Storage, which Microsoft describes as a “NoSQL key/attribute store”. This means that it’s got a schemaless design, and each table has rows made up of key-value pairs. Because it’s schemaless, each row doesn’t have to contain the same keys.

But compared to DocumentDb, Table Storage is very rudimentary. There are only two indexed columns on each table: PartitionKey and RowKey, which together uniquely identify each row. So it’s not great if you need to perform complex queries. In DocumentDb, each “document” is a JSON object that can have an arbitrarily deep structure, but in Table Storage you’d need to manage serialization to JSON (or XML) yourself if you wanted to store complex objects or arrays inside columns.
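
To give a flavour of the programming model, here’s a minimal sketch using the WindowsAzure.Storage SDK (namespaces Microsoft.WindowsAzure.Storage and Microsoft.WindowsAzure.Storage.Table; the entity type, table name and connection string are just assumptions for the example):

public class LogEntry : TableEntity
{
    public LogEntry() { } // parameterless constructor required by the SDK
    public LogEntry(string appName, string id)
    {
        PartitionKey = appName; // e.g. partition by application name
        RowKey = id;            // must be unique within the partition
    }
    public string Message { get; set; }
}

public static async Task SaveLogAsync(string connectionString)
{
    var account = CloudStorageAccount.Parse(connectionString);
    var table = account.CreateCloudTableClient().GetTableReference("logs");
    await table.CreateIfNotExistsAsync();
    var entry = new LogEntry("MyApp", Guid.NewGuid().ToString()) { Message = "Hello" };
    await table.ExecuteAsync(TableOperation.Insert(entry));
}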

So, compared to the more powerful capabilities of DocumentDb, Table Storage might seem a bit pointless. Does it have any advantages?

Well, the main advantage is cost. Even for the cheapest DocumentDb database, you are looking at $23 a month for a single “collection”, plus a small amount more for the total data stored. This may be reasonable for production apps, but for small experiments or prototypes it can add up quite quickly, and will probably force you to store the documents for multiple apps in the same DocumentDb collection, which could get messy.

With Table Storage, you don’t pay a monthly fee, but instead pay very small amounts for the amount of data stored and the number of transactions you use. This means that for prototypes and proof of concepts it works out insanely cheap. It’s currently the only database choice in Azure that follows the “serverless” pricing model of paying only for what you actually use.

This means that Table Storage can be a good choice as a “poor man’s” NoSql database. I’ve been using it on a number of projects where I just need a couple of simple tables, and where my querying requirements are not complex. Another advantage is that it integrates very nicely with Azure Functions, making it a great choice if you want to quickly build out a “serverless” prototype.
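
For example, here’s a sketch of how simple a table output binding can be in a C# Azure Function (the function, queue and table names are hypothetical, and it reuses the illustrative LogEntry class from the sketch above):

[FunctionName("SaveLog")]
public static void Run(
    [QueueTrigger("incoming-logs")] string message,
    [Table("logs")] ICollector<LogEntry> logTable)
{
    logTable.Add(new LogEntry("MyApp", Guid.NewGuid().ToString()) { Message = message });
}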

Of course, Table Storage is also designed to be able to store huge volumes of data, so another use case it’s great for is storing logs and diagnostic information. In fact, several Azure services (like Azure Functions, for example) use Table Storage as the backing store for their own logs.

So although Table Storage may not be the most exciting or flexible database option, if you know what its strengths are, it can be a useful tool to have at your disposal. There’s a helpful tutorial available if you want to get started with Table Storage using C#.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.


It seems every time I attempt to implement something simple in CSS I always get stuck. This time I wanted to create a div with rounded corners and a heading with a different coloured background. Something like this:

[Image: a panel with rounded corners and a green heading bar across the top]

I attempted to build this by creating an outer div with rounded corners, and then an inner heading div with a green background. The trouble is, the inner div ends up drawing outside its containing div:

[Image: the heading’s square corners poking outside the rounded outer div]

What’s the solution? Well, it took me a long time to track down. Initially I resorted to a hack: giving the heading div its own rounded top corners, with a slightly smaller corner radius.

But the trick I was missing was setting overflow: hidden; on the outer div. This prevents the heading div from rendering anything outside its parent.
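
Here’s a minimal sketch of the fix (the class names and colours are just placeholders):

.panel {
  border: 1px solid #5cb85c;
  border-radius: 10px;
  overflow: hidden; /* clip the heading's square corners to the rounded outline */
}
.panel .heading {
  background-color: #5cb85c;
  color: white;
  margin: 0;
  padding: 8px;
}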

Here’s a JSFiddle with the way I solved it. Hope it proves helpful to someone. And let me know in the comments if there was a better way I should have tackled this.

In serverless architectures, it’s common to build your website as a SPA, meaning that you only need static website hosting. A common choice for this is Amazon S3, but if you have an Azure subscription, you may be wondering if you can use Azure Blob Storage instead.

Well, if you create a Blob Storage account, make a container, and set its blobs to be publicly accessible, then you can navigate to your site by visiting a URL like: https://mystorageaccount.blob.core.windows.net/mycontainer/index.html

And this works just fine, but there are a few key limitations you need to be aware of.

First of all, although you can point a custom domain at an Azure Blob Storage account, you can’t upload your own SSL certificate. This is the top requested feature for Azure Blob Storage, and Microsoft’s response indicates they do intend to resolve this as a high priority.

Second of all, we’d rather not need to specify index.html explicitly, but Azure Blob storage doesn’t have default document support. This is another highly requested feature and so hopefully Microsoft will also address this soon, along with removing the need to specify the container name in the URL.

And there are a few other features that are desirable for a static website host, such as the ability to serve a custom 404 page.

So in its current state, Azure Blob Storage isn’t ideal for hosting a static website.

Fortunately, there is now a workaround available in the shape of Azure Functions proxies. What you can do is create a Function App and give it a proxy that passes on requests to Blob Storage.

Let’s see how to do that. In our function app, because the proxies feature is still in preview, we need to explicitly enable it:

[Image: enabling the proxies preview feature in the Function App settings]

And I’ll create two proxies. The first will redirect the base URL to the index.html in my static hosting blob container:

[Image: proxy configuration mapping the root route to index.html in the blob container]

And the second will forward requests for any other path directly to the blob container. That’s achieved with the special {*restOfPath} syntax, which matches any path.

[Image: proxy configuration using the {*restOfPath} wildcard route to forward to the blob container]
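
If you prefer configuration over the portal UI, proxies are defined in a proxies.json file in the function app. Here’s a sketch of what the two proxies above might look like (using the placeholder storage account and container names from earlier; the proxy names are arbitrary):

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "root": {
      "matchCondition": { "route": "/" },
      "backendUri": "https://mystorageaccount.blob.core.windows.net/mycontainer/index.html"
    },
    "staticFiles": {
      "matchCondition": { "route": "/{*restOfPath}" },
      "backendUri": "https://mystorageaccount.blob.core.windows.net/mycontainer/{restOfPath}"
    }
  }
}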

With that in place, I can now access my static website directly by visiting the URL of the function app.

What’s more, Azure Function apps support custom domains, so I can go in and configure some domain name I own to point at the function app.

[Image: configuring a custom domain for the function app]

Azure Function Apps also allow you to upload SSL certificates, so as well as solving the default document issue and removing the need for the container name in the path, this is also a way to use HTTPS on custom domains. So with Azure Functions proxies you can work around most of the major limitations of static hosting on Azure Blob Storage.

Obviously you will pay for a function execution for each file that’s requested, but remember that you get 1 million free function executions a month, so for most websites you may not end up paying anything at all.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.