
I’ve released an update to NAudio as it’s been a while since the last one. This one is mostly bug fixes and a few very minor features (there’s a quick usage sketch of them after the list):

  • MidiFile can take an input stream
  • WaveOut device -1 can be selected, allowing hot swapping between devices
  • AsioOut exposes FramesPerBuffer
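
Here’s a quick sketch of the three additions in use. This is a hedged example based on the release notes rather than official documentation, so the exact overloads may differ slightly from what I show here:

using System.IO;
using NAudio.Midi;
using NAudio.Wave;

// MidiFile can now be constructed from a Stream as well as a file path
using (var stream = File.OpenRead("example.mid"))
{
    var midiFile = new MidiFile(stream, strictChecking: false);
}

// device -1 is the default "wave mapper" device, which follows hot swaps
var waveOut = new WaveOut { DeviceNumber = -1 };

// AsioOut now exposes the driver's preferred buffer size in sample frames
using (var asioOut = new AsioOut())
{
    int framesPerBuffer = asioOut.FramesPerBuffer;
}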
Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals, and Audio Programming with NAudio.


One of the most daunting things for developers new to Git is understanding how they can undo commit mistakes. And the git reset command is a powerful tool to help with that. But it can seem confusing. What does the magical git reset --hard HEAD~1 invocation that you may have seen on Stack Overflow actually do? And what is the difference between a “hard” and a “soft” reset?

In this video, I demonstrate three situations in which the git reset command can help:

The video demonstrates in more detail, but the three scenarios I discuss are:

1. Throwing away a junk commit. If you’ve just committed to your local repository and now decide that you want to throw that commit away, you can use git reset --hard HEAD~1 to jump the current branch back to the previous commit (that’s what HEAD~1 means – go back one commit from the current position), and reset the working directory back to the state it was in at the time of that commit (that’s what the --hard flag means).

2. Committing too early. If you’ve just committed but realise that you should have changed just one or two things first, you can use git reset HEAD~1 without the --hard flag. That puts the current branch back where it was, but your working directory won’t change at all, so you can make a few final tweaks before re-performing the commit.

3. Committing on the wrong branch. Sometimes you make a commit and then realise that you’d intended to create a new branch for that commit, rather than working directly on the master branch. Getting out of this situation is nice and easy. First, keep your new commit safe by creating a new branch that points at it, for example git branch feature1. At this point we’re still on the master branch, so we can point it back to the previous commit with git reset --hard HEAD~1. Then you can check out your new feature branch and continue working on that.

Note: There is one important caveat with these git reset tips – if you’ve already pushed to the server, things can get tricky. Others may have pulled and built on top of the commits you want to throw away. In these situations it’s usually better to use git revert to create a new commit that undoes the mistake. That’s generally a lot safer.

By the way, the tools I use in the video are posh-git and GitViz.


Azure Functions comes with three levels of authorization. Anonymous means anyone can call your function, Function means only someone with the function key can call it, and Admin means only someone with the admin key can call it.

What this means is that to secure our Azure functions we must pre-share the secret key with the client. Once they have that key they can call the function, passing it as an HTTP header or query string parameter. But this approach is not great for serverless single page apps. We don’t want to entrust the secret to a client app where it might get compromised, and we don’t have an easy way of getting it to the client app anyway.
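
To make that a bit more concrete, here’s roughly what a key-protected HTTP function looks like as a precompiled C# function (the function name here is just an example of mine). With the Function authorization level, callers supply the key either in the code query string parameter or in the x-functions-key header:

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GreetingFunction
{
    [FunctionName("Greeting")]
    public static HttpResponseMessage Run(
        // AuthorizationLevel.Function: only callers presenting the function key
        // (?code=... or an x-functions-key header) can invoke this
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        return req.CreateResponse(HttpStatusCode.OK, "Hello");
    }
}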

Instead, it would be better if the users of our single page application could log in to an authorization server, and for that to issue us with access tokens (preferably JSON web tokens) that we can pass to our functions. They can then check that the access token is valid, and proceed to accept or deny the function request. And this is of course the way OAuth 2.0 works.

However, at the moment there isn’t an easy way to enable verification of access tokens in Azure Functions. You could make your function anonymous and then write the verification yourself, but it’s generally not a good idea to implement your own security.

I wanted to see if there was an easier way.

And it turns out that there is a feature in App Service called “Easy Auth”. Azure Function apps support this by virtue of being built on top of App Service.

Easy Auth is an on-off switch. If you turn it on for your App Service, then every incoming HTTP request must be authorized. If you’re not authorized you’ll get redirected to log in at the authorization server.

Easy Auth supports several identity providers, including Facebook, Google, Twitter, Microsoft and Azure Active Directory. You can only pick one though (however if the one you pick is Azure AD B2C, then that can support additional social identity providers).

The downside of using Easy Auth is that your whole site requires login. You can’t have any pages that can be viewed without providing credentials.

But the benefit of this approach is that it provides a relatively simple way to get things secured. And if we combine it with Azure AD B2C, we can allow users to self sign-up for our application and support things like password resets, two factor auth, email verification and so on. This is a great “serverless” approach to authentication, delegating it to a third-party service rather than managing identity in our own application.

So I set myself the challenge of integrating a simple SPA that calls through to an Azure Functions back-end with AD B2C. I can’t promise this is the only or best way to do this, but here are the steps I took to get it working.

Step 1 – Create an Azure AD B2C Tenant

First of all you’ll need to create an Azure AD B2C tenant. This can be done through the portal, and detailed instructions are available here, so I won’t repeat them. You’ll need to make sure you associate it with a subscription.

Step 2 – Create a Sign Up Or Sign In Policy

Next we need a sign-up or sign-in policy. This allows people to sign in, but also to self-register for your application. If you want to control users yourself then you’d just need a sign-in policy. You can create one of these policies in the portal in the settings for your AD B2C tenant.


The policy lets us select one or more identity providers. Generally you’ll want to enable basic email and password logins, but you can also add Facebook, Twitter, Google etc., so this is a great option if you want to support multiple social logins (learn how to configure them here).


You can specify “sign-up” attributes, which is the information you require from a new user signing up. That might just be their full name and country, but could also include some custom attributes that you define.

You can choose which claims will be included within the access tokens (JWTs), which can make life easier for your application getting hold of useful user info such as their display name without needing to look it up separately.

You can turn on multi-factor authentication, which gives an excellent second level of security for users who have a verified phone number.

And you can also customize the UI of the login page. This is important, because your users will log in at a login.microsoftonline.com page that doesn’t look like it has anything to do with your app by default:


Step 3 – Create an AD B2C Application

Finally you need to create a new application in your AD B2C tenant to represent the application you will be protecting. Give it a name, select that you want to include a web app, and then provide a Reply URL.


The reply URL is a special URL that includes the name of your function app. So if your app is called myfuncapp, the reply URL will be https://myfuncapp.azurewebsites.net/.auth/login/aad/callback

Once you save this application, it will be given an application id (which is a GUID). That’s important as you’ll need it to set up Easy Auth, which is the next step.

Step 4 – Set Up Azure Functions Proxies

The way we’re going to make our single page app magically work with our back-end functions is for both the static content and the functions to be served up by our function app. And we can do that by using function proxies. I actually wrote an article about how to use proxies to serve static web content, but there’s a gotcha that’s worth calling out here. Since we’re proxying web requests to static content and serving up functions from the same function app, we need to make sure that the proxies don’t intercept any calls to /api, which is where the function calls will be routed.

So here’s how I do it. I set up three proxies.

The first has a route template of /, and points at my index.html in blob storage. e.g. https://myapp.blob.core.windows.net/web/index.html

The second has a route template of /css/{*restOfPath} and points at https://myapp.blob.core.windows.net/web/css/{restOfPath}

And the third has a route template of /scripts/{*restOfPath} and points at https://myapp.blob.core.windows.net/web/scripts/{restOfPath}

This way my site can have static content in css and scripts folders and a single index.html file, while the /api route will still go to any other functions I have.

Step 5 – Configure CORS

Our static website will be calling through to the functions, so let’s make sure that CORS is set up. In the Azure Functions CORS settings, add an entry for https://myfuncapp.azurewebsites.net (obviously use the actual URI of your function app, or any custom domain you have pointing at it).

Step 6 – Enable Easy Auth

We enable Easy Auth by going to our Azure Function app settings screen and selecting Authentication/Authorization, and turning App Service Authentication on.

And we’ll also say that when a request is not authenticated, it should log in with Azure Active Directory.


Next we need to set up the Azure Active Directory authentication provider, for which we need to select “advanced” mode. There are two pieces of information we need to provide. The first is the client ID, which is the application ID of the application we created earlier in AD B2C. The second (the issuer URL) is the URL of our sign-up or sign-in policy. This can be found by looking at the properties of the sign-up or sign-in policy in AD B2C.


Once we’ve set that up, any request either to the static web content (through our proxy) or to a function will require us to be logged in. If we try to access the site without being logged in, we’ll end up getting redirected to the login page at login.microsoftonline.com.

Step 7 – Calling the functions from JavaScript

So how can we call the function from JavaScript? Well, it’s pretty straightforward. I’m using the fetch API and the only special thing I needed was to make sure I set the credentials property to include, presumably to make sure that the auth cookies set by AD B2C were included in the request.

fetch(functionUrl, {
    method: 'post',
    credentials: 'include',
    body: JSON.stringify(body),
    headers: new Headers({'Content-Type': 'text/json'})
})
.then(function(resp) {
    if (resp.status === 200) {
        // ...
    }
});

Step 8 – Accessing the JWT in the Azure Function

The next question is how the Azure Function can find out who is calling. Which user is logged in? And can we see what’s inside their JWT?

To answer these questions I put the following simple code in my C# function to examine the Headers of the incoming HttpRequestMessage binding.

foreach(var h in req.Headers)
{
    log.Info($"{h.Key}:{String.Join(",", h.Value)}");
}

This reveals that some special headers are added to the request by Easy Auth. Most notably, X-MS-CLIENT-PRINCIPAL-ID contains the GUID of the logged-in user, and helpfully their user name is also provided. But more importantly, X-MS-TOKEN-AAD-ID-TOKEN contains the (base 64 encoded) JWT itself. To decode this manually you can visit the excellent jwt.io, and you’ll see that the information in it includes all the attributes that you asked for in your sign-up or sign-in policy, which might include things like the user’s email address or a custom claim.

X-MS-CLIENT-PRINCIPAL-NAME:Test User 1
X-MS-CLIENT-PRINCIPAL-ID:7e9be1af-6943-21d6-9ae1-5c78c11ff756
X-MS-CLIENT-PRINCIPAL-IDP:aad
X-MS-TOKEN-AAD-ID-TOKEN:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUz...

Unfortunately, Azure Functions won’t do anything to decode the JWT for you, but I’m sure there are some NuGet packages that can do this, so no need to write that yourself.
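
For example, here’s a rough sketch of my own (not from the official docs) using the System.IdentityModel.Tokens.Jwt NuGet package to read the claims out of that header inside the function. Note that ReadJwtToken only parses the token rather than validating its signature, which is fine here because Easy Auth has already validated it before the request reached us:

using System.Linq;
using System.IdentityModel.Tokens.Jwt;

// grab the (already validated) ID token that Easy Auth passed through
var idToken = req.Headers.GetValues("X-MS-TOKEN-AAD-ID-TOKEN").First();

// parse it and dump out the claims (email, display name, custom attributes etc.)
var jwt = new JwtSecurityTokenHandler().ReadJwtToken(idToken);
foreach (var claim in jwt.Claims)
{
    log.Info($"{claim.Type}: {claim.Value}");
}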

Should I Use This?

I must admit it didn’t feel too “easy” setting all this up. I took a lot of wrong turns before finally getting something working. And I’m not sure how great the end product is. It is a fairly coarse-grained security approach (all web pages and functions require authentication), and having the functions return redirect responses rather than unauthorized status codes feels hacky.

What would be really great is if Azure Functions offered bearer token validation as a first-class authentication option at the function level. I’d like to say that my function is protected by bearer tokens and give it the well-known configuration of my authorization server. Hopefully something along these lines will be added to Azure Functions in the near future.

Of course another option would just be to set your functions to anonymous and examine the Authorization header yourself. You would need to be very careful that you properly validated the token though, making sure the signature is valid and it hasn’t expired. That’s probably reason enough to avoid this option because you can be sure that if you try to implement security yourself, you’ll end up with a vulnerable system.
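
Just to give an idea of what “properly validated” involves, here’s a rough, hedged sketch of the kind of thing you’d have to write yourself using the Microsoft.IdentityModel.Protocols.OpenIdConnect and System.IdentityModel.Tokens.Jwt packages. The metadata URL format, tenant, policy and variable names here are purely illustrative – check the AD B2C documentation for the exact endpoint for your tenant and policy:

using System.Threading.Tasks;
using System.Security.Claims;
using System.IdentityModel.Tokens.Jwt;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;
using Microsoft.IdentityModel.Tokens;

public static class TokenValidator
{
    public static async Task<ClaimsPrincipal> ValidateAsync(string token, string applicationId)
    {
        // download the signing keys from the authorization server's OpenID Connect metadata
        // (illustrative URL - substitute your own tenant and policy)
        var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            "https://login.microsoftonline.com/mytenant.onmicrosoft.com/v2.0/.well-known/openid-configuration?p=B2C_1_SignUpOrSignIn",
            new OpenIdConnectConfigurationRetriever());
        var config = await configManager.GetConfigurationAsync();

        // check issuer, audience, signature and expiry in one go
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = config.Issuer,
            ValidAudience = applicationId,
            IssuerSigningKeys = config.SigningKeys
        };
        SecurityToken validated;
        return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out validated);
    }
}

And even that isn’t the whole story (key rollover, caching the metadata, handling clock skew and so on), which rather proves the point that it’s better to let a dedicated component do this for you.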

Maybe there’s a better way to integrate Azure Functions and AD B2C. Let me know in the comments if there is. Chris Gillum, who’s the expert in this area, has a great two-part article on integrating a SPA with AD B2C (part 1, part 2), although it isn’t explicitly using Azure Functions, so I’m not sure whether all the configuration options shown can be used with Function Apps (that’s an experiment for another day).

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.


Serverless architectures offer many benefits, including not having to manage and maintain servers, the ability to move quickly and rapidly create prototypes, automatic scaling to meet demand, and a pricing model where you pay only for what you use.

But serverless doesn’t dictate what sort of database you should choose. So in this post, I want to consider the database offerings available in Azure, and how well (or not) they might fit into a serverless architecture.

Azure offers us SQL Database (which is essentially SQL Server in the cloud), CosmosDb (which is their NoSQL offering), and Table Storage, which is a very simplistic key-value data store. In fact, Azure is also now offering managed PostgreSQL and MySQL databases as well, but I’ll mostly focus on the first three choices.

No More Server Management

The first thing to point out is that all of these are “serverless” databases in the sense that you are not maintaining the servers yourself. You have no access to the VMs they run on, and you don’t need to install the OS or the database. You don’t need to manage disks to make sure they don’t run out of space. All of that low-level stuff is hidden from you, and these databases are effectively Platform as a Service offerings.

Automatic Scaling

Another classic serverless benefit, automatic scaling, also comes effectively baked into these offerings. Yes, you may have to pay more for higher numbers of “DTUs” (for SQL Database) or “RUs” (for CosmosDb), but you don’t need to worry about how this is achieved under the hood. Whether Azure is using one super powerful server or many smaller servers operating in a cluster is completely transparent to the end user.

Consumption Based Pricing

One serverless selling point that most of these databases don’t offer is consumption-based pricing. Both SQL Database and CosmosDb expect you to reserve a certain amount of “units” of processing power, which you pay for whether or not you use them. SQL Database starts at a very reasonable £4 a month for the smallest database with 5 DTUs and 2GB, whereas CosmosDb requires us to spend at least £17 a month for the minimum of 400 RUs. So although we’re not talking huge amounts, it is something you’d need to factor in, especially if you wanted to make use of many databases for different staging and testing environments.

The exception to this rule is Azure Table Storage. Table Storage charges us for how much data we store (around £0.05 per GB) and for how many transactions we perform (£0.0003 per 10,000 transactions). So for an experimental prototype with a few thousand rows and a few thousand transactions per month, we’re talking well under a penny a month. Of course, the downside of Table Storage is that it is extremely primitive – don’t expect any powerful searching capabilities.

Rapid Prototyping

Another reason why serverless is popular is that it promotes rapid prototyping. With the aid of a Functions as a Service offering like Azure Functions, you can get something up and running very quickly. This means that schemaless databases have the upper hand over traditional relational databases, as they give you the flexibility and freedom to rapidly evolve the schema of your data without the need to run “migrations”.

So CosmosDb and Table Storage might be a more natural fit than SQL when prototyping with serverless. It is also easier to integrate them with Azure Functions, as there are built-in bindings (although I suspect SQL Database bindings aren’t too far behind).
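
As an illustration of how little ceremony those built-in bindings need, here’s a hedged sketch of an HTTP-triggered C# function writing rows straight into Table Storage (the entity, table and function names are just examples of mine):

using System;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.WindowsAzure.Storage.Table;

public class TodoItem : TableEntity
{
    public string Text { get; set; }
}

public static class AddTodo
{
    [FunctionName("AddTodo")]
    public static void Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        // the Table output binding creates the table if needed and inserts whatever we add
        [Table("todos")] ICollector<TodoItem> todos)
    {
        todos.Add(new TodoItem
        {
            PartitionKey = "demo",
            RowKey = Guid.NewGuid().ToString(),
            Text = "hello from a serverless prototype"
        });
    }
}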

Direct Client Access

Finally, many serverless advocates talk about allowing the client application to talk directly to the database. This is quite scary from a security perspective – we’d want to be very sure that the security model is tightly locked down so a user can only see and modify data that they own.

Ideally, in a serverless architecture the database should also be able to send push notifications directly to the client, alerting them to new rows in tables or modifications of data from other users. I believe this is something that Google Firebase can offer.

However, as far as I’m aware, none of these Azure database offerings are particularly designed to be used in this way. So for now, you’d probably use something like Azure Functions as the gateway to the data, instead of letting your clients directly reach into the database.

Which Should I Use?

So we haven’t really got much closer to deciding which of the Azure database offerings you should use in a serverless application. And that’s because, as so often in software development, the answer is “it depends”. It depends on what your priorities are – do you need powerful querying, do you need to easily evolve your schema, is cost the all-important consideration, do you need advanced features like encryption at rest, or point-in-time restore?

Here’s a simple table I put together for a serverless app I was working on recently, to help me decide between the three database options. As you can see, there was no obvious winner. I opted to start with Table Storage, with a plan to migrate to one of the others if the project proved to be a success and needed more powerful querying capabilities.

|                             | SQL Database         | CosmosDb        | Table Storage      |
|-----------------------------|----------------------|-----------------|--------------------|
| Querying                    | *** (very powerful)  | ** (good)       | * (poor)           |
| Schema Changes              | * (difficult)        | *** (trivial)   | ** (easy)          |
| Cost                        | ** (reasonable)      | ** (reasonable) | *** (super cheap)  |
| Azure Functions Integration | * (not built in)     | ** (included)   | ** (included)      |
| Tooling                     | *** (excellent)      | ** (OK)         | ** (OK)            |

Let me know in the comments what database you’re using for your serverless applications, and whether it was a good choice.


First off, full disclosure – I create courses for Pluralsight, so I stand to benefit if you become a customer. But since they are currently running a special promotion, I thought it might be a good time to explain why I am very glad to recommend them to you.

 

1) Seeing is a great way to learn

There are lots of ways to increase your programming skills, including books, blogs, attending conferences and hackathons, but one of the great things with Pluralsight courses is that you get to actually watch while someone uses the technology they are teaching. Sometimes with programming, seeing the finished code isn’t enough – we need to see the route taken to get there, so watching someone working step by step through a demo, explaining what they are doing as they go along is invaluable. Subscribers also get to download the source code for all demos, so you can always try to code it yourself while you watch, which I can recommend as a very effective way to cement what you are learning.

2) The catalogue is huge and diverse

Pluralsight has been around for a long time now, and multiple new courses are released most days. It means that almost whatever technology you want to learn, there will be something there for you. Whether you’re interested in the very latest SPA framework, or even if you’re stuck in legacy land working with something like WinForms, you’ll find there’s a choice of courses available.

3) Learn from recognized industry experts

One of the things that attracted me to Pluralsight in the first place was the quality of authors they had on board: renowned conference speakers like Scott Allen, John Papa, and Dan Wahlin, as well as experts in their specialist domains, like Troy Hunt on security and Julie Lerman on Entity Framework. It’s also a platform on which you discover some fantastic teachers you may not have come across before – a few whose courses I’ve particularly enjoyed recently are Elton Stoneman, Barry Luijbregts, Kevin Dockx, and Wes Higbee.

4) Paying will motivate you

In these days when so much technical content is available for free, it can be very tempting to decide that you can get by without paying anything. And to a certain extent that is possible. But paying for a subscription not only means you get a certain quality bar (which you don’t get when randomly searching for YouTube or blog tutorials), but the very fact you are paying motivates you to learn in a way that free content doesn’t. It’s a bit like paying for piano lessons or gym membership – it adds that little bit of extra incentive to actually do something and get some return on your investment.

5) Learning is fun, and good for your career

Finally, who wouldn’t want to improve their skills and learn new stuff? We have the privilege of working in an extremely interesting and fast-paced industry. By watching even just one course a month, you’ll pick up all kinds of helpful new techniques, which will improve you as a developer, making you more effective in your current role as well as more hireable if you are looking to move on.

So that’s my five reasons. Why not sign up and give it a try while the special offer is still on?