
I am very happy to announce that earlier this week I received the Microsoft MVP Award. It’s a real privilege and honour to be associated with such an awesome group of technical experts who share so much useful information with the community. It’s a little daunting in a way, since whenever I see the MVP logo by someone’s name, I take that as a sign they know what they’re talking about! So I hope I won’t let the team down!


I actually think it’s a great time to be a developer on the Microsoft platform at the moment. I’m a big fan of Azure and F#, the new Docker for Windows capabilities look awesome, and though the launch of .NET Core has not been as smooth as it might have been, the move towards open sourcing everything is a huge step in the right direction. It’s also nice to see C# evolving as well.

Over the years I’ve attempted to contribute to the Microsoft development community in various ways – maintaining this blog (for over 10 years now), creating over 20 open source projects (with NAudio being my greatest hit!), recording 11 Pluralsight courses, giving a bunch of user group talks, and running a YouTube channel.

Sometimes people express surprise at where I find the time to do all this stuff. The answer is that I simply enjoy exploring new programming techniques and sharing what I’ve learned with others. In some ways, even the process of teaching is part of learning – it forces me to go deeper into a topic until I really understand it, and it also leaves behind a permanent record of what I learned so I can Google for it six months down the line when I’ve forgotten it again!

Looking back at the tools and technologies I’ve written about in the past, several have sadly faded away – Silverlight, CodePlex, Mercurial, and IronPython come to mind. But their place has been taken by the likes of Azure, F# and Docker, and of course I’ll continue to write about audio even though it’s not such a big part of my day-to-day programming any more.

Anyway, a big thanks to Microsoft for including me in their MVP program, as well as to everyone who has read and commented on my blog here. And my challenge to you is: how are you going to share what you’ve learned with the community? Whether it’s starting a blog, or just emailing some tips and tricks to the other developers where you work, don’t keep all your knowledge to yourself.


Docker for Windows makes it super easy to get an IIS server up and running (if you’ve not tried Docker for Windows yet, check out my getting started guide). With the following PowerShell commands, we can get an IIS container running, discover its IP address, and launch it in a browser:

docker run -d -p 80 --name datatest1 microsoft/iis:nanoserver
$ip = docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" datatest1
Start-Process -FilePath http://$ip

And we can see that our IIS instance is indeed up and running.

But how can we get our own HTML files into our container? Well, Docker gives us a variety of techniques. Let’s look at four.

Technique 1: Edit in the Container

The first technique is the least practical, but demonstrates a very powerful feature of Docker containers. We are not limited to running just one process in them. So while our datatest1 container is running IIS, we can also run PowerShell in it like this:

docker exec -it datatest1 powershell

This gives us an interactive command prompt inside our container.

So we can create our own index.html file in there:

echo "<html><body><h1>Hello World</h1></body></html>" > "c:\inetpub\wwwroot\index.html"
exit

If we refresh our browser, we can see our edit has worked.

Now clearly this would not be a practical way to construct a website, but it does demonstrate that you can connect into a running container and make any changes you need. This is a technique you might use while experimenting with a container, with a view to scripting your manual changes in a dockerfile (see technique 4 below) later.

Technique 2: Copy into a Container

The second technique is to use the docker cp command. This allows you to copy files from your local machine into a container. So I made a local index.html file which I attempted to copy into my datatest1 container:

docker cp index.html datatest1:c:\inetpub\wwwroot

But this fails with an error saying the file is in use. In fact, I couldn’t manage to copy any file anywhere while the container was running. I don’t know whether this is a limitation with Windows containers, or if there is a way to get this working, but it does at least work while the container is stopped. Unfortunately this will mean the container will also get a new IP address.

docker stop datatest1
docker cp index.html datatest1:c:\inetpub\wwwroot
docker start datatest1
$ip = docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" datatest1
Start-Process -FilePath http://$ip

If instead of copying a single file we want to copy the contents of a whole local folder called site into wwwroot, I couldn’t find the right syntax to do this directly with docker cp, so I ended up changing the local directory before performing the copy:

docker stop datatest1
push-location ./site
docker cp . datatest1:c:/inetpub/wwwroot
pop-location
docker start datatest1
$ip = docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" datatest1
Start-Process -FilePath http://$ip

So while docker cp is a useful command to know, it still isn’t the smoothest experience.

Technique 3: Mount a Volume

This next technique is a really nice feature of Docker. Rather than transferring our data into the container, we can make a folder on our local machine visible inside the container by mounting a volume.

We do this with the -v switch on the docker run command, specifying the local folder we want to mount and the location at which it should appear inside the container.

There were a couple of quirks I ran into. First of all, the local path needs to be absolute, not relative, so I’m using Get-Location to get the current directory. And secondly, you can’t mount a volume on top of an existing folder (at least in Docker for Windows). So we sadly can’t overwrite wwwroot using this technique. But we could mount into a subfolder under wwwroot like this:

docker run -d -p 80 -v "$((Get-Location).Path)\site:c:\inetpub\wwwroot\site" --name datatest2 microsoft/iis:nanoserver

And we can see the results in a browser with a similar technique to before:

$ip = docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" datatest2
Start-Process -FilePath http://$ip/site

Now the great thing is that we can simply modify our local HTML and refresh the browser and our changes are immediately visible.

So volumes are a powerful technique, and they really come into their own when your container needs to store data that must live beyond the lifetime of the container.
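
For example, a named volume outlives any container that uses it. Here’s a minimal sketch (the volume name, mount path and container name are just placeholders for this illustration):

docker volume create mydata
docker run -d -p 80 -v "mydata:c:\data" --name datatest3 microsoft/iis:nanoserver
# even after this container is removed with docker rm -f datatest3,
# the contents of the mydata volume remain and can be mounted into a new container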

But there’s one final technique we need to consider for getting our HTML into our container, and that’s using a dockerfile.

Technique 4: Use a Dockerfile

While volumes are great for data, if you’re planning on deploying your website, you probably do want to bake the HTML into the container image directly. And that can be done by creating your own dockerfile. This is about the simplest sort of dockerfile you can create:

FROM microsoft/iis:nanoserver
COPY site C:/inetpub/wwwroot

All this dockerfile is saying is that our base image is the Microsoft IIS nanoserver image from DockerHub, and then we want to copy the contents of our local site directory into C:/inetpub/wwwroot.

With our dockerfile in place, we need to build an image with the docker build command, giving it a name (I chose datatest4:v1), and then we can create a container from that image with docker run, just as we did before.

docker build -t datatest4:v1 .
docker run -d -p 80 --name datatest4 datatest4:v1
$ip = docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" datatest4
Start-Process -FilePath http://$ip

The great thing about this approach is that now we have an image of our website that we can deploy anywhere.
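
For instance, if you have a Docker Hub account, you could push the image to a registry and run it on any other Windows host (the mydockerid repository name below is just a placeholder):

docker tag datatest4:v1 mydockerid/datatest4:v1
docker push mydockerid/datatest4:v1
# then on another machine:
# docker pull mydockerid/datatest4:v1
# docker run -d -p 80 --name datatest4 mydockerid/datatest4:v1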


One of the most effective ways of learning any new technology is to use it a little bit each day. This way you take in far more than if you simply read tutorials or watch training videos. As an example, by attempting to solve the daily “Advent of Code” challenges in F# in December 2015 and 2016 I made huge strides forwards in my understanding of F#.

So when I came across Wes Bos’ 30 Day JavaScript Challenge, a free 30-day course where you build something in vanilla JavaScript each day, I thought it was worth a go. And it proved to be a great decision.

Each day you get a short video to watch, and the associated Git repository contains both the final solution for each day and a starting point, usually with the HTML and CSS already completed and the JavaScript waiting for you to fill in.

What impressed me about this course was not only what a great teacher Wes is, but also how much fun the examples are. From making a drum kit on day one, through to doing cool stuff with the webcam and speech synthesis, he’s managed to make it a really engaging learning experience from start to finish.

Some of the new techniques I picked up were the newer ES6 language features like the const and let keywords and arrow functions (which work just fine in modern browsers), as well as how to interact with the DOM using things like document.querySelectorAll and addEventListener without falling back to jQuery for everything. There was also some good coverage of the various Array processing methods and helpful tips on debugging in the console.
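
As a rough illustration of the kind of vanilla JavaScript this involves (not code from the course – the .key selector and data-key attribute are just made up for this sketch):

// wire up a click handler on every matching element using const,
// arrow functions and the DOM API rather than jQuery
const keys = document.querySelectorAll('.key');
keys.forEach(key => {
  key.addEventListener('click', e => console.log(`clicked ${e.currentTarget.dataset.key}`));
});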

But actually I probably learned most of all in this course from examining the CSS and HTML, which included techniques like flexbox, transitions and the use of data- attributes. Wes has clearly got a flair for the design side of web development, and the demo apps have got a polished look to them. I certainly picked up a lot of useful techniques.

In summary, I’m really glad I attempted this 30 day challenge. Most days only required 20-30 minutes. And whether or not it’s JavaScript that you want to learn, I’d encourage you to find a learning resource that has you writing code yourself as you progress. That could be one of Wes’ courses, or a Pluralsight course like my More Effective LINQ course which includes programming challenges, or simply attempting problems like Project Euler or Advent of Code in a language of your choice. The number of great learning resources out there is truly incredible, and there’s sure to be something appropriate whatever you want to learn.

Next month I’m hoping to run through some of the Docker exercises at Katacoda to help reinforce my knowledge of the Docker command line. Let me know in the comments below what you’re going to do.


The Todo-Backend project showcases the use of various languages and programming frameworks to implement a simple backend for a todo list application. There are many community-contributed implementations, which makes it a great way to see what’s involved in setting up a basic web API with each technology.

I noticed that there was no Azure Functions implementation in the list, and since Azure Functions is an ideal platform to rapidly prototype and host a simple web API, I thought I’d give it a try.

One really nice feature of the Todo-Backend project is that it has an online test runner which you simply need to point at your own web API and then keep implementing endpoints until all the tests pass.

Spoiler alert: you’ll end up creating a GET all and GET by id, a POST to create a new todo item, a PATCH to modify one, and a DELETE all and DELETE by id.

With Azure Functions, it’s possible to create one single function that responds to multiple HTTP verbs, but it’s a better separation of concerns to have a separate function per use case, much like you’d have a separate method in an ASP.NET MVC controller for each of these actions.

To keep things interesting, I decided I’d use a mix of languages, and implement two functions in C#, two in JavaScript and two in F#.

Creating a Function App

There are several ways to create a function app. I created a new resource group in the portal, and added a function app using the consumption plan. I then used the Azure Functions CLI tooling locally to create a new function app with func init. This initialises the current folder with the necessary files, including a host.json file and an appSettings.json file for local settings, and gives you a Git repository complete with a .gitignore file configured for Azure Function Apps.
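
If you want to follow along, the basic CLI steps look something like this (a sketch only – the folder name is arbitrary, and the exact prompts vary between versions of the tooling):

mkdir func-todo-backend
cd func-todo-backend
func init
# then, for each function:
func new   # choose a language and the HTTP trigger template, and give the function a name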

Function 1: GET All Todo Items (JavaScript)

The first function I needed to create was a get all todos method. I used func new to create a new function, chose JavaScript as the language, and selected an HTTP-triggered function.

There are a few things we’ll need to do in all the function.json files for our functions. The first is to set the authentication level to anonymous. This todo API is just a simple demo and so has no concept of users.

Second, we need to specify the allowed methods for this function. In this case it’s simply GET.

Third, we need to specify the route. Normally an HTTP-triggered function derives its route from the function name, but we’re going to have several functions all sharing the /api/todos endpoint. So for this function, we just need to specify todos/ (the api prefix is automatically included).

Here’s the part of the function.json file describing the HTTP input binding for our get all todos function:

"bindings": [
{
  "authLevel": "anonymous",
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "methods": [
    "GET"
  ],
  "route": "todos/"
},

We also need to pick somewhere to store our todos. Now I did blog a while ago about how you could cheat and use memory caches with Azure Functions for very rough and ready prototyping, but I wanted a proper backing store, and Azure Table Storage is an ideal choice because it’s cheap, flexible, and we already have an Azure Storage account with every Function App anyway.

So my get all todos method also has an input binding which will take up to 50 rows from the todos table and return them in the todostable function parameter. Here’s the part of function.json for the table storage input binding:

{
  "type": "table",
  "name": "todostable",
  "tableName": "todos",
  "take": 50,
  "connection": "AzureWebJobsStorage",
  "direction": "in"
}

Now onto our function itself, and JavaScript really is a great language choice for this as the input data is almost in the ideal form for the HTTP response straight away. I decided to hide the PartitionKey and RowKey fields, but apart from that we pass straight through:

module.exports = function (context, req, todostable) {
    context.log("Retrieved todos:", todostable);
    todostable.forEach(t => { delete t.PartitionKey; delete t.RowKey; });
    res = {
        status: 200,
        body: todostable 
    }; 
    context.done(null, res);
};

One important thing before this will work: we must set up CORS if we want to use the Todo-Backend test client and spec runner against our function app. We can do this in the portal, where we simply need to add http://todobackend.com to the list of allowed origins.

I also set up a daily usage quota for my function app. This means that if any malicious user attempts to put me out of pocket by hammering my function app with requests, it will simply stop running until the next day, which is absolutely fine for this function app.

So that gets us past the first test for the Todo-Backend API. To pass the next test we need to support POST.

Function 2: POST a New Todo Item (F#)

For my POST function, I decided to use F#. The function bindings are similar to the GET function, except now we only allow the POST method, and we are using an output binding to table storage instead of an input binding.

"bindings": [
  {
    "authLevel": "anonymous",
    "name": "req",
    "type": "httpTrigger",
    "direction": "in",
    "methods": [
      "POST"
    ],
    "route": "todos/"
  },
  {
    "name": "res",
    "type": "http",
    "direction": "out"
  },
  {
    "type": "table",
    "name": "todosTable",
    "tableName": "todos",
    "connection": "AzureWebJobsStorage",
    "direction": "out"
  }
],

I used func new to give me a templated HTTP-triggered F# function, which provides a good starting point for a function that uses Newtonsoft.Json for serialization. Our function needs to deserialize the body of the request, give the new todo item an id (I used a Guid), save it to table storage, and then serialize the todo item back to JSON for the response.

I declared the following Todo type, which serves the dual purpose of being the JSON payload for the API and the Table Storage entity (which needs a RowKey and PartitionKey). In the world of F# an optional integer would normally be expressed as an int option, but that doesn’t play nicely with Newtonsoft.Json serialization, so I switched to a regular Nullable<int>.

type Todo = {
    [<JsonIgnore>]
    PartitionKey: string;
    [<JsonIgnore>]
    RowKey: string;
    id: string;
    title: string;
    url: string;
    order: System.Nullable<int>;
    completed: bool
}

The templated function uses an async workflow which is a syntax I’m still not completely at home with, but with the exception of needing a special helper to await a regular Task (instead of a Task<T>), it wasn’t too hard.

The rows in our table use a hardcoded PartitionKey of TODO. It could use a user id if it was storing todos for many people. I also decided for simplicity to create the url for each todo item at creation time and store it in table storage, but arguably that should be done at retrieval time.

Here’s the Run method for our POST todo item function:

let Run(req: HttpRequestMessage, log: TraceWriter, todosTable: IAsyncCollector<Todo>) =
    async {        
        let! data = req.Content.ReadAsStringAsync() |> Async.AwaitTask
        log.Info(sprintf "Got a task: %s" data)
        let todo = JsonConvert.DeserializeObject<Todo>(data)
        let newId =  Guid.NewGuid().ToString("N")
        let newUrl = req.RequestUri.GetLeftPart(UriPartial.Path).TrimEnd('/') + "/" + newId;
        let tableEntity = { todo with PartitionKey="TODO"; RowKey=newId; id=newId; url=newUrl }
        let awaitTask = Async.AwaitIAsyncResult >> Async.Ignore 
        do! todosTable.AddAsync(tableEntity) |> awaitTask
        log.Info(sprintf "Table entity %A." tableEntity)
        let respJson = JsonConvert.SerializeObject(tableEntity);
        let resp = new HttpResponseMessage(HttpStatusCode.OK)

        resp.Content <- new StringContent(respJson)
        return resp
    } |> Async.RunSynchronously

So that means we’ve passed two out of the 16 tests. The next test is to DELETE all todos.

Function 3: DELETE all Todo Items (C#)

For our third function, we need to respond to a DELETE method by deleting all Todo items! This isn’t something that’s straightforward to do with the Azure Functions table storage bindings at the moment, but this article from Anthony Chu pointed me in the right direction.

We create a Table Storage input binding in the same way we did for our GET function, and in the Run method signature, we use a CloudTable object to bind to the table.

Unfortunately, there is no method on CloudTable that deletes all rows from the table. We could delete the whole table and recreate it, and that might be the quickest way, but I opted to perform a simple query to get all Todos, and loop through and delete them individually. Obviously this assumes there are only a small number of items in the table, and I’m pretty sure that the TableQuery has a max number of rows it returns anyway.
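
For completeness, the delete-and-recreate alternative would look something like the sketch below. One caveat: Table Storage can take a while to release a deleted table’s name, so an immediate recreate may fail and need retrying.

// alternative: drop and recreate the whole table instead of deleting rows one by one
todosTable.DeleteIfExists();
// recreating straight away can return a 409 conflict while the delete completes,
// so in practice this needs a retry loop or a delay
todosTable.CreateIfNotExists();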

Here’s the code for my delete function:

class Todo : TableEntity
{
    public string title { get; set; }
}

public static HttpResponseMessage Run(HttpRequestMessage req, CloudTable todosTable, TraceWriter log)
{
    log.Info("Request delete all todos");
    var allTodos = todosTable.ExecuteQuery<Todo>(new TableQuery<Todo>())
                    .ToList();
    foreach(var todo in allTodos) {
        log.Info($"Deleting {todo.RowKey} {todo.title}");
        var operation = TableOperation.Delete(todo);
        todosTable.Execute(operation);
    }

    return req.CreateResponse(HttpStatusCode.OK);
}

Now at this stage we pass a few more of the 16 tests, but we fail because when the url we set up for each todo is called, there’s nothing listening. We need a get by id function next.

Function 4: GET Todo Item by Id (JavaScript)

We’re back to JavaScript for this function, and there’s a slight difference in our routes now, as we expect an id, so the route for this function is todos/{id}. We’re also going to use a table input binding again, but this time, instead of getting all rows, we’re going to say that the partitionKey must be TODO and the rowKey must be the id we were passed in the URL. So the function binding for table storage looks like this:

{
  "type": "table",
  "name": "todo",
  "tableName": "todos",
  "partitionKey": "TODO",
  "rowKey": "{id}",
  "take": 50,
  "connection": "AzureWebJobsStorage",
  "direction": "in"
}

The implementation is pretty simple. If the row was found we return a 200 after hiding the row and partition keys, otherwise we return a 404 not found.

module.exports = function (context, req, todo) {
    context.log("Retrieving todo", req.params.id, todo);
    if (todo) {
        delete todo.RowKey;
        delete todo.PartitionKey; 
        res = {
            status: 200,
            body: todo 
        }; 

    }
    else {
        res = {
            status: 404,
            body: todo 
        }; 

    }
    context.done(null, res);
};

Next up we need to support modifying an existing todo with the PATCH method.

Function 5: PATCH Todo Item (C#)

The PATCH function borrows techniques we saw with the POST and GET by id functions. Like GET by id, we’ll use a route of todos/{id} and a Table Storage input binding set up to find that specific row. We’ll bind to that as a custom strongly typed Todo object.

And like the POST method, our HTTP request and response both contain JSON and we need to write to table storage.

So I read the input JSON as a JObject, and we will support modification of the title, order and completed properties only. Once we’ve patched our Todo object, we can update it in table storage with a TableOperation.Replace operation.

I then serialize an anonymous object to the output which is an easy way of hiding the row and partition keys.

Here’s the code for my patch todo item function:

#r "Microsoft.WindowsAzure.Storage"
#r "Newtonsoft.Json"

using Microsoft.WindowsAzure.Storage.Table;
using System.Net;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class Todo: TableEntity
{
    public string id { get; set; }
    public string title { get; set; }
    public string url { get; set; }
    public int? order { get; set; }
    public bool completed { get; set; }
}

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, string id, Todo todo, CloudTable todoTable, TraceWriter log)
{
    log.Info($"Patching {id}");
    if (todo == null) return new HttpResponseMessage(HttpStatusCode.NotFound);

    var patch = await req.Content.ReadAsAsync<JObject>();
    log.Info($"Patching with id={patch["id"]}|title={patch["title"]}|url={patch["url"]}|order={patch["order"]}|completed={patch["completed"]}|");
    if (patch["title"] != null)
        todo.title = (string)patch["title"];
    if (patch["order"] != null)
        todo.order = (int)patch["order"];
    if (patch["completed"] != null)
        todo.completed = (bool)patch["completed"];
    //todo.ETag = "*";
    var operation = TableOperation.Replace(todo);
    await todoTable.ExecuteAsync(operation);

    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    
    var json = JsonConvert.SerializeObject(new { todo.id, todo.title, todo.order, todo.completed, todo.url });
    resp.Content = new StringContent(json);
    return resp;
}

We’re almost there! Just need to support deleting an individual todo item.

Function 6: DELETE Todo Item by Id (F#)

So finally we need to support deleting todo items by Id and we’ll use F# for this method. Once again we’ll need a route of todos/{id} and we’re using a table storage input binding of a CloudTable. This will allow us to use the TableOperation.Delete operation. We do need to specify an ETag of * for this to work. The operation doesn’t care if the item to be deleted doesn’t exist so this method always returns a 204 no content.

Here’s the Run method:

let Run(req: HttpRequestMessage, id: string, todosTable: CloudTable, log: TraceWriter) =
    async {
        log.Info(sprintf "Request delete of todo %s." id)
        let todo = TableEntity("TODO", id)
        todo.ETag <- "*"
        let operation = TableOperation.Delete(todo)
        let awaitTask = Async.AwaitIAsyncResult >> Async.Ignore 
        do! todosTable.ExecuteAsync(operation) |> awaitTask;
        // returns success even if TODO doesn't exist
        return req.CreateResponse(HttpStatusCode.NoContent);
    } |> Async.RunSynchronously

Testing it Out

It’s quite easy to test your function app either locally or in the cloud. If you want to test locally, then you’ll need to provide a connection string for the local functions host to use. You can put this into your appSettings.json file by calling:

func settings add AzureWebJobsStorage "DefaultEndpointsProtocol=https;AccountName=<YOURSTORAGEAPP>;AccountKey=<YOURACCOUNTKEY>"

And then you can just use func host start to start the runtime, and try calling your API with Postman.
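
For example, once the host is running you can exercise the API straight from PowerShell instead of Postman (assuming the functions host is listening on its default port of 7071):

func host start
# in another window:
Invoke-RestMethod -Method Post -Uri "http://localhost:7071/api/todos" `
    -ContentType "application/json" -Body '{"title":"walk the dog"}'
Invoke-RestMethod -Method Get -Uri "http://localhost:7071/api/todos"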

But it’s also really easy to test in the cloud. If you’ve created a Function App, then you can set up Git continuous deployment, which is in my opinion the easiest way to deploy Function Apps. I’ve got all the code for this example hosted on GitHub, so I simply had to point at my public GitHub repo.

Once you’ve done this, simply pushing to your repository allows you to go live (you can choose another branch if you don’t want every change to master to go live).

This means I can run the unit tests directly against my Azure Functions app simply by visiting this link. If all is well, we’ll see 16 tests passing.

One thing this does highlight is how slow Azure Functions cold starts can be. Often, after leaving my app dormant for a while, it takes 20-30 seconds before the first API call responds.

But once it’s warmed up, the test suite completes in a few seconds. Hopefully the Azure Functions team will continue to work on improving cold start times, but in the meantime one trick is to use a scheduled function to keep your app warm, if your use case requires it.
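
A minimal keep-warm function needs nothing more than a timer trigger binding in its function.json, something like this (the five-minute schedule is just an example); the function body itself can simply log a message and exit:

{
  "bindings": [
    {
      "type": "timerTrigger",
      "name": "keepAliveTimer",
      "schedule": "0 */5 * * * *",
      "direction": "in"
    }
  ]
}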

Summary

In this example we’ve seen how easy it is to create an Azure Function App that implements a simple API. We can use a separate function for each HTTP method and use custom routes to present the API endpoints we want. We can mix and match different programming languages, and Table Storage offers a cheap and easy way to persist data.

This whole API took only a couple of hours to implement, and the great thing is that it will cost very little to run. In fact, it’s very likely that all the usage will fall within the generous monthly free grant of 1 million executions and 400,000 GB-s that you get with Azure Functions.

If you want to see the full code and bindings for all six functions, you can find it here on my GitHub account: https://github.com/markheath/func-todo-backend

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.


In my Azure Functions Fundamentals Pluralsight course, I focused on deploying functions using Git. In many ways this should be thought of as the main way to deploy your Azure Functions apps because it really is so straightforward to use.

Having said that, there are sometimes reasons why you might not want to take this approach. For example, your Git repository may contain source code for several different applications, and while there are ways to help Kudu understand which folder contains your functions, it may feel like overkill to clone loads of code unrelated to the deployment.

Another reason is that you might want to have a stricter separation of pushing code to Git and deploying it. Of course you can already do this when deploying with Git – you can have a separate remote that you push to for deployment, or even configure a certain branch to be the one that is monitored. But nevertheless you might want to protect yourself from a developer issuing the wrong Git command and accidentally sending some experimental code live.

So in this post, I want to show how we can use PowerShell to call the Kudu REST API to deploy your function app on demand by pushing a zip file.

First of all, we want to create a zip file containing our function app. To help me do that, I’ve created a simple helper function that zips up the contents of our functions folder with a few exclusions specified:

Function ZipAzureFunction(
    [Parameter(Mandatory = $true)]
    [String]$functionPath,
    [Parameter(Mandatory = $true)]
    [String]$outputPath
)
{
  $excluded = @(".vscode", ".gitignore", "appsettings.json", "secrets")
  $include = Get-ChildItem $functionPath -Exclude $excluded
  Compress-Archive -Path $include -Update -DestinationPath $outputPath
}

And now we can use this to zip up our function:

$functionAppName = "MyFunction"
$outFolder = ".\deployassets"
New-Item -ItemType Directory -Path $outFolder -Force
$deployzip = "$outFolder\$functionAppName.zip"

If (Test-Path $deployzip) {
    Remove-Item $deployzip # delete if already exists
}

ZipAzureFunction "..\funcs" $deployzip

Next, we need to get hold of the credentials to deploy our app. Now you could simply download the publish profile from the Azure portal and extract the username and password from that. But you can also use Azure Resource Manager PowerShell commands to get them. In order to do this, we do need to sign into Azure, which you can do like this:

# sign in to Azure
Login-AzureRmAccount

# find out the current selected subscription
Get-AzureRmSubscription | Select SubscriptionName, SubscriptionId

# select a particular subscription
Select-AzureRmSubscription -SubscriptionName "My Subscription"

Note that this does prompt you to enter your credentials, so if you want to use this unattended, you would need to set up a service principal instead or just use the credentials from the downloaded publish profile file.
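
If you do go down the service principal route, the unattended sign-in looks something like this (a sketch – the application id, tenant id and key are placeholders for values from your own service principal):

$appId = "<SERVICE-PRINCIPAL-APP-ID>"
$tenantId = "<TENANT-ID>"
$secret = ConvertTo-SecureString "<SERVICE-PRINCIPAL-KEY>" -AsPlainText -Force
$spCred = New-Object System.Management.Automation.PSCredential ($appId, $secret)
Login-AzureRmAccount -ServicePrincipal -Credential $spCred -TenantId $tenantId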

But having done this, we can now get hold of the username and password needed to call Kudu:

$resourceGroup = "MyResourceGroup"
$functionAppName = "MyFunctionApp"
$creds = Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroup -ResourceType Microsoft.Web/sites/config `
            -ResourceName $functionAppName/publishingcredentials -Action list -ApiVersion 2015-08-01 -Force

$username = $creds.Properties.PublishingUserName
$password = $creds.Properties.PublishingPassword

Now we have the deployment credentials, and the zip file to deploy. The next step is to actually call the Kudu REST API to upload our zip. We can do that using this helper function:

Function DeployAzureFunction(
    [Parameter(Mandatory = $true)]
    [String]$username,
    [Parameter(Mandatory = $true)]
    [String]$password,
    [Parameter(Mandatory = $true)]
    [String]$functionAppName,
    [Parameter(Mandatory = $true)]
    [String]$zipFilePath    
)
{
  $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,$password)))
  $apiUrl = "https://$functionAppName.scm.azurewebsites.net/api/zip/site/wwwroot"
  Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method PUT -InFile $zipFilePath -ContentType "multipart/form-data"
}

Which we can then easily call:

DeployAzureFunction $username $password $functionAppName $deployzip

This works great, but there is one caveat to bear in mind. It won’t delete any existing files in the site/wwwroot folder. It simply unzips the file and overwrites what’s already there. Normally this is fine, but if you had deleted a function so it wasn’t in your zip file, the version already uploaded would remain in place and stay active.

There are a couple of options here. One is to use the VFS part of the Kudu API to specifically delete a single function. Unfortunately, it won’t let you delete a folder with its contents, so you have to recurse through and delete each file individually before deleting the folder. Here’s a function I made to do that:

Function DeleteAzureFunction(
    [Parameter(Mandatory = $true)]
    [String]$username,
    [Parameter(Mandatory = $true)]
    [String]$password,
    [Parameter(Mandatory = $true)]
    [String]$functionAppName,
    [Parameter(Mandatory = $true)]
    [String]$functionName
)
{
  $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,$password)))
  $apiUrl = "https://$functionAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$functionName/"
  
  $files = Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method GET
  $files | foreach { 
    
    $fname = $_.name
    Write-Host "Deleting $fname"
    # don't know how to get the etag, so tell it to ignore by using If-Match header
    Invoke-RestMethod -Uri $apiUrl/$fname -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo); "If-Match"="*"} -Method DELETE
  }

  # might need a delay before here as it can think the directory still contains some data
  Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method DELETE
}

It sort of works, but is a bit painful due to the need to recurse through all the contents of the function folder.

Another approach I found here is to make use of the Kudu command API and instruct it to delete either our whole function app folder at site/wwwroot, or a specific function as I show in this example:

Function DeleteAzureFunction2(
    [Parameter(Mandatory = $true)]
    [String]$username,
    [Parameter(Mandatory = $true)]
    [String]$password,
    [Parameter(Mandatory = $true)]
    [String]$functionAppName,
    [Parameter(Mandatory = $true)]
    [String]$functionName
)
{
  $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,$password)))
  $apiUrl = "https://$functionAppName.scm.azurewebsites.net/api/command"
  
  $commandBody = @{
    command = "rm -d -r $functionName"
    dir = "site\\wwwroot"
  }

  Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method POST `
        -ContentType "application/json" -Body (ConvertTo-Json $commandBody) | Out-Null
}

This is nicer as it’s just one REST method, and you could use it to clear out the whole wwwroot folder if you wanted a completely clean start before deploying your new zip.
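
For example, to remove a stale function (the function name here is hypothetical) and then push a fresh zip using the helpers defined above:

DeleteAzureFunction2 $username $password $functionAppName "MyOldFunction"
DeployAzureFunction $username $password $functionAppName $deployzip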

The bottom line is that Azure Functions gives you loads of deployment options, so there’s bound to be something that meets your requirements. Have a read of this article by Justin Yoo for a summary of the main options at your disposal.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.