Several years ago I did a Yahtzee kata in Python which was a nice simple problem to help me pick up a few new techniques. I thought I’d give the same thing a try in F# as I’ve not done quite as much F# as I’d like in recent months, and I want to get back into the swing of things ready for this year’s Advent of Code (last year’s videos available here)!

In general I followed similar approaches to the ones I used in Python, although in F# it’s easier to work with lists than tuples for my dice and test cases, since tuples are not actually enumerable sequences. One distinct advantage F# had was how easy it was to use partial application of functions to build my strategies; the disadvantage is that, unlike Python, there seems to be no easy way to get the name of a function, so I created a simple tuple structure of name and strategy function to make my test output readable.

First of all I needed a few helper functions. highestRepeated was perhaps the most fiddly to create in F#. This function looks for any dice that are repeated at least minRepeats times and if there’s more than one, it needs to tell us which the highest value is. So if you rolled [2;2;2;5;5] then it should return 2 if minRepeats is 3 and 5 if minRepeats is 2.

Here’s what I came up with, although I feel there must be a way to simplify it a bit. I made several versions, but all of them were of similar complexity. The fact that List.max needs at least one element to work doesn’t help.

let highestRepeated dice minRepeats =
    let repeats = dice |> List.countBy id |> List.filter (fun (_,n) -> n >= minRepeats) |> List.map fst 
    match repeats with | [] -> 0 | _ -> List.max repeats
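
One way to simplify it might be to fold over the counts with a default of 0, which sidesteps the List.max empty-list problem entirely (just a sketch of one alternative, and not obviously nicer):

let highestRepeated dice minRepeats =
    dice
    |> List.countBy id
    |> List.fold (fun best (value, count) ->
        if count >= minRepeats && value > best then value else best) 0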

The ofAKind function uses highestRepeated to implement the 2/3/4 of a kind scoring strategies, which we can generate by partially applying this function (we’ll see that in a minute).

let ofAKind n dice =
    n * highestRepeated dice n

Next up is sumOfSingle, which sums all the dice with the specified value. I’m finally getting used to using operators such as equals (=) as functions in F#.

let sumOfSingle selected dice =
    dice |> Seq.filter ((=) selected) |> Seq.sum

I also made a helper function to score high and low straights, which is nice and easy since lists can be compared directly for equality: if I pass [1;2;3;4;5] as target, it can be compared against the sorted list of dice.

let straight target score dice =   
    if List.sort dice = target then score else 0

And finally I needed to test for Yahtzee itself – all five dice the same value. This was another one that I think could perhaps be made a little more succinct, but here’s what I came up with:

let yahtzee dice =
    if Seq.length dice = 5 && Seq.length (Seq.distinct dice) = 1 then 50 else 0

Now we have all the pieces to build our list of scoring strategies, which are tuples of a name and a function. Notice how we can partially apply sumOfSingle and ofAKind to cut down on declaring additional functions like I did in Python. It means that the F# solution is around a dozen lines shorter than the Python one.
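
The list ends up looking something like this (a sketch, reconstructed to match the names used in the test cases below):

let Ones = ("Ones", sumOfSingle 1)
let Twos = ("Twos", sumOfSingle 2)
let Threes = ("Threes", sumOfSingle 3)
let Fours = ("Fours", sumOfSingle 4)
let Fives = ("Fives", sumOfSingle 5)
let Sixes = ("Sixes", sumOfSingle 6)
let Pair = ("Pair", ofAKind 2)
let ThreeOfAKind = ("ThreeOfAKind", ofAKind 3)
let FourOfAKind = ("FourOfAKind", ofAKind 4)
let SmallStraight = ("SmallStraight", straight [1;2;3;4;5] 15)
let LargeStraight = ("LargeStraight", straight [2;3;4;5;6] 20)
let Yahtzee = ("Yahtzee", yahtzee)
let Chance = ("Chance", List.sum)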

The final piece was the suite of Unit Tests, which were copied over from my Python solution, although F# type inference flagged up that I had a few untested functions, so I added some more test cases:

let testCases = [
        ([1;2;3;4;5], 1, Ones)
        ([1;2;3;4;5], 2, Twos)
        ([3;2;3;4;3], 9, Threes)
        ([3;2;3;4;3], 4, Fours)
        ([5;5;5;4;3], 15, Fives)
        ([3;2;3;4;3], 0, Sixes)
        ([1;2;3;4;5], 0, Pair) // no pairs found
        ([1;5;3;4;5], 10, Pair) // one pair found
        ([2;2;6;6;4], 12, Pair) // picks highest
        ([2;3;1;3;3], 6, Pair) // only counts two
        ([2;2;6;6;6], 18, ThreeOfAKind) 
        ([2;2;4;6;6], 0, ThreeOfAKind) // no threes found
        ([5;5;5;5;5], 15, ThreeOfAKind) // only counts three
        ([6;2;6;6;6], 24, FourOfAKind) 
        ([2;6;4;6;6], 0, FourOfAKind) // no fours found
        ([5;5;5;5;5], 20, FourOfAKind) // only counts four
        ([1;2;5;4;3], 15, SmallStraight)
        ([1;2;5;1;3], 0, SmallStraight)
        ([6;2;5;4;3], 20, LargeStraight)
        ([1;2;5;1;3], 0, LargeStraight)
        ([5;5;5;5;5], 50, Yahtzee)
        ([1;5;5;5;5], 0, Yahtzee) 
        ([1;2;3;4;5], 15, Chance)
    ]

Now one thing I really need to do is get to grips with one of the F# unit testing frameworks. For this code, which I developed in LinqPad, I just created my own simple test case runner, but I’d be interested to hear recommendations for what I should be using.

let runTest (dice, expected, (name, strategy)) =
    let score = strategy dice
    let message = sprintf "testing with %s on %A" name dice
    (expected = score), message

let runAllTests =
    let results = testCases |> List.map runTest 
    results |> List.iter (fun (s,m) -> printfn "%s %s" (if s then "PASS" else "FAIL") m)
    printfn "ran %d test cases" (List.length testCases)

The full source code is available as a GitHub Gist and as always, I welcome any feedback in the comments on how I can improve my code.


I’ve been having lots of fun recently kicking the tyres of Azure Functions, and one of the ideas I wanted to try out was to see if I could schedule tweets to be sent. That way I could periodically tweet links to the best of nearly 10 years of content here on my blog, or link to my Pluralsight courses.

Now of course there are already services out there like Buffer and Edgar that do this for you, but for a price. But like all good developers I relish the challenge of re-inventing the wheel, and thanks to the generous free grant with Azure Functions, I’ll be able to get my own poor man’s tweet scheduler up and running without paying a penny!

So how does it work?

Well, Azure Functions supports scheduled tasks, so I could pick a certain time every day or every few days and randomly select a link to share. But I wanted my tweets to go out at random times, and preferably go live during waking hours in the US, which is where the bulk of my Twitter followers are from. That appears not to be possible with a simple cron expression, so I decided to use two functions.

Scheduling the Tweets

The first function runs daily at midnight (using a Timer trigger with a cron expression of 0 0 0 * * *) and its job is to randomly pick a tweet to send, and a time to send it.

How does it get the tweet? Well, I use a SaaS file binding for that. I can connect my Azure function to OneDrive, and set it up to read my list of tweets from a text file with a specified path.

And how does it send a scheduled tweet? Well, for that I decided to send a message to a queue, but delayed by a certain amount of time. I decided I’d take 15:00 UTC, which I think is roughly when the USA starts work, and then add a random number of minutes that keeps the tweet before about 23:00 UTC, to give my European readers a fighting chance of seeing it before going to bed. Unfortunately, the built-in Azure Functions Storage Queue output binding only gives us access to the CloudQueueMessage, which doesn’t let us schedule a time. So I opted to simply write the code myself to connect to the queue and send the message with a delay.

Let’s look at the code for this first function. First of all, here’s the bindings section of my function.json file:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 0 * * *"
    },
    {
      "type": "apiHubFile",
      "name": "tweetFile",
      "path": "AzureFunctions/tweets.txt",
      "connection": "onedrive_ONEDRIVE",
      "direction": "in"
    }
  ],
  "disabled": false
}


As you can see, the timer contains the cron expression, and the OneDrive connection, which is of type “apiHubFile”, uses the “onedrive_ONEDRIVE” connection that you can set up to connect to your OneDrive (or Dropbox / Google Drive if you prefer) in the portal by clicking “new” and authorizing your Azure Functions app to connect:


Also there’s the path, which is hardcoded to a simple text file in my OneDrive containing a list of tweets.

And what about the code? Well, here’s the timer function:

#r "Microsoft.WindowsAzure.Storage" 
using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static async Task Run(TimerInfo myTimer, string tweetFile, TraceWriter log)
{
    log.Info($"Tweet Scheduler Fired {DateTime.Now}, {myTimer.Schedule}, {myTimer.ScheduleStatus}, {myTimer.IsPastDue}");

    // the OneDrive file arrives as a single string; split it into one tweet per line
    var tweets = tweetFile.Split("\n\r".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
    var random = new Random();
    var tweet = tweets[random.Next(tweets.Length)]; // Next's upper bound is exclusive

    // schedule for a random time between 15:00 and 23:00 UTC
    var now = DateTime.UtcNow;
    var scheduled = new DateTime(now.Year, now.Month, now.Day, 15, 0, 0);
    scheduled = scheduled.AddMinutes(random.Next(60 * 8));
    if (scheduled < now) scheduled = scheduled.AddDays(1);

    await SendScheduled("tweets", tweet, scheduled, log);
    log.Info($"Scheduled {tweet} for: {scheduled}");
}
It’s not too complicated, except that to use the Azure storage classes you need a special reference as described here. The contents of my tweet file just come straight in as a string which I need to split into lines. And I created a custom SendScheduled function to actually perform the delayed sending of the message:

private static async Task SendScheduled(string queueName, string messageContent, DateTime scheduledTimeUtc, TraceWriter log)
{
    var connectionString = ConfigurationManager.AppSettings["AzureWebJobsStorage"];
    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var queueClient = storageAccount.CreateCloudQueueClient();
    var queue = queueClient.GetQueueReference(queueName);
    var message = new CloudQueueMessage(messageContent);
    var delay = scheduledTimeUtc - DateTime.UtcNow;
    if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;
    log.Info($"Delay is {delay}");
    // the initialVisibilityDelay parameter keeps the message invisible until the scheduled time
    await queue.AddMessageAsync(message, null, delay, null, null);
}

Sending the Tweets

The next piece of the puzzle is listening to that queue and actually sending the tweets. The easy part is setting up a new function to listen on the queue.

The slightly harder part was sending the tweet. I decided to use the TweetInvi NuGet package to send the tweets. This is pretty easy to use once you’ve set up the necessary access keys. There are full instructions at the TweetInvi site, but the basic gist of it is that you need to go to apps.twitter.com and set up a new app. I called mine “Serverless Twit”:


Inside that app there are two sets of tokens and secrets you need to set up, but once you’ve done that, we can put them in our Function App settings, and use them with the TweetInvi library like this:

using System;
using System.Configuration;
using Tweetinvi;
using Tweetinvi.Core.Extensions;
using Tweetinvi.Core.Parameters;

public static void Run(string myTweet, TraceWriter log)
{
    log.Info($"Need to tweet: {myTweet}");

    var consumerKey = ConfigurationManager.AppSettings["TwitterConsumerKey"];
    var consumerSecret = ConfigurationManager.AppSettings["TwitterConsumerSecret"];
    var accessToken = ConfigurationManager.AppSettings["TwitterAccessToken"];
    var accessTokenSecret = ConfigurationManager.AppSettings["TwitterAccessTokenSecret"];
    Auth.SetUserCredentials(consumerKey, consumerSecret, accessToken, accessTokenSecret);

    var twitterLength = myTweet.TweetLength();
    if (twitterLength > 140)
        log.Warning($"Tweet too long {twitterLength}");

    var publishedTweet = Tweet.PublishTweet(myTweet);
    // by default TweetInvi doesn't throw exceptions: https://github.com/linvi/tweetinvi/wiki/Exception-Handling
    if (publishedTweet == null)
        log.Error("Failed to publish");
    else
        log.Info($"Published tweet {publishedTweet.Id}");
}

There’s one more thing we need to do, and that’s tell Azure Functions where to find the TweetInvi assemblies. This is done by creating a project.json file in our function folder (same place as the function.json) and adding a dependency on the version of the NuGet package we want:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "TweetinviAPI": "1.1.1"
      }
    }
  }
}

And that’s all there is to it! Now all my tens of twitter followers who are actually real people will have the delight of seeing a daily link to some random thing I’ve written or built in the past. Hopefully I won’t annoy them all into unfollowing me!


A few years back I created Skype Voice Changer Pro which I sold online, using Paddle as my payment provider. Whenever I make a sale (which isn’t too often these days thanks to an issue with recent versions of Skype), I get notified via a webhook. On receiving that webhook, I need to generate a license file and email it to the customer.

Azure Functions are perfect for this scenario. I can quickly create a secure webhook to handle the callback from Paddle, post a message onto a queue to trigger license generation, and then another queue to trigger sending an email.

Let’s see how we can set this up.

First of all, I need to create a new Azure Function, which I’ll create as a generic C# webhook:


The first thing I needed to do for my webhook was edit the function.json file to remove the “webHookType” setting from the input httpTrigger. By default this will be set to “genericJson”, but that means we can only accept webhooks with JSON in their body. Paddle’s webhook comes in as x-www-form-urlencoded content, so removing the webHookType setting allows us to receive the HTTP request.
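
After that change, the bindings section of my function.json looks roughly like this (a sketch; the portal’s generated defaults may differ slightly):

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ],
  "disabled": false
}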


Now in our run.csx file we can use ReadAsFormDataAsync to get access to the form parameters.

Next, we need to validate the order. Azure Functions has built-in webhook validation for GitHub and Slack, but not for Paddle, so we must do this ourselves. The validation uses a shared secret, which we can set in the App Service configuration and access through ConfigurationManager, just as you would in a regular web app.

If the order is valid, for now let’s just respond saying thank you. Paddle will include this text in their confirmation email to the customer.

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    var formData = await req.Content.ReadAsFormDataAsync();

    var orderId = formData["p_order_id"];
    var customerEmail = formData["customer_email"];
    var messageId = formData["message_id"];
    var customerName = formData["customer_name"];

    log.Info($"Received {orderId}");

    var sharedSecret = ConfigurationManager.ConnectionStrings["PaddleSharedSecret"].ConnectionString;
    if (!ValidateOrder(sharedSecret, customerEmail, messageId, log))
    {
        log.Info("Failed to Validate!");
        return req.CreateResponse(HttpStatusCode.Forbidden, new {
            error = "Invalid message id"
        });
    }

    return req.CreateResponse(HttpStatusCode.OK, new {
        greeting = $"Thank you for order {orderId}!"
    });
}

And here’s the C# code to validate a Paddle webhook:

public static bool ValidateOrder(string sharedSecret, string customerEmail, string messageId, TraceWriter log)
{
    if (customerEmail == null || messageId == null)
    {
        log.Warning("Missing email or message id");
        return false;
    }

    var input = HttpUtility.UrlEncode(customerEmail + sharedSecret);

    var md5 = System.Security.Cryptography.MD5.Create();
    byte[] inputBytes = Encoding.ASCII.GetBytes(input);
    byte[] hash = md5.ComputeHash(inputBytes);

    var sb = new StringBuilder();
    foreach (byte t in hash)
        sb.Append(t.ToString("x2")); // build a lowercase hex string from the hash
    var expectedId = sb.ToString();
    var success = (expectedId == messageId);
    if (!success)
        log.Warning($"Expected {expectedId}, got {messageId}");
    return success;
}

Now the only thing left to do is to trigger the license generation and email, which we’ll do by posting a message to a queue. This is preferable to doing everything there and then in the webhook, as queues allow our webhook to respond quickly and give us retries if the email service is temporarily down. Breaking the process down into three small loosely coupled pieces will also give us maintainability and testability benefits.

We can send a message to a queue by going to the “Integrate” section in the portal and adding a new output binding of type Azure Storage Queue. This is the easiest option to set up, as there’s already a storage account associated with your function app that you can use, called “AzureWebJobsStorage” (although arguably you should create your own, to keep your application data separate from the Azure Functions runtime’s data which resides in that storage account).


I’ll call my queue “orders”, and Azure Functions will automatically create it for me.
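
The resulting output binding in function.json looks something like this (a sketch of the v1 storage queue binding format):

{
  "type": "queue",
  "direction": "out",
  "name": "outputQueueItem",
  "queueName": "orders",
  "connection": "AzureWebJobsStorage"
}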


To send the message to the queue there are a number of options, but I chose to create a strongly typed class, “OrderInfo”, and bind it to an IAsyncCollector<T> parameter. This has the advantage of working with async functions (which mine is), and it also supports sending zero or more messages to the queue, which is handy since we won’t be generating a license if the webhook is invalid.

Here are the key bits of the updated function:

public class OrderInfo
{
    public string OrderId { get; set; }
    public string CustomerEmail { get; set; }
    public string CustomerName { get; set; }
    public string LicenseDownloadCode { get; set; }
}

public static async Task<object> Run(HttpRequestMessage req, IAsyncCollector<OrderInfo> outputQueueItem, TraceWriter log)
{
    // ... order validation here

    // send on to the queue to generate license
    var orderInfo = new OrderInfo {
        OrderId = orderId,
        CustomerEmail = customerEmail,
        CustomerName = customerName,
        LicenseDownloadCode = licenceDownloadCode
    };
    await outputQueueItem.AddAsync(orderInfo);

    return req.CreateResponse(HttpStatusCode.OK, new {
        greeting = $"Thank you for order {orderId}!"
    });
}
As you can see it’s super easy to send the message, just call AddAsync on the collector.

Finally, we need to handle messages in the queue. There’s a super feature in the portal where if you go to the Integrate tab for your function and select the queue output binding, there’s a button to set up a new function that is triggered by messages on that queue:


Clicking this auto-fills the bindings, giving me a new function all set up to read off the queue and log each message received:
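
The generated function is only a few lines; something like this (a sketch, with the default string parameter swapped for the OrderInfo class so the queue binding deserializes the JSON message for us):

public static void Run(OrderInfo myQueueItem, TraceWriter log)
{
    // license generation will eventually go here; for now just prove the message arrived
    log.Info($"Generating license for order: {myQueueItem.OrderId}");
}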



Now it’s just a case of putting my license generation code into this function, as well as posting to another queue to trigger a third function which sends out the license email. Azure functions includes a built-in SendGrid binding which makes sending emails very easy (although I’m currently using a different service).
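
For reference, the SendGrid output binding in function.json looks roughly like this (a sketch from the v1 binding as I understand it; “SendGridApiKey” would be the name of an app setting holding your API key, and the from address is made up):

{
  "type": "sendGrid",
  "direction": "out",
  "name": "message",
  "apiKey": "SendGridApiKey",
  "from": "orders@example.com",
  "subject": "Your license file"
}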

We can easily test our function using Postman (can’t use the portal in this case as it only sends JSON), and sure enough the webhook function is successful, and we can see in the logs for the license generation function that a message was indeed posted to the queue.



Using Azure Functions to handle webhooks is a big improvement on the quick and dirty code I originally created, which simply did everything synchronously in a hidden API sat on my website. That meant my order webhook code was coupled to the web server, which got in the way of me doing things like switching the website over to WordPress. With Azure Functions I can move this webhook (and several others, for things like letting users report errors from the app) out of my website into small, loosely coupled functions.


The great thing about “serverless” code is that you don’t need to worry about servers at all. If my function gets invoked 10 times, all 10 invocations might run on the same server, or they might run on 10 different servers. I don’t need to know or care.

But suppose every time my function runs I need to look something up in a database. I might decide that it would be nice to temporarily cache the response in memory so that subsequent runs of my function can run a bit faster (assuming they run on the same server as the previous invocation).

Is that possible in Azure Functions? I did a bit of experimenting to see how it could be done.

To keep things simple, I decided to make a C# webhook function that counted how many times it had been called. And I counted in four ways. First, using a static int variable. Second, using the default MemoryCache. Third, using a text file in the home directory. Fourth, using a per-machine text file in the home directory. Let’s see what happens with each of these methods.

1. Static Integer

If you declare a static variable in your run.csx file, then the contents of that variable are available to all invocations of your function running on the same server. So if our function looks like this:

static int invocationCount = 0;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook triggered {++invocationCount}");
    return ...;
}

And we call it a few times, then we’ll see the invocation count steadily rising. Obviously this code is not thread-safe, but it shows that the memory persists between invocations on the same server.

Unsurprisingly, every time you edit your function, the count will reset. But you’ll notice it resets at other times too. There’s no guarantee that what you store in a static variable will be present on the next invocation. But it’s absolutely fine for temporarily caching something to speed up function execution.

2. MemoryCache

The next thing I wanted to try was sharing memory between two different functions in the same function app. This would allow you to share a cache between functions. To try this out I decided to use MemoryCache.Default.

using System.Runtime.Caching; // may also need #r "System.Runtime.Caching" at the top of run.csx

static MemoryCache memoryCache = MemoryCache.Default;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    var cacheObject = memoryCache["cachedCount"];
    var cachedCount = (cacheObject == null) ? 0 : (int)cacheObject;
    // re-save the incremented count with a five minute absolute expiry
    memoryCache.Set("cachedCount", ++cachedCount, DateTimeOffset.Now.AddMinutes(5));

    log.Info($"Webhook triggered memory count {cachedCount}");
    return ...
}

Here we try to find the count in the cache, increment it, and save it with a five minute expiry. If we copy this same code to two functions within the same Azure Function App, then sure enough they each can see the count set by the other one.

Again, this cache will lose its contents every time you edit your code, but it’s nice to know you can share in-memory data between two functions running on the same server.

3. On Disk Shared Across All Servers

Azure Function Apps have a %HOME% directory on disk which is actually a network share. If we write something into that folder, then all instances of our functions, whatever server they are running on, can access it. Let’s put a text file in there containing the invocation count. Here’s a simple helper method I made to do that:

private static int IncrementInvocationCountFile(string fileName)
{
    var folder = Environment.ExpandEnvironmentVariables(@"%HOME%\data\MyFunctionAppData");
    var fullPath = Path.Combine(folder, fileName);
    Directory.CreateDirectory(folder); // no-op if it already exists
    var persistedCount = 0;
    if (File.Exists(fullPath))
        persistedCount = int.Parse(File.ReadAllText(fullPath));
    File.WriteAllText(fullPath, (++persistedCount).ToString());
    return persistedCount;
}

We can call it like this:

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    var persistedCount = IncrementInvocationCountFile("invocations.txt");
    log.Info($"Webhook triggered {persistedCount}");
    return ...;
}

Obviously this too isn’t thread-safe, since multiple instances of our function could be reading and writing the same file at once, but the key here is that anything in this folder is visible to all instances of our function, even across different servers (although it was several days before I saw my test function actually run on a different server). And unlike the in-memory counter, this won’t be lost if your function restarts for any reason.

4. Per Machine File

What if you want to use disk storage for temporary caching, but only per machine? Well, each server does have a local disk, and you can write data there by writing to the %TEMP% folder. This gives you temporary storage that persists on the same server between invocations of functions in the same function app. But unlike things you put in %HOME%, which the Azure Functions framework won’t delete, things you put in %TEMP% should be thought of as transient. The temp folder is probably best used for storing data needed during a single function invocation.
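
As a sketch, using the local disk just means basing the path on the temp folder instead of %HOME% (GetLocalCachePath is a hypothetical helper, named for illustration):

private static string GetLocalCachePath(string fileName)
{
    // Path.GetTempPath() resolves to the server's local %TEMP% folder
    var folder = Path.Combine(Path.GetTempPath(), "MyFunctionAppCache");
    Directory.CreateDirectory(folder); // no-op if it already exists
    return Path.Combine(folder, fileName);
}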

For my experiment I decided to use System.Environment.MachineName as part of the filename, so each server would maintain its own invocation count file in the %HOME% folder.

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    var machineCount = IncrementInvocationCountFile(System.Environment.MachineName + ".txt");
    log.Info($"Webhook triggered {machineCount}");
    return ...;
}

And so now I can use Kudu to look in my data folder and see how many different machines my function has run on.

Should I do this?

So you can use disk or memory to share state between Azure Functions. But does that mean you should?

Well, first of all you must consider thread safety. Multiple instances of your function could be running at the same time, so with the techniques above you’d risk file access exceptions, and you’d need to protect the static int variable from concurrent access (the MemoryCache example is already thread-safe).

And secondly, be aware of the limitations. Anything stored in memory can be lost at any time. So only use it for temporarily caching things to improve performance. By contrast anything stored in the %HOME% folder will persist across invocations, restarts and different servers. But it’s a network share. You’re not really storing it “locally”, so it’s not all that different from just putting the data you want to share in blob storage or a database.

One of my favourite things about the new Azure Functions service is how easy it is to quickly prototype application ideas. Without needing to provision or pay for a web server, I can quickly set up a webhook, a message queue processor or a scheduled task.

But what if I’m prototyping a web-page and want a simple REST-like CRUD API? I decided to see if I could build a very simple in-memory database using nodejs and Azure Functions.

Step 0 – Create a function app

It’s really easy to get started with Azure Functions, but if you’ve not yet used them, you can sign in to functions.azure.com and you’ll be offered the opportunity to open an existing function app or create a new one:


It’s just a matter of choosing which region you want to host it in. By default this will create a function app on the Dynamic pricing tier, which is what you want since you only pay for what you use, and there’s a generous free monthly grant, so for most prototyping purposes it’s likely to be completely free.

Step 1 – Create a new NodeJS Function

From within the portal, select “New Function”, and choose the “Generic Webhook – Node” option.


Give your function a name relating to the resource you’ll be managing, as it will be part of the URL:


Step 2 – Enable more HTTP Verbs

In the portal, under your new function, select the “Integrate” tab, and then choose “Advanced Editor” to let you edit the function.json file directly.


You’ll want to add a new “methods” property to the “httpTrigger” binding, containing all the verbs you want to support. I’ve added get, post, put, patch and delete.
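
The trigger binding ends up looking something like this (a sketch; your generated file may include a few more properties):

{
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "methods": [ "get", "post", "put", "patch", "delete" ]
}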


Step 3 – Add the In-Memory CRUD JavaScript File to your Function

I’ve made a simple in-memory database supporting CRUD in node.js. I’m not really much of a JavaScript programmer or a REST expert, so I’m sure there’s a lot of scope for improvement (let me know in the comments), but here’s what I came up with:
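
Here’s a condensed sketch of the idea so you can see the shape of it (the full version lives in the GitHub Gist mentioned at the end of this post):

// inMemoryCrud.js (condensed sketch)
var items = [];
var nextId = 1;

// optionally give the API some initial data
exports.seedData = function (seedItems) {
    seedItems.forEach(function (item) {
        item.id = nextId++;
        items.push(item);
    });
};

function findIndex(id) {
    for (var n = 0; n < items.length; n++) {
        // loose equality: ids arrive as strings on the query string
        if (items[n].id == id) return n;
    }
    return -1;
}

function isMissingFields(body, requiredFields) {
    return (requiredFields || []).some(function (f) { return !body || !(f in body); });
}

exports.handleRequest = function (context, req, requiredFields) {
    var id = req.query.id;
    var index = id ? findIndex(id) : -1;
    var res = { status: 200, body: null };

    switch (req.method) {
        case "GET": // all items, or a single item by id
            if (!id) res.body = items;
            else if (index >= 0) res.body = items[index];
            else res.status = 404;
            break;
        case "POST": // insert a new item
            if (isMissingFields(req.body, requiredFields)) { res.status = 400; break; }
            req.body.id = nextId++;
            items.push(req.body);
            res.status = 201;
            res.body = req.body;
            break;
        case "PUT": // replace an existing item by id
            if (index < 0) { res.status = 404; break; }
            if (isMissingFields(req.body, requiredFields)) { res.status = 400; break; }
            req.body.id = items[index].id;
            items[index] = req.body;
            res.body = req.body;
            break;
        case "PATCH": // partial update: copy over just the supplied fields
            if (index < 0) { res.status = 404; break; }
            Object.keys(req.body || {}).forEach(function (key) {
                if (key !== "id") items[index][key] = req.body[key];
            });
            res.body = items[index];
            break;
        case "DELETE": // remove an item by id
            if (index < 0) { res.status = 404; break; }
            items.splice(index, 1);
            break;
        default:
            res.status = 405;
    }
    context.res = res;
    context.done();
};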


To get this into your function you have a few options. The easiest is to hit “View Files” on the “Develop” tab and drag and drop my inMemoryCrud.js file straight in. Or you can just create a new file and paste the contents in.


If you look through you’ll see it supports GET of all items or by id, inserting with POST and deleting with DELETE, as well as replacing by id using PUT and even partial updates with PATCH. It optionally lets you specify required fields for POST and PUT, and there’s a seedData method for you to give your API some initial data if needed.

Obviously, since this is an in-memory database there are a few caveats. It will reset every time your function restarts, which will happen whenever you edit the code for your function, but can happen at other times too. Also if there are two servers running your function, they would both have their own in-memory databases, but Azure functions is unlikely to scale up to two instances for your function app unless you are putting it under heavy load.

Step 4 – Update index.js

Our index.js file contains the entry point for the function. All we need to do is import our in-memory CRUD JavaScript file, seed it with any starting data we want, and then, when our function is called, pass the request off to handleRequest, optionally specifying any required fields.

Here’s an example index.js:
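
Something along these lines, assuming the seedData and handleRequest functions from the sketch above (the seed data itself is made up):

var crud = require('./inMemoryCrud');

// give ourselves a couple of tasks to play with
crud.seedData([
    { title: 'Learn Azure Functions', completed: true },
    { title: 'Build a prototype web API', completed: false }
]);

module.exports = function (context, req) {
    // insist on a title field for POST and PUT
    crud.handleRequest(context, req, ['title']);
};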

Step 5 – Enabling CORS

If you’re calling the function from a web-page then you’ll need to enable CORS. Thankfully this is very easy to do, although it is configured at the function app level rather than the function level. In the portal, select function app settings and then choose Configure CORS.



In there, you’ll see that there are some allowed origins by default, but you can easily add your own here, as I’ve done for localhost:


Step 6 – Calling the API

The Azure function portal tells you the URL of your function which is based on your function app name and function name. It looks like this:


The code in the query string is Azure Functions’ way of securing the function, but for a prototype you may not want to bother with it. You can turn it off by setting authLevel to anonymous in the function.json file for the httpTrigger binding.
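
That just means adding one property to the httpTrigger binding:

"authLevel": "anonymous"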


The other slight annoyance is that Azure Functions doesn’t support us using nice URLs like https://myfunctionapp.azurewebsites.net/api/Tasks/123 to GET or PUT a specific id. Instead we must supply the id in the query string: https://myfunctionapp.azurewebsites.net/api/Tasks?id=123

Is there a quicker way to set this up?

If that seems like quite a bit of work, remember that all an Azure function is, is a folder with a function.json file and some source files. So you can simply copy and paste the three files from my github gist into as many folders as you want and hey presto you have in-memory web-apis for as many resources as you need – just by copying and renaming folders – e.g. orders, products, users etc. If you’re deploying via git (which you should be), this is very easy to do.

What if I want persistence?

Obviously in-memory is great for simple tests, but at some point you’ll want persistence. There are several options here, including Azure DocumentDb and Table Storage, but this post has got long enough, so I’ll save that for a future blog post.