The Microsoft.Azure.Storage.Blob NuGet package makes it really easy to work with Azure Blobs in .NET. Recently I was troubleshooting some performance issues with copying very large blobs between containers, and discovered that we were not copying blobs in the optimal way.

To copy files with the Azure Blob storage SDK, you first get references to the source and destination blobs like this:

var storageAccount = CloudStorageAccount.Parse(connectionString);
var blobClient = storageAccount.CreateCloudBlobClient();

// details of our source file
var sourceContainerName = "source";
var sourceFilePath = "folder/test.zip";

// details of where we want to copy to
var destContainerName = "dest";
var destFilePath = "somewhere/test.zip";

var sourceContainer = blobClient.GetContainerReference(sourceContainerName);
var destContainer = blobClient.GetContainerReference(destContainerName);

CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference(sourceFilePath);
CloudBlockBlob destBlob = destContainer.GetBlockBlobReference(destFilePath);

At this point, you might be tempted to copy the blob with code like this:

using (var sourceStream = await sourceBlob.OpenReadAsync())
using (var destStream = await destBlob.OpenWriteAsync())
{
    await sourceStream.CopyToAsync(destStream);
}

Or maybe with the convenient UploadFromStreamAsync method:

using (var sourceStream = await sourceBlob.OpenReadAsync())
{
    await destBlob.UploadFromStreamAsync(sourceStream);
}

However, what you are doing in both those examples is downloading the entire contents of the source blob and re-uploading them to the target blob. This was taking about 20 minutes for the files I was using.

Copying the quick way

Let's see how to copy the blob the quick way:

await destBlob.StartCopyAsync(sourceBlob);

Not only is it trivially simple, but it completes almost instantaneously. And that's because when you're copying a blob within a storage account, the underlying platform doesn't need to make a new copy - it can just update a reference internally.

The name of the method (StartCopyAsync) might make you feel a bit nervous. It implies that the method can return before the copy has actually completed. And that can indeed happen when you're copying between storage accounts.

Copying between storage accounts

To copy between storage accounts, you still use the StartCopyAsync method, but pass the Uri of the source blob. The documentation is a bit sparse, but here's how I was able to get it to work.

Notice in this example that we need a separate CloudStorageAccount and CloudBlobClient from the ones used for the source file. We then create a read-access SAS token for the source blob that lasts long enough (here, two hours) for the copy to take place. And finally, after calling StartCopyAsync, we need to keep track of the copy's progress by checking the CopyState of the destination blob (refreshing it with FetchAttributesAsync).

// create the blob client for the destination storage account
var destStorageAccount = CloudStorageAccount.Parse(destConnectionString);
var destClient = destStorageAccount.CreateCloudBlobClient();

// destination container now uses the destination blob client
destContainer = destClient.GetContainerReference(destContainerName);

// create a 2 hour SAS token for the source file
var sas = sourceBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy() {
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessStartTime=DateTimeOffset.Now.AddMinutes(-5),
    SharedAccessExpiryTime=DateTimeOffset.Now.AddHours(2)
});

// copy to the destination blob, using the SAS token to access the source
destBlob = destContainer.GetBlockBlobReference(destFilePath);
var sourceUri = new Uri(sourceBlob.Uri + sas);
await destBlob.StartCopyAsync(sourceUri);

// copy may not be finished at this point, check on the status of the copy
while (destBlob.CopyState.Status == CopyStatus.Pending)
{
    await Task.Delay(1000);
    await destBlob.FetchAttributesAsync();
}

if (destBlob.CopyState.Status != CopyStatus.Success)
{
    throw new InvalidOperationException($"Copy failed: {destBlob.CopyState.Status}");
}

If you do need to cancel the copy for any reason, you can get hold of the CopyId from the blob's CopyState and pass that to the AbortCopyAsync method on the CloudBlockBlob.
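
For example, here's a minimal sketch of aborting a copy that's still in progress (assuming destBlob is the destination blob from the example above):

// refresh CopyState, then abort if the copy is still pending
await destBlob.FetchAttributesAsync();
if (destBlob.CopyState.Status == CopyStatus.Pending)
{
    await destBlob.AbortCopyAsync(destBlob.CopyState.CopyId);
}

Be aware that an aborted copy leaves the destination blob behind with a length of zero, so you may also want to delete it afterwards.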

Uploading local files

Obviously, if you're copying into blob storage from local files, you can't use StartCopyAsync, but there is a convenient UploadFromFileAsync method for exactly that:

await destBlob.UploadFromFileAsync("mylocalfile.zip");

Unfortunately, at the time of writing, the current version of the blob storage SDK (10.0.0.3) is susceptible to out of memory exceptions when uploading huge files. Hopefully that will get resolved soon.
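
In the meantime, a workaround that may reduce memory pressure (these are settings I'd try, not a confirmed fix for the SDK issue) is to use a smaller block size and limit how many blocks are uploaded in parallel:

destBlob.StreamWriteSizeInBytes = 4 * 1024 * 1024; // upload in 4MB blocks
var options = new BlobRequestOptions
{
    // always use chunked block upload rather than a single PUT
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
    // fewer concurrent block uploads means fewer buffers in memory
    ParallelOperationThreadCount = 2
};
await destBlob.UploadFromFileAsync("mylocalfile.zip", null, options, null);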



Last year I created a demo Azure Functions project that implemented a simple CRUD REST API for managing TODO list items.

The project showed off various Azure Functions bindings by implementing the REST API against four different backing stores:

  • In-memory
  • Azure Table Storage
  • Azure Blob Storage (a JSON file per TODO item)
  • Cosmos DB

You can read about how I implemented each of these in my original post here.

But the one backing store I never got round to implementing was Entity Framework Core. And that was because Azure Functions doesn't really offer any binding support for working with SQL databases.

However, with the recent release of dependency injection support for Azure Functions, working with EF Core in Azure Functions feels a bit nicer, so in this post, I'll explain how I updated my sample app to support an Azure SQL Database as a backing store.

Project references

The first step was to reference the EF Core SQL Server NuGet package. As Jeff Hollan explains in this very helpful article, it's important to select the correct version number that's compatible with the version of .NET you are running on.

I've also referenced the Microsoft.Azure.Functions.Extensions NuGet package, which gives us access to the dependency injection feature.

<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.2.3" />
<PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" />

Database schema and model

My database schema is very simple. Here's the SQL to create the table I used for testing. I didn't create an EF Core migration, but Jeff's article shows how to do that if you want.

CREATE TABLE [dbo].[Todos]
(
    [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY, 
    [TaskDescription] NVARCHAR(50) NOT NULL, 
    [IsCompleted] BIT NOT NULL, 
    [CreatedTime] DATETIME NOT NULL
)

I also updated my local.settings.json to include a connection string I could use for local testing purposes:

"SqlConnectionString": "Data Source=(LocalDB)\\MSSQLLocalDB;Integrated Security=true;Database=Todos"

And I created an EF Core model for my TODO item entity and a custom DbContext:

public class TodoEf
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public DateTime CreatedTime { get; set; } = DateTime.UtcNow;
    public string TaskDescription { get; set; }
    public bool IsCompleted { get; set; }
}

public class TodoContext : DbContext
{
    public TodoContext(DbContextOptions<TodoContext> options)
        : base(options)
    { }

    public DbSet<TodoEf> Todos { get; set; }
}

Initialize dependency injection

To set up dependency injection for our function app, we use the FunctionsStartup attribute on the assembly to indicate a startup class that will run when the function app starts. In that class, which inherits from FunctionsStartup, we override the Configure method. This allows us to retrieve the SQL connection string from configuration and register a DbContext in the services, which will allow us to inject the TodoContext into our functions.

[assembly: FunctionsStartup(typeof(AzureFunctionsTodo.Startup))]

namespace AzureFunctionsTodo
{
    class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            string connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");
            builder.Services.AddDbContext<TodoContext>(
                options => SqlServerDbContextOptionsExtensions.UseSqlServer(options, connectionString));
        }
    }
}

Injecting DbContext into a function

Dependency injection now offers us the ability to define our functions in classes which have their dependencies injected into their constructor. So here I have defined a TodoApiEntityFramework class whose constructor takes a TodoContext that can be used by the functions we'll define shortly.

public class TodoApiEntityFramework
{
    private const string Route = "eftodo";
    private readonly TodoContext todoContext;

    public TodoApiEntityFramework(TodoContext todoContext)
    {
        this.todoContext = todoContext;
    }

    // ... functions defined here
}

Get all Todo items

Now we're ready to implement each of the five methods on our Todo API. The first gets all Todo items. As you can see, it's very similar to a regular Azure Function definition, with the exception that it's not a static method. It uses an HttpTrigger, and simply uses the injected TodoContext to retrieve all the Todo items.

[FunctionName("EntityFramework_GetTodos")]
public async Task<IActionResult> GetTodos(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = Route)]
    HttpRequest req, ILogger log)
{
    log.LogInformation("Getting todo list items");
    var todos = await todoContext.Todos.ToListAsync();
    return new OkObjectResult(todos);
}

Get Todo item by id

Getting a Todo item by id is also very straightforward to implement, by calling FindAsync on our Todos DbSet:

[FunctionName("EntityFramework_GetTodoById")]
public async Task<IActionResult> GetTodoById(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = Route + "/{id}")]
    HttpRequest req, ILogger log, string id)
{
    log.LogInformation("Getting todo item by id");
    var todo = await todoContext.Todos.FindAsync(Guid.Parse(id));
    if (todo == null)
    {
        log.LogInformation($"Item {id} not found");
        return new NotFoundResult();
    }
    return new OkObjectResult(todo);
}
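
One thing to watch out for: Guid.Parse will throw for a malformed id, which surfaces as a 500 rather than a 404. If you'd prefer to treat bad ids as not found, a small tweak (my own suggestion, not from the original sample) would be:

if (!Guid.TryParse(id, out var todoId))
{
    log.LogInformation($"Item {id} is not a valid id");
    return new NotFoundResult();
}
var todo = await todoContext.Todos.FindAsync(todoId);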

Create a Todo item

Here's my function to create a new Todo item. We just need to add the new item to our DbSet and save changes.

[FunctionName("EntityFramework_CreateTodo")]
public async Task<IActionResult> CreateTodo(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = Route)]
    HttpRequest req, ILogger log)
{
    log.LogInformation("Creating a new todo list item");
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var input = JsonConvert.DeserializeObject<TodoCreateModel>(requestBody);
    var todo = new TodoEf { TaskDescription = input.TaskDescription };
    await todoContext.Todos.AddAsync(todo);
    await todoContext.SaveChangesAsync();
    return new OkObjectResult(todo);
}
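
For reference, TodoCreateModel and TodoUpdateModel come from the original sample project; here's a minimal sketch of what they might look like, based on how they're used in these functions:

public class TodoCreateModel
{
    public string TaskDescription { get; set; }
}

public class TodoUpdateModel
{
    public string TaskDescription { get; set; }
    public bool IsCompleted { get; set; }
}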

Update Todo item

Our update function is a little more involved as you can optionally update the task description:

[FunctionName("EntityFramework_UpdateTodo")]
public async Task<IActionResult> UpdateTodo(
    [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = Route + "/{id}")]
    HttpRequest req, ILogger log, string id)
{
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var updated = JsonConvert.DeserializeObject<TodoUpdateModel>(requestBody);
    var todo = await todoContext.Todos.FindAsync(Guid.Parse(id));
    if (todo == null)
    {
        log.LogWarning($"Item {id} not found");
        return new NotFoundResult();
    }

    todo.IsCompleted = updated.IsCompleted;
    if (!string.IsNullOrEmpty(updated.TaskDescription))
    {
        todo.TaskDescription = updated.TaskDescription;
    }

    await todoContext.SaveChangesAsync();

    return new OkObjectResult(todo);
}

Delete Todo item

And here's how we can delete a Todo item.

[FunctionName("EntityFramework_DeleteTodo")]
public async Task<IActionResult> DeleteTodo(
    [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = Route + "/{id}")]
    HttpRequest req, ILogger log, string id)
{
    var todo = await todoContext.Todos.FindAsync(Guid.Parse(id));
    if (todo == null)
    {
        log.LogWarning($"Item {id} not found");
        return new NotFoundResult();
    }

    todoContext.Todos.Remove(todo);
    await todoContext.SaveChangesAsync();
    return new OkResult();
}
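
To try the API out, you can run the function host locally and hit the eftodo route; for example (assuming the functions host's default local port of 7071):

curl http://localhost:7071/api/eftodo
curl -X POST http://localhost:7071/api/eftodo -d "{\"taskDescription\":\"buy milk\"}"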

Summary

The new dependency injection feature of Azure Functions makes it very simple to work with Entity Framework Core database contexts within an Azure Functions app, even though there is no explicit EF Core binding for Azure Functions. I'm also hopeful that when EF 6.3 becomes available, it will make it much easier to port legacy EF 6 code to run in Azure Functions.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions.


Suppose you have an application that consists of a "front-end" website and a back-end web API. You want your end users to be able to access the front-end website, but the front-end should be the only part that's publicly accessible. The back-end should be locked down so it is only callable from the front-end.


This is a fairly standard architecture, and is quite easy to achieve in Azure with traditional VMs in a Virtual Network, or with AKS. You simply expose public endpoints only for the front-end services you want to make available.

Unfortunately, if you're using App Service, with your front-end and back-end services hosted as Web Apps, there hasn't been an easy way to do this until recently. That's because there is no way for a Web App hosted on App Service to join a Virtual Network, unless you choose the "ASE" (App Service Environment) pricing tier, which is prohibitively expensive for many scenarios.

Restricting access to Web Apps

Now of course, you can (and should) protect your back-end API by requiring all callers to provide credentials, and encrypt the traffic with TLS. But we'd like to go further than that. Ideally, even if credentials to call the back-end API were leaked, they still shouldn't be usable by any attacker on the internet - we'd like to accept only traffic originating from trusted locations.

App Service does help us out a bit here. As well as securing our endpoints in the usual way, we can add Access Restrictions to our Web Apps. This way we can whitelist the IP addresses that are allowed to make incoming requests.

This is great for your front-end web app if you happen to know the exact IP addresses that your customer will use. That way you can prevent all and sundry from visiting your site and trying to brute force the login screen.
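
For example, with a recent version of the Azure CLI you can add an allow rule for a known customer IP range (the app name and address range here are hypothetical):

az webapp config access-restriction add -g MyResourceGroup -n my-frontend-app `
    --rule-name CustomerOffice --action Allow `
    --ip-address 203.0.113.0/24 --priority 200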

But if the back-end API is intended to be called only by the front-end web app, then we have to whitelist all possible outbound IP addresses of the App Service plan that the front-end web app is hosted on. And those IP addresses are not exclusive to our front-end web app: other apps running on App Service in the same data center could attempt to access our back-end API from the same addresses.

Service Endpoints and new VNet integration to the rescue!

The good news is that by using a mixture of "Service Endpoints" alongside the new Virtual Network integration feature for App Service, you can completely lock down access to your back-end APIs, and only allow incoming traffic from the web apps that should be calling them.

Essentially we can force all outbound requests to the back-end service from the front-end service to flow through a Virtual Network. And then we can configure the back-end service to only accept traffic from that Virtual Network.


And how do we make the outbound requests go through the Virtual Network, rather than taking some other route? That's what the "Service Endpoints" do. Essentially, they set up a custom route that all traffic for a particular Azure service (in our case App Service) will follow.

Demo overview

In the rest of this post, I'll show the steps to set this all up. Here's a quick overview of the key steps:

  1. Create a Virtual Network
  2. Create a delegated subnet, and enable a service endpoint for App Service
  3. Create an App Service Plan and front-end and back-end web apps
  4. Add an access restriction for the back-end web app to only allow traffic from the subnet
  5. Connect the front-end web app to the VNet
  6. (Optional) Connect the back-end web app to the VNet

Step 1 - Create the virtual network

First we need to create a Virtual Network. I'll create a resource group to put everything in:

$resourceGroup = "VNetTest"
$location = "westeurope"
az group create -n $resourceGroup -l $location

And create a new VNet:

$vnetName = "vnettest"
$vnetAddressPrefix = "10.2.0.0/16"
az network vnet create -n $vnetName `
    -g $resourceGroup `
    --address-prefix $vnetAddressPrefix

Step 2 - Create a delegated subnet

We need to create a delegated subnet in our VNet that all traffic from our front-end web app will travel through. The key things here are to add a delegation of Microsoft.Web/serverFarms and to enable the service endpoint for Microsoft.Web. A prefix of /27 allows sufficient addresses for this subnet (32 in total, of which Azure reserves 5).

$subnetName = "delegatingsubnet"
az network vnet subnet create `
    -g $resourceGroup `
    --vnet-name $vnetName `
    -n $subnetName `
    --delegations "Microsoft.Web/serverFarms" `
    --address-prefixes "10.2.1.0/27" `
    --service-endpoints "Microsoft.Web"

Step 3 - Create the app service plan and web apps

Now we'll create an App Service plan (which needs to be a "standard" plan or above) and two web apps - one for the front-end and one for the back-end.

$appServicePlanName = "TestAppServicePlan"
az appservice plan create -n $appServicePlanName `
        -g $resourceGroup --sku S1 

$frontendAppName = "frontend-01a"
az webapp create -n $frontendAppName -g $resourceGroup -p $appServicePlanName
$backendAppName = "backend-01a"
az webapp create -n $backendAppName -g $resourceGroup -p $appServicePlanName

I also uploaded a couple of simple test apps to the web apps, so that I could test connectivity from the front-end to the back-end.

Step 4 - Lock down the back-end

Now, let's block any incoming traffic to the back-end web app that doesn't come from the VNet.

It's a little bit fiddly to automate with the Azure CLI from PowerShell as getting your JSON correctly escaped can be tricky, but essentially we're just adding a new "IP security restriction" that points at the resource ID of our delegated subnet.

$subscriptionId = az account show --query id -o tsv
$subnetId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnetName/subnets/$subnetName"
$restrictions =  "{ \""ipSecurityRestrictions\"": [ { \""action\"": \""Allow\"", \""vnetSubnetResourceId\"": " +
                    "\""$subnetId\"", \""name\"": \""LockdownToVNet\"", \""priority\"": 100, \""tag\"": \""Default\"" } ] }"

az webapp config set -g $resourceGroup -n $backendAppName --generic-configurations $restrictions

If all works correctly, it will now be impossible to call the back-end web app from the public internet, and for the moment the front-end web app will also be denied access (we haven't connected it to the VNet yet).
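
You can verify the lockdown with a quick request to the back-end (the host name below is just my demo app's name; App Service access restrictions reject blocked callers with a 403):

curl -i https://backend-01a.azurewebsites.net
# expect an HTTP 403 response now that the restriction is in place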

Step 5 - Connect front-end to the VNet

Now we need to connect the front-end web app to the VNet. This is so that any outgoing traffic from the front-end web app will get routed through the delegated subnet and therefore be allowed to access the back-end.

This is unfortunately not a feature that is supported by the Azure CLI, and I found the documentation on how to call the REST API directly very difficult to follow. In the end I used the F12 tools in my browser to see what the Azure Portal does when you join a web app to the VNet. This revealed the endpoint I needed to call (/config/virtualNetwork) and the format of the payload (including the swiftSupported flag).

I then created the following PowerShell function to connect a web app to a specific subnet in a VNet:

function Join-Vnet ($resourceGroup, $webAppName, $vnetName, $subnetName)
{
    $subscriptionId = az account show --query id -o tsv
    $location = az group show -n $resourceGroup --query location -o tsv
    $subnetId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnetName/subnets/$subnetName"

    $resourceId = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Web/sites/$webAppName/config/virtualNetwork"
    $url = "https://management.azure.com$resourceId" + "?api-version=2018-02-01"

    $payload = @{ id=$resourceId; location=$location; properties=@{subnetResourceId=$subnetId; swiftSupported="true"} } | ConvertTo-Json
    $accessToken = az account get-access-token --query accessToken -o tsv
    $response = Invoke-RestMethod -Method Put -Uri $url -Headers @{ Authorization="Bearer $accessToken"; "Content-Type"="application/json" } -Body $payload
}

And we can use it to join the front-end app to the VNet:

Join-Vnet $resourceGroup $frontendAppName $vnetName $subnetName

With that done, our front-end should be able to communicate with the back-end again. I've noticed that the change isn't always immediate - sometimes it can take a minute or two before the front-end is able to get through.

Step 6 - (Optional) Connect back-end to VNet

If you have multiple back-end services, they may also need to communicate with each other. That can be achieved by connecting the back-end web app to the VNet as well. We can reuse the function we just created:

Join-Vnet $resourceGroup $backendAppName $vnetName $subnetName

Summary

In this post we saw how to secure the back-end tier of a multi-tier web app by making use of some new App Service features, and without having to use the expensive App Service Environment. In fact, the costs of setting this up are minimal, and it represents true defence in depth for your back-end services.

Obviously, we've not talked about restricting access to the front-end. Depending on your use case, you might want to lock that down to a VNet too. That can be achieved by creating an Application Gateway and only allowing incoming traffic to the front-end to come from it. I've managed to set that up for an App Service as well, but I'll save it for another post, because it was quite a complex process.

Want to learn more about the Azure CLI? Be sure to check out my Pluralsight course Azure CLI: Getting Started.