Uploading Blobs with the V12 Storage SDK
A while back Microsoft switched from the Microsoft.Azure.Storage.Blob SDK (known as the V11 SDK) to Azure.Storage.Blobs (known as V12).
I have been working on updating lots of code from the V11 SDK to V12. There is a helpful migration guide which is worth consulting as there are a lot of changes.
In this post, I want to share some code snippets for uploading blobs as this is something we do a lot. There are several scenarios that I needed to cover:
- Uploading a readable stream into blob storage (e.g. an in-memory stream)
- Uploading a local file on disk into blob storage
- Creating a blob by writing to a stream
- Uploading to a blob using a writable blob SAS
- Uploading to a blob using a writable container SAS
- Creating a blob by separately uploading blocks
I'll also show how we can set blob metadata and tags, as well as the Content-Type header that holds the MIME type.
Setup
First, let me show you the code to get connected to our storage account, and get hold of a BlobContainerClient
for the container we'll use for testing uploads.
var connectionString = "your-connection-string-here";
var blobServiceClient = new BlobServiceClient(connectionString);
var containerName = "mheathtest";
var containerClient = blobServiceClient.GetBlobContainerClient(containerName);
await containerClient.CreateIfNotExistsAsync();
// some example metadata and tags we will apply to each upload
var metadata = new Dictionary<string, string> { { "UploadedBy", "Mark" }};
var tags = new Dictionary<string, string> { { "HomePage", "markheath.net" }};
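As an aside, you don't have to use a connection string; the BlobServiceClient can also be constructed from the account's blob endpoint and an Azure AD credential. Here's a minimal sketch, assuming the Azure.Identity NuGet package and a placeholder account name:
// alternative to a connection string: Azure AD authentication
// (requires the Azure.Identity package; "youraccount" is a placeholder)
var accountUri = new Uri("https://youraccount.blob.core.windows.net");
var aadBlobServiceClient = new BlobServiceClient(accountUri, new DefaultAzureCredential());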
Example 1 - Uploading from a readable stream
In this first example we'll upload a blob using a readable stream. This of course could be a stream to a local file, but it could also be a MemoryStream
as shown here.
Notice I'm using BlobUploadOptions
to set the metadata, tags and content type header for this blob. You can also use this technique to choose the initial AccessTier
if you need to (e.g. hot, cool or archive); there's a sketch of that after this example's code.
var blob1Client = containerClient.GetBlobClient("blob1.txt");
var options = new BlobUploadOptions() {
Metadata = metadata,
Tags = tags,
HttpHeaders = new BlobHttpHeaders() { ContentType = "text/plain" }
};
using (var ms = new MemoryStream(Encoding.UTF8.GetBytes("Hello World")))
{
await blob1Client.UploadAsync(ms, options);
}
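For instance, if you wanted the blob to start life in the cool tier, a minimal sketch would be to set the AccessTier property on the options (AccessTier comes from Azure.Storage.Blobs.Models):
var coolOptions = new BlobUploadOptions()
{
    Metadata = metadata,
    Tags = tags,
    HttpHeaders = new BlobHttpHeaders() { ContentType = "text/plain" },
    // request the cool access tier instead of the account default
    AccessTier = AccessTier.Cool
};
using (var ms = new MemoryStream(Encoding.UTF8.GetBytes("Hello World")))
{
    await blob1Client.UploadAsync(ms, coolOptions);
}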
Example 2 - Uploading a local file
Although the example I showed above can easily be used to upload a local file (e.g. by passing in the stream created by File.OpenRead
), there is an overload where you can simply pass in the path to a local file.
In this example, I'm also setting the MaximumConcurrency
which can speed up uploads by transferring parts of the file in parallel. I recommend experimenting with this to find the optimal setting for your application; I've found it can make a significant difference to upload times.
var blob2Client = containerClient.GetBlobClient("blob2.zip");
var options2 = new BlobUploadOptions()
{
Metadata = metadata,
Tags = tags,
HttpHeaders = new BlobHttpHeaders() { ContentType = "application/zip" },
TransferOptions = new StorageTransferOptions() { MaximumConcurrency = 4 }
};
var localFile = @"C:\Users\mheath\Downloads\Example.zip";
await blob2Client.UploadAsync(localFile, options2);
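StorageTransferOptions can also control the chunk sizes used for a parallel upload. This is just a sketch to show the shape of it; the numbers are arbitrary starting points to experiment with, not recommendations:
var tunedTransferOptions = new StorageTransferOptions()
{
    // number of chunks uploaded in parallel
    MaximumConcurrency = 8,
    // size of the first request, before chunking kicks in
    InitialTransferSize = 8 * 1024 * 1024,
    // maximum size of each subsequent chunk
    MaximumTransferSize = 8 * 1024 * 1024
};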
Example 3 - Upload by writing to a stream
There are some situations in which the contents of a blob aren't available as a readable stream. A good example is creating a zip file, which typically follows a push model: you write each file into the zip stream as you go.
To achieve this we can use the OpenWriteAsync method on the BlockBlobClient (note that the regular BlobClient does not support this), which we get hold of with the GetBlockBlobClient extension method from the Azure.Storage.Blobs.Specialized namespace. Also note that although OpenWriteAsync takes a boolean overwrite parameter, you have to set it to true.
var blob3Client = containerClient.GetBlockBlobClient("blob3.txt");
var options3 = new BlockBlobOpenWriteOptions()
{
Metadata = metadata,
Tags = tags,
HttpHeaders = new BlobHttpHeaders() { ContentType = "text/plain" }
};
using (var writableStream = await blob3Client.OpenWriteAsync(true, options3))
{
var data = Encoding.UTF8.GetBytes("Hello World");
await writableStream.WriteAsync(data, 0, data.Length);
}
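To make the zip scenario from the start of this example concrete, here's a sketch that writes a zip archive directly into blob storage using ZipArchive from System.IO.Compression (the blob name, entry name and content are just placeholders):
var zipBlobClient = containerClient.GetBlockBlobClient("archive.zip");
var zipOptions = new BlockBlobOpenWriteOptions()
{
    HttpHeaders = new BlobHttpHeaders() { ContentType = "application/zip" }
};
using (var blobStream = await zipBlobClient.OpenWriteAsync(true, zipOptions))
using (var zip = new ZipArchive(blobStream, ZipArchiveMode.Create))
{
    // each file is "pushed" into the zip as an entry
    var entry = zip.CreateEntry("hello.txt");
    using (var entryStream = entry.Open())
    {
        var bytes = Encoding.UTF8.GetBytes("Hello World");
        await entryStream.WriteAsync(bytes, 0, bytes.Length);
    }
}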
Example 4 - Uploading using a writable blob SAS
Sometimes we may not have full credentials to access the target storage account, but we do have a writable SAS that lets us write to a specific blob.
Let's first see the code that creates the writable SAS Uri (which does require the connection string to the storage account). Here I'm creating a one-hour token that lets you read and write to a blob as well as set its tags.
var blobSas = containerClient.GetBlobClient("blob4.txt")
.GenerateSasUri(BlobSasPermissions.Read |
BlobSasPermissions.Write |
BlobSasPermissions.Tag,
DateTimeOffset.Now.AddHours(1));
Now we can construct a BlobClient from that SAS Uri and use it just like we saw in the earlier example of uploading from a Stream.
var blob4Client = new BlobClient(blobSas);
var options4 = new BlobUploadOptions()
{
Metadata = metadata,
Tags = tags,
HttpHeaders = new BlobHttpHeaders() { ContentType = "text/plain" }
};
using (var ms = new MemoryStream(Encoding.UTF8.GetBytes("Hello World")))
{
await blob4Client.UploadAsync(ms, options4);
}
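As an aside, if you need more control over the SAS than the simple GenerateSasUri overload gives you (a start time, for example), there's an overload that accepts a BlobSasBuilder. A minimal sketch:
var sasBuilder = new BlobSasBuilder(
    BlobSasPermissions.Read | BlobSasPermissions.Write | BlobSasPermissions.Tag,
    DateTimeOffset.Now.AddHours(1))
{
    BlobContainerName = containerName,
    BlobName = "blob4.txt",
    // backdate slightly to allow for clock skew between machines
    StartsOn = DateTimeOffset.Now.AddMinutes(-5)
};
var blob4Sas = containerClient.GetBlobClient("blob4.txt").GenerateSasUri(sasBuilder);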
Example 5 - Uploading using a writable container SAS
This example is almost identical to the previous one, but is for a situation where we have a SAS Uri that lets us write to an entire container.
Here's how that SAS Uri might be generated:
var containerSas = containerClient.GenerateSasUri(
BlobContainerSasPermissions.Read |
BlobContainerSasPermissions.Write |
BlobContainerSasPermissions.Tag, DateTimeOffset.Now.AddHours(1));
To get a BlobClient for uploading via this container SAS, there's an extra step: construct a new BlobContainerClient from the SAS, then get a blob client from it:
var blob5Client = new BlobContainerClient(containerSas)
.GetBlobClient("blob5.txt");
var options5 = new BlobUploadOptions()
{
Metadata = metadata,
Tags = tags,
HttpHeaders = new BlobHttpHeaders() { ContentType = "text/plain" }
};
using (var ms = new MemoryStream(Encoding.UTF8.GetBytes("Hello World")))
{
await blob5Client.UploadAsync(ms, options5);
}
Example 6 - Building a blob by appending blocks
This final example is the most complex but also very powerful. Suppose the data for your file arrives in chunks that you need to append as they are received. If you can't use the writable stream approach described above (maybe because you can't maintain state between handling those chunks), then you can take an approach where you create a "block blob" and for each block of data you receive you "stage" a block. The overall blob consists of each of the blocks appended together.
There are some gotchas here. First of all, if you created the "first" block using one of the upload techniques I showed above, that blob, despite being a "block blob", will not have any blocks (see this GitHub issue for why). So make sure every part of the file is uploaded as a block with StageBlockAsync.
Second, whilst it's possible to call CommitBlockListAsync every time you get a new block, you will need to provide the metadata and tags you want associated with the blob each time, or they will be lost. So if possible, leave all the blocks as "uncommitted" and only call CommitBlockListAsync once you have all the blocks staged. Essentially this means the blob will only "appear" in blob storage once the whole thing is uploaded.
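Incidentally, if you want to check what state a blob's blocks are in, you can ask for the block list. A quick sketch (blockBlobClient here stands for a BlockBlobClient pointing at the blob you're building):
var blockList = await blockBlobClient.GetBlockListAsync(BlockListTypes.All);
foreach (var block in blockList.Value.UncommittedBlocks)
{
    // staged with StageBlockAsync but not yet part of the committed blob
    Console.WriteLine($"Uncommitted block: {block.Name}");
}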
You have to provide a name for each block, which needs to be base 64 encoded. You can just use GUIDs for this, but you may prefer to logically number the blocks like I've done, so it's obvious what order they should be committed in. That's particularly important if you want to upload the blocks in parallel for performance reasons (there's a sketch of that at the end of this example), since they won't necessarily arrive in order.
Here's my example, where I'm showing staging four blocks and then constructing the final blob out of them.
var blob6Client = containerClient.GetBlockBlobClient("blob6.txt");
var blockIds = new List<string>();
// simulating four blocks
foreach (var n in Enumerable.Range(1, 4))
{
// we'll just use a block id based on the block number, but it could be a GUID
string newId = Convert.ToBase64String(BitConverter.GetBytes(n));
blockIds.Add(newId);
var blockContent = Encoding.UTF8.GetBytes($"Appending block {newId} ({blockIds.Count})\r\n");
using (var ms = new MemoryStream(blockContent))
{
// stage the block
await blob6Client.StageBlockAsync(newId, ms);
}
}
// commit the block list to construct the whole blob
var opts = new CommitBlockListOptions()
{
Metadata = metadata,
Tags = tags,
HttpHeaders = new BlobHttpHeaders() { ContentType = "text/plain" }
};
await blob6Client.CommitBlockListAsync(blockIds, opts);
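And if you did want to stage the blocks in parallel as mentioned above, something along these lines would work. This is only a sketch; the chunks are simulated here, whereas in a real application they'd come from wherever your data originates:
// simulate four chunks, with block ids that preserve the logical order
var chunks = Enumerable.Range(1, 4)
    .Select(n => (Id: Convert.ToBase64String(BitConverter.GetBytes(n)),
                  Content: Encoding.UTF8.GetBytes($"Block {n}\r\n")))
    .ToList();
// stage all the blocks concurrently
await Task.WhenAll(chunks.Select(async chunk =>
{
    using (var ms = new MemoryStream(chunk.Content))
    {
        await blob6Client.StageBlockAsync(chunk.Id, ms);
    }
}));
// commit in logical order, regardless of which upload finished first
await blob6Client.CommitBlockListAsync(chunks.Select(c => c.Id), opts);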
Hopefully that covers the majority of cases where you need to upload blobs to Azure Storage.
Comments
I experienced a lot of invalid header issues when trying to use this version of the library with Azurite local storage.
Phil Ritchie
That's frustrating. I'm mostly talking to real storage accounts, and when I do local dev I still sometimes use the old emulator, so I haven't run into those issues yet. Hopefully it will get resolved soon.
Mark Heath