Migrating to the New Azure Service Bus SDK
Just over a year ago, a new .NET SDK for Azure Service Bus was released. This replaces the old WindowsAzure.ServiceBus NuGet package with the Microsoft.Azure.ServiceBus NuGet package.
You're not forced to change over to the new SDK if you don't want to. The old one still works just fine, and even continues to get updates. However, there are some benefits to switching over, so in this post I'll highlight the key differences and some potential gotchas to take into account if you do want to make the switch.
Benefits of the new SDK
First of all, why did we even need a new SDK? Well, the old one supported .NET 4.6 only, while the new one is .NET Standard 2.0 compatible, making it usable cross-platform in .NET Core applications. It's also open source, available at https://github.com/Azure/azure-service-bus-dotnet, meaning you can easily examine the code and submit issues and pull requests.
It has a plugin architecture, supporting custom plugins for things like message compression or attachments. There are a few useful plugins already available. We encrypt all our messages with Azure Key Vault before sending them to Service Bus, so I'm looking forward to using the plugin architecture to simplify that code.
On top of that, the API has generally been cleaned up and improved, and it's very much the future of the Azure Service Bus SDK.
Default transport type
One of the first gotchas I ran into was that there is a new default "transport type". The old SDK by default used what it called "NetMessaging", a proprietary Azure Service Bus protocol, even though the recommended option was the industry standard AMQP.
The new SDK, however, defaults to AMQP over port 5671. This was blocked by my work firewall, so I had to switch to the other option, AMQP over WebSockets, which uses port 443. If you need to configure this option, append ;TransportType=AmqpWebSockets to the end of your connection string.
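Alternatively, if you'd rather not edit the stored connection string, the transport type can be set in code via ServiceBusConnectionStringBuilder. A minimal sketch (assuming a connectionString and queueName are already defined):

```csharp
using Microsoft.Azure.ServiceBus;

// sketch: force AMQP over WebSockets (port 443) without modifying the
// connection string held in configuration
var builder = new ServiceBusConnectionStringBuilder(connectionString)
{
    TransportType = TransportType.AmqpWebSockets
};
builder.EntityPath = queueName;
var client = new QueueClient(builder);
```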
One unfortunate side-effect of this switch from the NetMessaging protocol to AMQP is the performance of batching. I blogged a while back about the dramatic speed improvements available by sending and receiving messages in batches. Whilst sending batches of messages with AMQP seems to perform similarly, when you receive batches with AMQP you may get batches significantly smaller than the batch size you requested, which slows things down considerably. The explanation for this is here, and the issue can be mitigated somewhat by setting the MessageReceiver.PrefetchCount property to a suitably large value.
Here's some simple code you can use to check out the performance of batch sending and receiving with the new SDK. It also shows off the basic operation of the QueueClient and MessageReceiver classes in the new SDK, along with the ManagementClient which allows us to create and delete queues.
string connectionString = // your connection string - remember to add ;TransportType=AmqpWebSockets if port 5671 is blocked
const string queueName = "MarkHeathTestQueue";
// PART 1 - CREATE THE QUEUE
var managementClient = new ManagementClient(connectionString);
if (await managementClient.QueueExistsAsync(queueName))
{
// ensure we start the test with an empty queue
await managementClient.DeleteQueueAsync(queueName);
}
await managementClient.CreateQueueAsync(queueName);
// PART 2 - SEND A BATCH OF MESSAGES
const int messages = 1000;
var stopwatch = new Stopwatch();
var client = new QueueClient(connectionString, queueName);
stopwatch.Start();
await client.SendAsync(Enumerable.Range(0, messages).Select(n =>
{
var body = $"Hello World, this is message {n}";
var message = new Message(Encoding.UTF8.GetBytes(body));
message.UserProperties["From"] = "Mark Heath";
return message;
}).ToList());
Console.WriteLine($"{stopwatch.ElapsedMilliseconds}ms to send {messages} messages");
stopwatch.Reset();
// PART 3 - RECEIVE MESSAGES
stopwatch.Start();
int received = 0;
var receiver = new MessageReceiver(connectionString, queueName);
receiver.PrefetchCount = 1000; // https://github.com/Azure/azure-service-bus-dotnet/issues/441
while (received < messages)
{
// unlike the old SDK which picked up the whole thing in 1 batch, this will typically pick up batches in the size range 50-200
var rx = (await receiver.ReceiveAsync(messages, TimeSpan.FromSeconds(5)))?.ToList();
if (rx?.Count > 0)
{
Console.WriteLine($"Received a batch of {rx.Count}");
// complete a batch of messages using their lock tokens
await receiver.CompleteAsync(rx.Select(m => m.SystemProperties.LockToken));
received += rx.Count;
}
}
Console.WriteLine($"{stopwatch.ElapsedMilliseconds}ms to receive {received} messages");
Management Client
Another change in the new SDK is that instead of the old NamespaceManager, we have ManagementClient. Many of the method names are the same or very similar, so it isn't too hard to port code over.
One gotcha I ran into is that DeleteQueueAsync (and the equivalent topic and subscription methods) now throws MessagingEntityNotFoundException if you try to delete something that doesn't exist.
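If you're porting code that deleted entities unconditionally, one way to preserve delete-if-exists semantics is to catch the new exception (a sketch, assuming a managementClient and queueName as in the earlier sample):

```csharp
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

// sketch: tolerate the new "entity not found" behaviour when deleting
try
{
    await managementClient.DeleteQueueAsync(queueName);
}
catch (MessagingEntityNotFoundException)
{
    // the queue was already gone - the old NamespaceManager was quieter about this
}
```

Checking QueueExistsAsync first (as the sample above does) also works, though it costs an extra round trip and is racy if other processes can delete the queue.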
BrokeredMessage replaced by Message
The old SDK used a class called BrokeredMessage to represent a message, whereas now it's just Message.
It's had a bit of a reorganization, so things like DeliveryCount and LockToken are now found in Message.SystemProperties. Custom message metadata is stored in UserProperties instead of Properties. Also, instead of providing the message body as a Stream, it is now a byte[], which makes more sense.
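To illustrate the mapping from old names to new, here's a small sketch of working with a Message (the SystemProperties values are only populated on messages you receive, not ones you construct):

```csharp
using System.Text;
using Microsoft.Azure.ServiceBus;

// sending side: the body is a byte[], and custom metadata goes in
// UserProperties (which was called Properties on BrokeredMessage)
var message = new Message(Encoding.UTF8.GetBytes("Hello World"));
message.UserProperties["From"] = "Mark Heath";

// receiving side: broker-controlled values now live under SystemProperties,
// e.g. on a received message:
//   received.SystemProperties.DeliveryCount
//   received.SystemProperties.LockToken
var body = Encoding.UTF8.GetString(message.Body);
```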
Another significant change is that BrokeredMessage used to have convenience methods like CompleteAsync, AbandonAsync, RenewLockAsync and DeadLetterAsync. You now need to make use of the ClientEntity to perform these actions (with the exception of RenewLockAsync, to be discussed shortly).
ClientEntity changes
The new SDK retains the concept of a base ClientEntity which has derived classes such as QueueClient, TopicClient, SubscriptionClient etc. It's here that you'll find the CompleteAsync, AbandonAsync, and DeadLetterAsync methods, but one conspicuous by its absence is RenewLockAsync.
This means that if you're using QueueClient.RegisterMessageHandler (previously called QueueClient.OnMessage) or similar to handle messages, you don't have a way of renewing the lock for longer than the MaxAutoRenewDuration specified in MessageHandlerOptions (which used to be called OnMessageOptions.AutoRenewTimeout). I know that is a little bit of an edge case, but we were relying on being able to call BrokeredMessage.RenewLockAsync in a few places to extend the timeout further. With the new SDK, the ability to renew a lock is only available if you are using MessageReceiver, which has a RenewLockAsync method.
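So if you do need manual lock renewal, you have to receive with a MessageReceiver yourself. A rough sketch (assuming a connectionString and queueName, and a long-running DoWorkAsync of your own):

```csharp
using Microsoft.Azure.ServiceBus.Core;

// sketch: manual lock renewal is only possible via MessageReceiver
var receiver = new MessageReceiver(connectionString, queueName);
var message = await receiver.ReceiveAsync();

await DoWorkAsync(message);          // long-running work...
await receiver.RenewLockAsync(message); // ...extend the lock mid-processing
await DoWorkAsync(message);          // ...more work

await receiver.CompleteAsync(message.SystemProperties.LockToken);
await receiver.CloseAsync();
```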
A few other minor changes required a bit of code reorganization. The old Close methods are now CloseAsync, meaning that it is trickier to use the Dispose pattern. There is no longer a ClientEntity.Abort method; presumably you now just call CloseAsync to shut down the message handling pump. And when you create MessageHandlerOptions you are required to provide an exception handler.
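For completeness, here's a minimal message pump sketch showing the now-mandatory exception handler (the handler bodies and option values are just placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

// sketch: MessageHandlerOptions requires an exception handler in its constructor
var queueClient = new QueueClient(connectionString, queueName);
var options = new MessageHandlerOptions(args =>
{
    Console.WriteLine($"Exception in message pump: {args.Exception.Message}");
    return Task.CompletedTask;
})
{
    MaxConcurrentCalls = 1,
    MaxAutoRenewDuration = TimeSpan.FromMinutes(5), // was OnMessageOptions.AutoRenewTimeout
    AutoComplete = false
};
queueClient.RegisterMessageHandler(async (message, cancellationToken) =>
{
    // process the message, then complete it via the client using its lock token
    await queueClient.CompleteAsync(message.SystemProperties.LockToken);
}, options);
```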
Summary
The new Azure Service Bus SDK offers lots of improvements over the old one, and the transition isn't too difficult, but there are a few gotchas to be aware of and I've highlighted some of the ones that I ran into. Hopefully this will be of use to you if you're planning to upgrade.
Comments
Steve Culshaw: Really useful post ... Many thanks for sharing

Sean Feldman: The idea behind both queue and subscription clients was that those provide an abstraction for a message pump and you use it to achieve everything, including lock extension configured using handler options. For scenarios where maximum control is required over how messages are received, completed, and lock extended, clients are no longer the constructs to use and MessageReceiver / MessageSender should be used instead.

Sean Feldman: For those edge case scenarios where you want the out of the box pump, but need one-off lock extension, internally clients have a MessageReceiver. It's not exposed, but if you really need to, you could gain access to extend the message lock. Though I'd really question if you're better off designing a longer lock duration or using a custom pump. Hope that helps.

Mark Heath: Yes, our use case was a bit unconventional, and I did actually think about accessing the internal MessageReceiver, but my preference is to rework our code to not rely on this. I was mainly pointing it out as a change from the old SDK's behaviour. I may yet end up creating our own custom message pump, as a couple of other features I want are the ability to pause and resume the pump in certain scenarios, and to support round-robin reading from queues.

jwisener: Thanks for the great post, really useful. Question: you say "A few other minor changes that required a bit of code re-organization were the fact that old Close methods are now CloseAsync, meaning that it is trickier to use the Dispose pattern." So what does "trickier" mean? Do you have an example of how we properly close in a dispose situation?

Mark Heath: Just that the IDisposable interface in .NET has a Dispose method returning void, so if you had disposable classes that contained service bus resources, you'd now want to change them to have a DisposeAsync method instead.

ashish: Thanks, I'm facing an issue using the new SDK; posted a question here in the MSDN forum, please help - https://social.msdn.microso...

Mohsen Afshin: Thank you for the precious mapping from old names to new names