Introduction
One of the most common integration mistakes I see on enterprise projects is connecting Dynamics 365 directly to external systems via synchronous HTTP calls from plugins. It works well in demos. In production, it fails in ways that are hard to diagnose and harder to recover from: the external system is slow and your plugin times out; the external system is down and your users cannot save records; the external system throttles and your data falls out of sync.
Azure Service Bus solves these problems. Instead of calling the external system directly, your Dynamics 365 plugin posts a message to a queue. The external system reads that queue at its own pace. If the external system is down, messages accumulate safely in the queue and are processed when it recovers. Neither side knows or cares about the availability of the other.
This article covers the full picture: configuring the built-in Dynamics 365 Service Bus integration, writing plugins that post to Service Bus, building .NET consumers, and handling the failure cases that inevitably arise in production.
Architecture Overview
There are two distinct ways to get Dynamics 365 events into Azure Service Bus:
Option 1: Built-In Service Endpoint (No Code)
Dynamics 365 has native Service Bus support via the Plugin Registration Tool. You register a Service Endpoint, attach it to any plugin step, and Dataverse serialises the plugin execution context as a message automatically. No plugin code required.
D365 Event → Plugin Step → Service Endpoint → Azure Service Bus Queue/Topic
                                                          ↓
                                          Consumer (Azure Function, .NET service, etc.)
Option 2: Custom Plugin (Full Control)
A regular C# plugin calls the Azure Service Bus SDK directly. You control exactly what goes in the message, its schema, and any enrichment from the record.
D365 Event → Custom Plugin → ServiceBusClient.SendMessageAsync() → Queue/Topic
                                                                       ↓
                                                                   Consumer
Option 1 is faster to set up; Option 2 gives you control over message shape and is required when you need to enrich the payload with data from related records. This article covers both.
Setting Up Azure Service Bus
Step 1: Create the Namespace and Queue
# Azure CLI
az servicebus namespace create \
--resource-group rg-integrations \
--name sb-anielak-prod \
--sku Standard \
--location uksouth
az servicebus queue create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--name d365-account-events \
--max-delivery-count 5 \
--lock-duration PT5M \
--default-message-time-to-live P14D
Key settings to understand:
- max-delivery-count: After 5 failed delivery attempts, the message moves to the dead-letter queue rather than being retried indefinitely
- lock-duration: A consumer has 5 minutes to process and complete a message before it becomes visible to other consumers again
- default-message-time-to-live: Messages expire after 14 days if not processed, preventing queue bloat from old unprocessable messages
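The same queue can be provisioned from C# with the administration client that ships in the Azure.Messaging.ServiceBus package. A minimal sketch, assuming a connection string with Manage rights (the `SB_MANAGE_CONNECTION` variable is hypothetical):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

class QueueSetup
{
    static async Task Main()
    {
        // Hypothetical environment variable holding a Manage-rights connection string
        var admin = new ServiceBusAdministrationClient(
            Environment.GetEnvironmentVariable("SB_MANAGE_CONNECTION"));

        var options = new CreateQueueOptions("d365-account-events")
        {
            MaxDeliveryCount = 5,                             // then dead-letter
            LockDuration = TimeSpan.FromMinutes(5),           // PT5M
            DefaultMessageTimeToLive = TimeSpan.FromDays(14), // P14D
            DeadLetteringOnMessageExpiration = true           // expired messages go to the DLQ
        };

        if (!(await admin.QueueExistsAsync(options.Name)).Value)
            await admin.CreateQueueAsync(options);
    }
}
```

Enabling dead-lettering on expiration goes beyond the CLI example above: it means messages that hit the 14-day TTL land in the DLQ where you can inspect them, rather than being silently deleted.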
Step 2: Create a Shared Access Policy
# Policy for D365 (send only)
az servicebus namespace authorization-rule create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--name D365Sender \
--rights Send
# Policy for consumer service (listen only)
az servicebus namespace authorization-rule create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--name ConsumerListener \
--rights Listen
Separate send and listen policies follow the principle of least privilege. If the D365 sender credentials are compromised, the attacker cannot read messages out of the queue.
Option 1: Built-In Service Endpoint Integration
Register a Service Endpoint
- Open the Plugin Registration Tool and connect to your environment
- Click Register → Register New Service Endpoint
- Paste the Service Bus connection string (with Send rights)
- Select your queue or topic from the list
- Set Designation to Queue
- Set Message Format to JSON
- Click Save
Register a Step on the Service Endpoint
- Select your new Service Endpoint and click Register New Step
- Configure the step as you would any plugin step:
Message: Update
Primary Entity: account
Stage: PostOperation
Execution Mode: Asynchronous (critical: keeps the D365 UI responsive)
Filtering: name, revenue, statecode (only send when these fields change)
With this in place, every time an Account record is updated and one of those three fields changes, Dynamics 365 will post the full plugin execution context as a JSON message to your queue. No plugin code required.
What the Message Looks Like
{
  "BusinessUnitId": "...",
  "CorrelationId": "...",
  "InitiatingUserId": "...",
  "MessageName": "Update",
  "Mode": 1,
  "OrganizationId": "...",
  "OrganizationName": "YourOrgName",
  "PrimaryEntityId": "a1b2c3d4-...",
  "PrimaryEntityName": "account",
  "Stage": 40,
  "UserId": "...",
  "InputParameters": [
    {
      "key": "Target",
      "value": {
        "__type": "Entity",
        "Attributes": [
          { "key": "name", "value": "Acme Corp Ltd" },
          { "key": "revenue", "value": 5000000 }
        ],
        "EntityState": null,
        "Id": "a1b2c3d4-...",
        "LogicalName": "account"
      }
    }
  ],
  "PostEntityImages": [...]
}
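On the consuming side, the official route is to deserialise this payload into Microsoft.Xrm.Sdk.RemoteExecutionContext with a DataContractJsonSerializer, but lightweight consumers can get by with plain JSON parsing. A sketch using Newtonsoft.Json, assuming the key/value layout shown above (the `ContextParser` class and method name are illustrative):

```csharp
using System.Linq;
using Newtonsoft.Json.Linq;

public static class ContextParser
{
    // Flattens the Target entity's key/value attribute array into a JObject,
    // e.g. { "name": "Acme Corp Ltd", "revenue": 5000000 }
    public static JObject GetTargetAttributes(string messageJson)
    {
        var context = JObject.Parse(messageJson);

        var target = context["InputParameters"]
            .First(p => (string)p["key"] == "Target")["value"];

        var attributes = new JObject();
        foreach (var attr in target["Attributes"])
            attributes[(string)attr["key"]] = attr["value"];

        return attributes;
    }
}
```

This is tolerant of extra fields in the context but will need adjusting if the serialisation format changes between Dataverse versions.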
Option 2: Custom Plugin Integration
When you need control over the message schema, perhaps to enrich it with related record data or to produce a canonical event format shared across multiple systems, write a plugin that sends to Service Bus directly.
Project Setup
// NuGet packages required (use versions compatible with .NET Framework 4.6.2)
// Azure.Messaging.ServiceBus 7.x is the modern AMQP-based client
// Install-Package Azure.Messaging.ServiceBus -Version 7.18.0
// Install-Package Microsoft.CrmSdk.CoreAssemblies
// Install-Package Newtonsoft.Json
//
// Note: a sandboxed plugin assembly cannot load external DLLs on its own.
// Either ILMerge the dependencies into the plugin assembly, or deploy the
// plugin as a plugin package (dependent assemblies) where supported.
Define the Event Schema
// Canonical event shape shared across all consuming systems
public class AccountChangedEvent
{
    public string EventType { get; set; }        // "account.updated"
    public Guid AccountId { get; set; }
    public string AccountName { get; set; }
    public string PrimaryEmail { get; set; }
    public decimal? AnnualRevenue { get; set; }
    public string ChangedBy { get; set; }
    public DateTime ChangedAt { get; set; }
    public string[] ChangedFields { get; set; }
    public string CorrelationId { get; set; }    // For distributed tracing
}
Plugin Implementation
using System;
using System.Text;
using Azure.Messaging.ServiceBus;
using Microsoft.Xrm.Sdk;
using Newtonsoft.Json;

namespace Anielak.D365.Plugins
{
    public class AccountChangedPublisher : IPlugin
    {
        // Injected via secure configuration (encrypted in plugin step registration)
        private readonly string _serviceBusConnectionString;
        private readonly string _queueName;

        public AccountChangedPublisher(string unsecureConfig, string secureConfig)
        {
            if (string.IsNullOrWhiteSpace(secureConfig))
                throw new InvalidPluginExecutionException(
                    "Secure configuration (Service Bus connection string) is missing.");

            // Secure config contains the connection string, unsecure config has queue name
            _serviceBusConnectionString = secureConfig.Trim();
            _queueName = string.IsNullOrWhiteSpace(unsecureConfig)
                ? "d365-account-events"
                : unsecureConfig.Trim();
        }

        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider
                .GetService(typeof(IPluginExecutionContext));
            var tracingService = (ITracingService)serviceProvider
                .GetService(typeof(ITracingService));
            var serviceFactory = (IOrganizationServiceFactory)serviceProvider
                .GetService(typeof(IOrganizationServiceFactory));

            // Only process Account updates in PostOperation
            if (context.MessageName != "Update" ||
                context.PrimaryEntityName != "account" ||
                context.Stage != 40)
                return;

            tracingService.Trace($"AccountChangedPublisher: Processing account {context.PrimaryEntityId}");

            var target = context.InputParameters["Target"] as Entity;
            if (target == null) return;

            // Retrieve the full account for enrichment
            var service = serviceFactory.CreateOrganizationService(context.UserId);
            var account = service.Retrieve("account", context.PrimaryEntityId,
                new Microsoft.Xrm.Sdk.Query.ColumnSet(
                    "name", "emailaddress1", "revenue", "ownerid"));

            // Build the canonical event
            var changedFields = new System.Collections.Generic.List<string>();
            foreach (var attr in target.Attributes.Keys)
                changedFields.Add(attr);

            var accountEvent = new AccountChangedEvent
            {
                EventType = "account.updated",
                AccountId = context.PrimaryEntityId,
                AccountName = account.GetAttributeValue<string>("name"),
                PrimaryEmail = account.GetAttributeValue<string>("emailaddress1"),
                AnnualRevenue = account.GetAttributeValue<Money>("revenue")?.Value,
                ChangedBy = context.InitiatingUserId.ToString(),
                ChangedAt = DateTime.UtcNow,
                ChangedFields = changedFields.ToArray(),
                CorrelationId = context.CorrelationId.ToString()
            };

            // Send to Service Bus
            PublishEvent(accountEvent, tracingService);
            tracingService.Trace("AccountChangedPublisher: Message published successfully");
        }

        private void PublishEvent(AccountChangedEvent accountEvent, ITracingService tracingService)
        {
            var json = JsonConvert.SerializeObject(accountEvent);
            var messageBody = Encoding.UTF8.GetBytes(json);

            // Azure.Messaging.ServiceBus client (AMQP-based, recommended)
            var client = new ServiceBusClient(_serviceBusConnectionString);
            var sender = client.CreateSender(_queueName);

            try
            {
                var message = new ServiceBusMessage(messageBody)
                {
                    ContentType = "application/json",
                    Subject = accountEvent.EventType,
                    MessageId = accountEvent.CorrelationId,
                    CorrelationId = accountEvent.CorrelationId
                };

                // Synchronous send (IPlugin.Execute is synchronous, so block on the task)
                sender.SendMessageAsync(message).GetAwaiter().GetResult();
                tracingService.Trace($"Message sent. MessageId: {message.MessageId}");
            }
            finally
            {
                sender.DisposeAsync().GetAwaiter().GetResult();
                client.DisposeAsync().GetAwaiter().GetResult();
            }
        }
    }
}
Registering the Plugin Step
Message: Update
Primary Entity: account
Stage: PostOperation
Execution Mode: Asynchronous (required for Service Bus calls)
Filtering: name, emailaddress1, revenue, statecode
Secure Config: Endpoint=sb://sb-anielak-prod.servicebus.windows.net/;SharedAccessKeyName=D365Sender;SharedAccessKey=...
Unsecure Config: d365-account-events
The Secure Config field in Plugin Registration Tool is encrypted at rest and is the correct place to store connection strings. Never put connection strings in Unsecure Config or in the plugin code itself.
Building the Consumer
The consumer is a .NET background service (or Azure Function) that continuously reads messages from the queue and processes them.
Azure Function Consumer
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

public class AccountEventConsumer
{
    private readonly IExternalSystemClient _externalClient;

    public AccountEventConsumer(IExternalSystemClient externalClient)
    {
        _externalClient = externalClient;
    }

    [FunctionName("ProcessAccountEvents")]
    public async Task Run(
        [ServiceBusTrigger(
            "d365-account-events",
            Connection = "ServiceBusConnection")] ServiceBusReceivedMessage message,
        ILogger log)
    {
        log.LogInformation(
            "Processing message {MessageId}, CorrelationId: {CorrelationId}",
            message.MessageId,
            message.CorrelationId);

        AccountChangedEvent accountEvent;
        try
        {
            var json = message.Body.ToString();
            accountEvent = JsonConvert.DeserializeObject<AccountChangedEvent>(json);
        }
        catch (JsonException ex)
        {
            // Poison message: it will never deserialise, so retrying cannot help.
            // Throwing abandons it; after max-delivery-count attempts the runtime
            // moves it to the dead-letter queue.
            log.LogError(ex, "Failed to deserialise message {MessageId}.",
                message.MessageId);
            throw;
        }

        try
        {
            await _externalClient.SyncAccountAsync(accountEvent);
            log.LogInformation(
                "Account {AccountId} synced successfully. Event: {EventType}",
                accountEvent.AccountId,
                accountEvent.EventType);
        }
        catch (ExternalSystemTransientException ex)
        {
            // Transient error: abandon the message so it becomes visible again for retry
            log.LogWarning(ex,
                "Transient error syncing account {AccountId}. Message will be retried.",
                accountEvent.AccountId);
            throw; // Triggers retry
        }
        catch (ExternalSystemPermanentException ex)
        {
            // Permanent error: retrying cannot help. Throwing is still the simplest
            // option here; the message is abandoned and dead-letters once
            // max-delivery-count is reached. To dead-letter immediately with a
            // reason, bind ServiceBusMessageActions and settle the message yourself.
            log.LogError(ex,
                "Permanent error syncing account {AccountId}. Message will dead-letter.",
                accountEvent.AccountId);
            throw new ApplicationException($"Permanent failure: {ex.Message}", ex);
        }
    }
}
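If you would rather dead-letter permanent failures immediately instead of burning through the remaining delivery attempts, newer versions of the Service Bus trigger extension (Microsoft.Azure.WebJobs.Extensions.ServiceBus 5.x) let you bind ServiceBusMessageActions and settle the message yourself. A sketch, assuming automatic completion is disabled for the function (the function name and reason string are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.ServiceBus;
using Microsoft.Extensions.Logging;

public class AccountEventConsumerWithSettlement
{
    [FunctionName("ProcessAccountEventsExplicit")]
    public async Task Run(
        [ServiceBusTrigger("d365-account-events", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage message,
        ServiceBusMessageActions messageActions,
        ILogger log)
    {
        try
        {
            // ... process the message ...
            await messageActions.CompleteMessageAsync(message);
        }
        catch (Exception ex)
        {
            // Go straight to the DLQ with a reason operators can act on,
            // instead of retrying a message that can never succeed
            log.LogError(ex, "Dead-lettering message {MessageId}", message.MessageId);
            await messageActions.DeadLetterMessageAsync(
                message,
                deadLetterReason: "PermanentFailure",
                deadLetterErrorDescription: ex.Message);
        }
    }
}
```

With explicit settlement, check your extension version's behaviour around `autoCompleteMessages` in host.json so the runtime does not attempt to settle a message the function has already settled.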
Handling the Dead-Letter Queue
Messages that fail delivery after max-delivery-count attempts land in the dead-letter queue (DLQ). This is not a failure state to ignore: it is a queue of records that your external system has not received.
Monitor the DLQ
# Alert if any messages land in the DLQ
az monitor metrics alert create \
--name "ServiceBus-DLQ-Alert" \
--resource-group rg-integrations \
--scopes /subscriptions/{sub}/resourceGroups/rg-integrations/providers/Microsoft.ServiceBus/namespaces/sb-anielak-prod \
--condition "avg DeadletteredMessages > 0" \
--window-size 5m \
--evaluation-frequency 1m \
--action /subscriptions/{sub}/resourceGroups/rg-integrations/providers/Microsoft.Insights/actionGroups/AlertTeam
Process DLQ Messages
// Periodically read and reprocess dead-lettered messages
public async Task ReprocessDeadLetterAsync(string connectionString, string queueName)
{
    await using var client = new ServiceBusClient(connectionString);
    var receiver = client.CreateReceiver(
        queueName,
        new ServiceBusReceiverOptions
        {
            SubQueue = SubQueue.DeadLetter,
            ReceiveMode = ServiceBusReceiveMode.PeekLock
        });

    var messages = await receiver.ReceiveMessagesAsync(maxMessages: 10);

    foreach (var message in messages)
    {
        _logger.LogInformation(
            "DLQ message {MessageId}: DeadLetterReason={Reason}",
            message.MessageId,
            message.DeadLetterReason);

        try
        {
            // Attempt reprocessing
            await ProcessMessageAsync(message);
            await receiver.CompleteMessageAsync(message);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Reprocessing failed for {MessageId}", message.MessageId);
            await receiver.AbandonMessageAsync(message);
        }
    }
}
Idempotency: The Non-Negotiable Requirement
Service Bus guarantees at-least-once delivery. In failure scenarios (consumer crashes mid-processing, network drop during completion), the same message may be delivered more than once. Your consumer must be idempotent.
public async Task SyncAccountAsync(AccountChangedEvent accountEvent)
{
    // Use the CorrelationId as an idempotency key
    var idempotencyKey = $"account-sync-{accountEvent.CorrelationId}";

    if (await _idempotencyStore.HasBeenProcessedAsync(idempotencyKey))
    {
        _logger.LogInformation(
            "Message {Key} already processed. Skipping duplicate.",
            idempotencyKey);
        return;
    }

    // Perform the actual sync
    await _externalSystem.UpsertAccountAsync(new ExternalAccount
    {
        Id = accountEvent.AccountId.ToString(),
        Name = accountEvent.AccountName,
        Email = accountEvent.PrimaryEmail,
        Revenue = accountEvent.AnnualRevenue
    });

    // Record that we processed this message
    await _idempotencyStore.MarkAsProcessedAsync(idempotencyKey, TimeSpan.FromDays(7));
}
The idempotency store can be Redis, Azure Table Storage, or even a simple database table. The key insight is: if you have already processed a message with this correlation ID, return success without doing the work again.
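The store itself can be small. A minimal in-memory sketch of the interface used above, suitable for local testing (the interface and class names are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public interface IIdempotencyStore
{
    Task<bool> HasBeenProcessedAsync(string key);
    Task MarkAsProcessedAsync(string key, TimeSpan ttl);
}

// In-memory implementation for tests only. In production, back this with
// Redis (SET key NX EX ttl) or an Azure Table where the key is the RowKey,
// so the check survives restarts and works across consumer instances.
public class InMemoryIdempotencyStore : IIdempotencyStore
{
    private readonly ConcurrentDictionary<string, DateTime> _expiries =
        new ConcurrentDictionary<string, DateTime>();

    public Task<bool> HasBeenProcessedAsync(string key) =>
        Task.FromResult(_expiries.TryGetValue(key, out var expiry)
                        && expiry > DateTime.UtcNow);

    public Task MarkAsProcessedAsync(string key, TimeSpan ttl)
    {
        _expiries[key] = DateTime.UtcNow.Add(ttl);
        return Task.CompletedTask;
    }
}
```

Note that a separate check-then-mark is not atomic; a production store should use an atomic conditional write (Redis SET NX, Table insert-if-not-exists) so two concurrent deliveries cannot both pass the check.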
Observability
Key Metrics to Monitor
- Active Message Count: Queue depth. Rising numbers indicate consumer lag
- Dead-Lettered Message Count: Any value above 0 needs investigation
- Message Processing Duration: Track how long your consumer takes per message
- Throttled Requests: Indicates you need to scale up the namespace
- Incoming/Outgoing Messages Rate: Baseline for anomaly detection
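Queue depth and DLQ depth are also available programmatically, which is handy for a health endpoint alongside the portal metrics. A sketch using the administration client (the class and method names are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

public static class QueueHealth
{
    public static async Task ReportAsync(string connectionString)
    {
        var admin = new ServiceBusAdministrationClient(connectionString);

        QueueRuntimeProperties props =
            await admin.GetQueueRuntimePropertiesAsync("d365-account-events");

        // ActiveMessageCount rising over time means consumer lag;
        // DeadLetterMessageCount above zero means messages need attention
        Console.WriteLine(
            $"Active: {props.ActiveMessageCount}, DLQ: {props.DeadLetterMessageCount}");
    }
}
```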
Structured Logging in the Consumer
// Use correlation ID from the message for distributed tracing
using (_logger.BeginScope(new Dictionary<string, object>
{
    ["CorrelationId"] = message.CorrelationId,
    ["MessageId"] = message.MessageId,
    ["AccountId"] = accountEvent.AccountId
}))
{
    _logger.LogInformation("Processing account event");
    await SyncAccountAsync(accountEvent);
    _logger.LogInformation("Account event processed successfully");
}
When to Use Topics Instead of Queues
Queues are point-to-point: one sender, one consumer group. If multiple independent systems need to react to the same D365 events, use Service Bus Topics with Subscriptions:
# Create a topic instead of a queue
az servicebus topic create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--name d365-account-events
# Each consuming system gets its own subscription
az servicebus topic subscription create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--topic-name d365-account-events \
--name erp-system
az servicebus topic subscription create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--topic-name d365-account-events \
--name marketing-platform
az servicebus topic subscription create \
--resource-group rg-integrations \
--namespace-name sb-anielak-prod \
--topic-name d365-account-events \
--name data-warehouse
Now every Account event is delivered to all three subscriptions independently. Each system consumes at its own pace. Failures in one consumer do not affect the others.
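Subscriptions can also filter server-side. Because the custom plugin in Option 2 sets the message Subject to the event type, a consumer that only cares about updates can receive just those; a sketch with the administration client (subscription and rule names are illustrative):

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

public static class SubscriptionSetup
{
    public static async Task CreateFilteredSubscriptionAsync(string connectionString)
    {
        var admin = new ServiceBusAdministrationClient(connectionString);

        // Only messages whose Subject is "account.updated" reach this subscription
        await admin.CreateSubscriptionAsync(
            new CreateSubscriptionOptions("d365-account-events", "marketing-platform"),
            new CreateRuleOptions
            {
                Name = "account-updated-only",
                Filter = new CorrelationRuleFilter { Subject = "account.updated" }
            });
    }
}
```

The built-in Service Endpoint does not set Subject this way, so a filter like this applies to the custom-plugin messages from Option 2.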
Best Practices Summary
- Always use Asynchronous execution mode. Synchronous Service Bus calls in plugins block the user's save operation and time out under load.
- Store connection strings in Secure Config. Never in code, configuration files, or Unsecure Config.
- Design consumers to be idempotent. At-least-once delivery means duplicates will happen.
- Set max delivery count deliberately. 5 is a reasonable default; lower for operations that cannot be retried, higher for transient failure-prone targets.
- Monitor the dead-letter queue. It is your safety net and your signal that something needs attention.
- Use Topics when multiple consumers exist. Do not create multiple queues receiving identical data—that is what subscriptions are for.
- Include a correlation ID in every message. It is the thread that lets you trace a D365 event across your entire system landscape.
- Use the Standard tier minimum. The Basic tier supports only queues: no topics, subscriptions, or sessions, and you will need those features in any real integration.
Conclusion
Azure Service Bus is the right backbone for enterprise-grade Dynamics 365 integrations. Whether you use the built-in Service Endpoint for quick wins or a custom plugin for full control over message shape, the result is an architecture where Dynamics 365 and your external systems are genuinely independent of each other.
The investment in setting up proper message queuing pays off the first time an external system goes down for maintenance and your D365 users do not notice. Messages queue up, the system recovers, and everything catches up automatically—without any manual intervention or data reconciliation work.
That reliability, at scale, is what separates a production integration from a prototype.