Cool iOS Azure App

Just something small this time: I’ve downloaded and started using the Azure iOS app by Microsoft.

I’ve started using it to monitor Logic App runs, as well as other Azure resources.

At the moment, it is tied to only one login (not so good when you work with multiple clients on their own AD accounts), but I’ll submit that as feedback for a future version.

Search for Azure in the App Store, download it and give it a try for yourself.

More output detail needed in the retries history of Logic App runs

Some feedback for the Azure Logic Apps product team…

I have defined a custom retry policy in my Logic App to retry four times at twenty-second intervals, as sketched below.
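In code view, a fixed-interval retry policy of that sort sits on the action’s inputs. A minimal sketch follows; the retryPolicy shape comes from the Logic Apps workflow definition language, while the surrounding HTTP action, its name and its URI are hypothetical stand-ins (the policy looks the same on a connector action’s inputs).

(JSON snippet)
"CreateRecord": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://example.org/endpoint",
    "retryPolicy": {
      "type": "fixed",
      "count": 4,
      "interval": "PT20S"
    }
  }
}

The retries history for an execution of a connector is then shown in the “raw output” blade, something like this…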

"retryHistory": [
  {
    "startTime": "2017-05-10T18:36:30.5791892Z",
    "endTime": "2017-05-10T18:36:33.1109529Z",
    "code": "BadGateway",
    "clientRequestId": "c81917d5-964c-4230-bc39-516df18dc055",
    "serviceRequestId": "c81917d5-964c-4230-bc39-516df18dc055"
  },
  {
    "startTime": "2017-05-10T18:36:53.6442386Z",
    "endTime": "2017-05-10T18:36:56.3362655Z",
    "code": "BadGateway",
    "clientRequestId": "ddf44a88-d0df-4ec7-a177-40138ce8ea4d",
    "serviceRequestId": "ddf44a88-d0df-4ec7-a177-40138ce8ea4d"
  },
  {
    "startTime": "2017-05-10T18:37:16.5762731Z",
    "endTime": "2017-05-10T18:37:19.248291Z",
    "code": "BadGateway",
    "clientRequestId": "e19871e7-746a-41bd-a8f0-0e176138b39f",
    "serviceRequestId": "e19871e7-746a-41bd-a8f0-0e176138b39f"
  },
  {
    "startTime": "2017-05-10T18:37:39.6648436Z",
    "endTime": "2017-05-10T18:37:41.9461642Z",
    "code": "BadGateway",
    "clientRequestId": "58ae54b4-0d34-4296-afca-0c6e2896633c",
    "serviceRequestId": "58ae54b4-0d34-4296-afca-0c6e2896633c"
  }
]

In my particular case, the first call actually managed to create the record in the downstream system. However, an error was returned to the Logic App.

It would be nice if the retries history contained the error information for each retry, as is already provided in the body of the output. For example, the retries history could look like this…

"retryHistory": [
  {
    "startTime": "2017-05-10T18:36:30.5791892Z",
    "endTime": "2017-05-10T18:36:33.1109529Z",
    "code": "BadGateway",
    "clientRequestId": "c81917d5-964c-4230-bc39-516df18dc055",
    "serviceRequestId": "c81917d5-964c-4230-bc39-516df18dc055",
    "status": 502,
    "message": "{\r\n \"code\": \"\",\r\n \"message\": \"This is the error message\",\r\n \"innererror\": {\r\n \"message\": \"This is the inner error message\",\r\n \"type\": \"System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]\",\r\n \"stacktrace\": \" at Microsoft.Crm.Extensibility.OrganizationSdkServiceInternal.Create(Entity entity, CorrelationToken correlationToken, CallerOriginToken callerOriginToken, WebServiceType serviceType, Boolean checkAdminMode, Dictionary`2 optionalParameters)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataExecutionContext.Create(Entity entity)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataServiceDataProvider.CreateEdmEntity(CrmODataExecutionContext context, String edmEntityName, EdmEntityObject entityObject, Boolean isUpsert)\\r\\n at Microsoft.Crm.Extensibility.OData.EntityController.PostEntitySet(String entitySetName, EdmEntityObject entityObject)\\r\\n at lambda_method(Closure , Object , Object[] )\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass10.<GetExecutor>b__9(Object instance, Object[] methodParameters)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.Execute(Object instance, Object[] arguments)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__1.MoveNext()\"\r\n }\r\n}",
    "source": "organisation123.api.crm6.dynamics.com",
    "errors": []
  },
  {
    "startTime": "2017-05-10T18:36:53.6442386Z",
    "endTime": "2017-05-10T18:36:56.3362655Z",
    "code": "BadGateway",
    "clientRequestId": "ddf44a88-d0df-4ec7-a177-40138ce8ea4d",
    "serviceRequestId": "ddf44a88-d0df-4ec7-a177-40138ce8ea4d",
    "status": 502,
    "message": "{\r\n \"code\": \"\",\r\n \"message\": \"This is the error message\",\r\n \"innererror\": {\r\n \"message\": \"This is the inner error message\",\r\n \"type\": \"System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]\",\r\n \"stacktrace\": \" at Microsoft.Crm.Extensibility.OrganizationSdkServiceInternal.Create(Entity entity, CorrelationToken correlationToken, CallerOriginToken callerOriginToken, WebServiceType serviceType, Boolean checkAdminMode, Dictionary`2 optionalParameters)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataExecutionContext.Create(Entity entity)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataServiceDataProvider.CreateEdmEntity(CrmODataExecutionContext context, String edmEntityName, EdmEntityObject entityObject, Boolean isUpsert)\\r\\n at Microsoft.Crm.Extensibility.OData.EntityController.PostEntitySet(String entitySetName, EdmEntityObject entityObject)\\r\\n at lambda_method(Closure , Object , Object[] )\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass10.<GetExecutor>b__9(Object instance, Object[] methodParameters)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.Execute(Object instance, Object[] arguments)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__1.MoveNext()\"\r\n }\r\n}",
    "source": "organisation123.api.crm6.dynamics.com",
    "errors": []
  },
  {
    "startTime": "2017-05-10T18:37:16.5762731Z",
    "endTime": "2017-05-10T18:37:19.248291Z",
    "code": "BadGateway",
    "clientRequestId": "e19871e7-746a-41bd-a8f0-0e176138b39f",
    "serviceRequestId": "e19871e7-746a-41bd-a8f0-0e176138b39f",
    "status": 502,
    "message": "{\r\n \"code\": \"\",\r\n \"message\": \"This is the error message\",\r\n \"innererror\": {\r\n \"message\": \"This is the inner error message\",\r\n \"type\": \"System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]\",\r\n \"stacktrace\": \" at Microsoft.Crm.Extensibility.OrganizationSdkServiceInternal.Create(Entity entity, CorrelationToken correlationToken, CallerOriginToken callerOriginToken, WebServiceType serviceType, Boolean checkAdminMode, Dictionary`2 optionalParameters)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataExecutionContext.Create(Entity entity)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataServiceDataProvider.CreateEdmEntity(CrmODataExecutionContext context, String edmEntityName, EdmEntityObject entityObject, Boolean isUpsert)\\r\\n at Microsoft.Crm.Extensibility.OData.EntityController.PostEntitySet(String entitySetName, EdmEntityObject entityObject)\\r\\n at lambda_method(Closure , Object , Object[] )\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass10.<GetExecutor>b__9(Object instance, Object[] methodParameters)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.Execute(Object instance, Object[] arguments)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__1.MoveNext()\"\r\n }\r\n}",
    "source": "organisation123.api.crm6.dynamics.com",
    "errors": []
  },
  {
    "startTime": "2017-05-10T18:37:39.6648436Z",
    "endTime": "2017-05-10T18:37:41.9461642Z",
    "code": "BadGateway",
    "clientRequestId": "58ae54b4-0d34-4296-afca-0c6e2896633c",
    "serviceRequestId": "58ae54b4-0d34-4296-afca-0c6e2896633c",
    "status": 502,
    "message": "{\r\n \"code\": \"\",\r\n \"message\": \"This is the error message\",\r\n \"innererror\": {\r\n \"message\": \"This is the inner error message\",\r\n \"type\": \"System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]\",\r\n \"stacktrace\": \" at Microsoft.Crm.Extensibility.OrganizationSdkServiceInternal.Create(Entity entity, CorrelationToken correlationToken, CallerOriginToken callerOriginToken, WebServiceType serviceType, Boolean checkAdminMode, Dictionary`2 optionalParameters)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataExecutionContext.Create(Entity entity)\\r\\n at Microsoft.Crm.Extensibility.OData.CrmODataServiceDataProvider.CreateEdmEntity(CrmODataExecutionContext context, String edmEntityName, EdmEntityObject entityObject, Boolean isUpsert)\\r\\n at Microsoft.Crm.Extensibility.OData.EntityController.PostEntitySet(String entitySetName, EdmEntityObject entityObject)\\r\\n at lambda_method(Closure , Object , Object[] )\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass10.<GetExecutor>b__9(Object instance, Object[] methodParameters)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.Execute(Object instance, Object[] arguments)\\r\\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\\r\\n--- End of stack trace from previous location where exception was thrown ---\\r\\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\\r\\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\\r\\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\\r\\n at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__1.MoveNext()\"\r\n }\r\n}",
    "source": "organisation123.api.crm6.dynamics.com",
    "errors": []
  }
]

 

Until next tip… be good!

How to set LockDuration on an Azure ServiceBus queue with PowerShell

Hi again,

Been doing some heavy integration work for a client, and we are using an Azure ServiceBus queue to hold our messages for a Logic App to process.

The workflow in the Logic App is complex and needs to query quite a few external services. The number of calls, plus a for-each loop, means that the execution duration of the Logic App has, on many occasions, exceeded the default one-minute lock duration of a ServiceBus queue.

I know that you can take a message off a ServiceBus queue as soon as you receive it, but for reasons which I’ll leave for maybe another blog post, we decided only to peek at the message and leave it on the queue, so that it could be re-processed if the Logic App failed for any reason.

So, being the good DevOps practitioners we all are, the deployment was scripted. However, I wanted to script the creation of the Azure resources individually, rather than using the Resource Group deployment model that the Logic App template uses.

With the Azure Resource Manager PowerShell cmdlets, you can easily create a ServiceBus namespace and a queue; for example, something like the following.
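A minimal sketch, assuming the AzureRM.ServiceBus module is loaded and you are already signed in; the names and location are hypothetical, and the parameter style matches the snippets below.

(PowerShell snippet)
# Hypothetical names, reused by the snippets that follow
$resourceGroupName = 'my-integration-rg'
$servicebusNamespace = 'mycompany-sbns'

# Create the namespace (the resource group is assumed to exist already)
New-AzureRmServiceBusNamespace -ResourceGroup $resourceGroupName -NamespaceName $servicebusNamespace -Location 'Australia Southeast'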

However, it appears that there is a bug in the Set-AzureRmServiceBusQueue cmdlet, in that neither the LockDuration nor the MaxDeliveryCount properties of a ServiceBus queue can be set to anything other than their default values.

(PowerShell snippet)
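# Assumes $resourceGroupName and $servicebusNamespace are defined, as in the sketch above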
# Check if queue already exists
$queueName = 'myQueueName'
$currentQ = Get-AzureRmServiceBusQueue -ResourceGroup $resourceGroupName -NamespaceName $servicebusNamespace -QueueName $queueName;
if($currentQ)
{
 Write-Host "The queue $queueName already exists";
}
else
{
 Write-Host "The $queueName queue does not exist.";
 Write-Host "Creating the $queueName queue";
 New-AzureRmServiceBusQueue -ResourceGroup $resourceGroupName -NamespaceName $servicebusNamespace -QueueName $queueName -EnablePartitioning $False;
 $currentQ = Get-AzureRmServiceBusQueue -ResourceGroup $resourceGroupName -NamespaceName $servicebusNamespace -QueueName $queueName;
 Write-Host "The $queueName queue in Resource Group $resourceGroupName has been successfully created.";
}
# Set queue properties
$currentQ.DeadLetteringOnMessageExpiration = $True;
$currentQ.MaxDeliveryCount = 10;
$currentQ.MaxSizeInMegabytes = 1024;
$currentQ.LockDuration = 300;

Set-AzureRmServiceBusQueue -ResourceGroup $resourceGroupName -NamespaceName $servicebusNamespace -QueueName $queueName -QueueObj $currentQ;

 

For LockDuration, I’ve even tried the different formats that I know of:

  • “00:05:00”
  • “PT05S”
  • 300, either as integer or string, for 300 seconds

None of these formats appeared to make any difference to the LockDuration property!
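For the record, here is how those formats map to actual TimeSpan values in PowerShell. This is a quick sanity check of my own, not part of the deployment script; note that in ISO 8601, five minutes is PT5M, whereas PT05S parses as five seconds.

(PowerShell snippet)
# Standard .NET TimeSpan format: five minutes
[System.TimeSpan]::Parse("00:05:00")

# ISO 8601 duration: PT5M is five minutes (PT05S would be five seconds)
[System.Xml.XmlConvert]::ToTimeSpan("PT5M")

# Plain seconds, converted explicitly
[System.TimeSpan]::FromSeconds(300)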

I carefully checked the PowerShell output from the Set-AzureRmServiceBusQueue call and noticed that the LockDuration property was blank! Oh no! Something is not right here!

[Screenshot: AzureServiceBusQueueLockDuration_NotSet]

Solution:

To change the LockDuration of a ServiceBus queue, you need to load the Microsoft.ServiceBus assembly in your PowerShell script and use its NamespaceManager class. I placed the assembly in the same folder as the PowerShell script.

(PowerShell snippet)
Add-Type -Path ".\Microsoft.ServiceBus.dll"
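# Assumes $resourceGroupName and $serviceBusNamespace are defined, as in the earlier snippets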

# Need to get SAS token for "RootManageSharedAccessKey" on Service Bus itself
$keys = Get-AzureRmServiceBusNamespaceKey -ResourceGroup $resourceGroupName -NamespaceName $serviceBusNamespace -AuthorizationRuleName "RootManageSharedAccessKey"
$serviceBusConnectionString = $keys.PrimaryConnectionString

# LockDuration is a TimeSpan type. Set it to 5 minutes.
$lockDuration = [System.TimeSpan]::FromSeconds(300)

$queueName = 'myQueueName'

# Check if the queue already exists
$NamespaceManager = [Microsoft.ServiceBus.NamespaceManager]::CreateFromConnectionString($serviceBusConnectionString);
if ($NamespaceManager.QueueExists($queueName))
{
 Write-Host "The queue '$queueName' already exists"
 Write-Host "Ensuring '$queueName' properties"
 $currentQ = $NamespaceManager.GetQueue($queueName);
 $currentQ.LockDuration = $lockDuration
 $currentQ.EnablePartitioning = $False
 $currentQ.EnableDeadLetteringOnMessageExpiration = $True
 $currentQ.MaxDeliveryCount = 10
 $currentQ.MaxSizeInMegabytes = 1024 
 $currentQ.EnableExpress = $False
 $currentQ.EnableBatchedOperations = $True 
 $currentQ.RequiresSession = $False 
 $currentQ.RequiresDuplicateDetection = $False
 $NamespaceManager.UpdateQueue($currentQ);
}
else
{
 Write-Host "The '$queueName' queue does not exist"
 Write-Host "Creating the '$queueName' queue"
 $currentQ = New-Object -TypeName Microsoft.ServiceBus.Messaging.QueueDescription -ArgumentList $queueName
 $currentQ.LockDuration = $lockDuration
 $currentQ.EnablePartitioning = $False
 $currentQ.EnableDeadLetteringOnMessageExpiration = $True
 $currentQ.MaxDeliveryCount = 10
 $currentQ.MaxSizeInMegabytes = 1024 
 $currentQ.EnableExpress = $False
 $currentQ.EnableBatchedOperations = $True 
 $currentQ.RequiresSession = $False 
 $currentQ.RequiresDuplicateDetection = $False

 $NamespaceManager.CreateQueue($currentQ);
 Write-Host "The $queueName queue in Resource Group $resourceGroupName has been successfully created.";
}

 

When you check the LockDuration property in the output, you can see that it is now correctly set.
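You can also confirm the change programmatically, reusing the NamespaceManager from the script above; a one-line sketch:

(PowerShell snippet)
# Re-read the queue description and confirm the new lock duration took effect
$NamespaceManager.GetQueue($queueName).LockDuration   # expect 00:05:00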

[Screenshot: AzureServiceBusQueueLockDuration_CorrectlySet]

A bug in the Set-AzureRmServiceBusQueue cmdlet, perhaps…

Until next tip… be good!

Logic Apps and the Succeeded Terminate Action

More and more of my integration efforts are centring around the use of Logic Apps on the Azure platform.

Logic Apps offers a great number of connectors to perform calls to external services such as Dynamics CRM, Office 365 and Slack, and the list goes on. A full list can be found at the following link… Azure Logic Apps Connectors

My client had a particular requirement which basically stated that if a value existed in a certain field, then the workflow should not proceed any further. The consequence of this was that the Logic App had to terminate “successfully” (i.e., “Succeeded”, not “Failed” nor “Cancelled”).

The Logic Apps designer allows you to add the Terminate control action simply by searching for “terminate” when adding your action.

At the time of authoring this blog post (edit: I started writing this post on 3rd April but had not published it; apologies if this led to confusion that I wrote it in May), you could not select “Succeeded” in the designer. You had to switch to Code View and set the properties correctly, by changing “Failed” to “Succeeded” AND by removing the result code too, as sketched below.
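A before-and-after sketch of the relevant Code View JSON. The shape follows the workflow definition language’s Terminate action; the error code and message values shown are hypothetical.

(JSON snippet: roughly what the designer generated)
"Terminate": {
  "type": "Terminate",
  "inputs": {
    "runStatus": "Failed",
    "runError": {
      "code": "Terminated",
      "message": "Stopped because the field already had a value"
    }
  }
}

(JSON snippet: what it needed to be, with “Succeeded” and the runError result code removed)
"Terminate": {
  "type": "Terminate",
  "inputs": {
    "runStatus": "Succeeded"
  }
}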

Thanks to the constant updates by the Logic Apps team, this issue has been fixed and is now just a memory…

Until next tip… be good!
 

BizTalk Server Performance Tips: #5 – Host Optimisation

Hi again! My last post on Host Separation (https://johnbilliris.wordpress.com/2014/04/29/biztalk-server-performance-tips-4/) was concerned with the practice of separating your hosts by their logical role within the application (send, receive etc.) as well as by application. “What next?” I hear you ask… Here’s the opportunity to improve your application’s performance with the following tweaks that you can apply to your hosts.

1. Optimise Host Polling Intervals

You can optimise the polling interval of your hosts for messaging or for orchestration based scenarios. If, for example, you need “important” orchestrations to be executed “faster” (i.e. less time spent in the MsgBoxDB), you can decrease the polling interval further (the minimum being 50ms). The following table gives you an indication of values you can set to optimise the intervals; a scripted alternative follows the table.

Host                             Messaging Interval (ms)   Orchestration Interval (ms)
“Important” Orchestration Host   250000                    50
“Normal” Orchestration Host      250000                    500
Send Host                        50                        250000
Tracking Host                    500                       250000
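If you prefer scripting these values to clicking through the console, the polling intervals are also exposed through BizTalk’s WMI provider, via the MSBTS_HostSetting class that backs these console fields. A hedged sketch, with a hypothetical host name:

(PowerShell snippet)
# Sketch: tune polling intervals through the MSBTS_HostSetting WMI class
$hostSetting = Get-WmiObject -Namespace 'root\MicrosoftBizTalkServer' -Class MSBTS_HostSetting |
    Where-Object { $_.Name -eq 'ImportantOrchestrationHost' }   # hypothetical host name
$hostSetting.MessagingMaxReceiveInterval = 250000   # messaging polling interval (ms)
$hostSetting.XlangMaxReceiveInterval = 50           # orchestration polling interval (ms)
$hostSetting.Put()                                  # commit the change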

These settings are found on the Hosts | General tab in the BizTalk Administration console.

2. Disable Host Tracking

Host tracking should be disabled as a general rule of thumb; the exception is when you need to track, and that should be done in the dedicated tracking host. This is done by de-selecting the checkbox on the Hosts | General tab in the BizTalk Administration console.

3. Increase Host Instance Threading Values

For non-tracking host instances, the default .NET CLR threading values should be increased to improve throughput. These values are found on the Host Instance | .NET CLR tab within the BizTalk Administration console.

Threading Setting        Suggested Value
Maximum Worker Threads   100
Minimum Worker Threads   25

4. Increase Host Process Virtual Memory Throttling Boundary

By default, hosts are set to throttle, in order to conserve system resources, when their virtual memory allocation exceeds 25% of the available memory of the BizTalk application server. This system default is too low and may cause intermittent processing latency due to BizTalk being in a throttling state. For each of your hosts (except the Tracking host), increase the value to 75%. The process virtual memory setting is found on the Host | Resource-Based Throttling tab within the BizTalk Administration console.

5. Increase Host Message Queue Size and In-Process Message Count

For each of your hosts (except the Tracking host), increase the values as shown in the table below.

Setting                      Suggested Value
Internal Message Queue Size  10000
In Process Messages          10000

These settings can be found on the Hosts | Resource-Based Throttling tab in the BizTalk Administration console.

I hope these tips help bring out that little (or large) extra bit of performance from your BizTalk application.

Until next tip… be good!

BizTalk Server Performance Tips: #4 – Host Separation

This is more of a recommended practice from me than a performance tip, and I know that this topic has been touted by many BizTalkers.

I believe it is very important to separate your hosts by the purpose they serve; not only does this help with security and performance, it also pays off from an operational point of view.

The key separation criteria are:

  • functionality (receive, send, orchestration and tracking)
  • isolation (per application)

Furthermore, by having separate hosts, you have the opportunity to tune each host specifically for the task at hand; for example, you can optimise the polling interval of a host used for messaging scenarios. That is a bonus to performance gained without dragging other scenarios (e.g. orchestrations) into the mix, and thus without degrading your overall BizTalk performance.

From an operational point of view, having separated hosts allows Production Support / BAU work to be carried out with minimal impact on other BizTalk applications. My experience has shown me that mine is usually NOT the only BizTalk consulting company or consultant that has provided a BizTalk application for a given client; I am one of several, and adhering to this practice ensures that I’m a “good citizen”.

I urge you to read Tord Glad Nordahl’s groovy post on this topic over at http://biztalkadmin.com/biztalk-health-check-part-2/.

Until next tip… be good!

BizTalk Server Performance Tips: #3 – Increase HTTP Outbound Connections Limit

Another quick tip for a performance improvement in BizTalk, especially when using HTTP to communicate with other web services (in other words, when you are using SOAP or WCF based Send Ports).

By default (thanks, Microsoft!), a process will only allow two simultaneous outbound HTTP connections to any particular domain (e.g. http://www.wordpress.com). This can severely impact performance.

The key attribute to adjust here is maxconnection, found on the <connectionManagement> element in Machine.config. The default values are as follows.

<connectionManagement>
  <add address="*" maxconnection="2"/>
</connectionManagement>

The default value should be increased in the BizTalk process configuration files (BTSNTSvc.exe.config and, for 64-bit host instances, BTSNTSvc64.exe.config) rather than in the Machine.config file, and I suggest that it be set to 12 times the number of CPUs.

Thus, for a server with two CPUs, the resultant config entry becomes the following (the IP address has been changed to protect the innocent).

<system.net>
 <connectionManagement>
  <add address="127.0.0.1" maxconnection="24"/>
 </connectionManagement>
</system.net>

 

Until next tip… be good!