API Management concurrency control

As I mentioned in the last post on concurrency control for Logic Apps, in today's distributed world it's important not to flood our backend services: the shared and distributed landscape imposes far more restrictions than the good old on-premises days, when we had full control over all our services. That means we need to keep our integration solutions under control so they don't spin away like crazy and cause unwanted flooding of our backend systems.

Historically we could control the requests towards our backend systems with rate limits (Rate-Call-Limit) and Quotas, making sure that a consumer made no more than a maximum number of requests during a time period: minutes, hours, days or months.
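As a sketch of that classic approach (the values and the subscription-based counter key are illustrative assumptions, not from any specific solution), the inbound section of an API Management policy could contain:

```xml
<!-- Allow at most 100 calls per subscription per minute -->
<rate-limit-by-key calls="100" renewal-period="60" counter-key="@(context.Subscription.Id)" />
<!-- And at most 10000 calls per subscription per 30 days (2592000 seconds) -->
<quota calls="10000" renewal-period="2592000" />
```

Both policies cap the total number of calls over a window, but neither says anything about how many calls are in flight at the same time.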

But in the latest release of API Management, the team added, among other things, concurrency control. This means that we can now also control/restrict the number of concurrent calls to our backend service.

Say we have a SaaS service that allows 20 concurrent calls; all calls beyond that limit are declined. This was hard to control from API Management, and all the responsibility was left to the consumers. We had a case where a consumer pushed 1000 calls in a few milliseconds (the distributed world is lovely!), but the backend service couldn't handle it, most of the calls failed, a new batch was fired, and yes, the story repeated itself. It ended with the consumer needing to change their solution (yes, I agree that was the best way), but we had no way of protecting the backend from the flood except Rate-Limit, and that was not an optimal solution.

Now with concurrency control we can guarantee that the backend system never sees more than 20 concurrent calls. Calls beyond that are queued up to a maximum limit and processed as soon as one of the 20 concurrent calls finishes, making sure we don't waste resources on retries and idle time. Using concurrency control is simple; here is the policy: read more here

<limit-concurrency key="expression" max-count="number" timeout="in seconds" max-queue-length="number">
        <!-- nested policy statements -->
</limit-concurrency>
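As a filled-in sketch matching the SaaS scenario above (the values and constant key are illustrative assumptions), limiting the backend to 20 concurrent calls with up to 100 queued requests:

```xml
<!-- A constant key means the limit applies across all consumers of the API -->
<limit-concurrency key="backend-limit" max-count="20" timeout="60" max-queue-length="100">
    <forward-request timeout="60" />
</limit-concurrency>
```

Note that the key is an expression: with a constant string the count is shared globally, while something like `@(context.Subscription.Id)` would give each consumer their own concurrency slot count.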

So let's show a sample. I've created a Logic App to act as a backend service; it doesn't have much logic, just a delay to simulate really heavy work (20 seconds).

Part 1: A simple flow, Logic App to Logic App, with no restrictions:

And now let’s see how this looks like when consumed by another Logic App in a foreach loop (without any concurrency restrictions).

As expected, the execution takes a bit longer than 20 seconds:

Part 2: Adding concurrency control by exposing the Backend Logic App via API Management

The Logic App is imported to API Management and the policy Concurrency Control is added:

And the policy looks like:

		<inbound>
			<base />
			<rewrite-uri id="apim-generated-policy" template="?api-version=2016-06-01&amp;sp=/triggers/request/run&amp;" />
			<set-backend-service id="apim-generated-policy" base-url="https://prod-40.westeurope.logic.azure.com/workflows/24b778e6805344e686536a203ee47bce/triggers/request/paths/invoke" />
			<set-header name="Ocp-Apim-Subscription-Key" exists-action="delete" />
			<!--{ "azureResource": { "type": "logicapp", "id": "/subscriptions/c107df29-a4af-4bc9-a733-f88f0eaa4296/resourceGroups/Concurrency/providers/Microsoft.Logic/workflows/INT0002-SingelInstance/triggers/request" } }-->
		</inbound>
		<backend>
			<limit-concurrency key="constantstring" max-count="2" timeout="120" max-queue-length="100">
				<forward-request timeout="120" />
			</limit-concurrency>
		</backend>
		<outbound>
			<base />
		</outbound>
		<on-error>
			<base />
		</on-error>

To demonstrate the functionality a new Logic App is now used, with the same logic as before but going through API Management to access the backend Logic App.

Executing the Logic App now takes about 4 minutes, since the concurrency is set to 2: 20 calls processed 2 at a time, each taking about 20 seconds, gives roughly 200 seconds of work plus queueing overhead.

So that is great, now we have prevented too many concurrent calls to our backend system, but is it all good? Let's look into the log of our "backend system":

We can see that there are 2 failed runs and 22 runs in total, but as you remember the list had 20 records, so we only wanted 20 requests to our "backend system".

The failed ones failed due to timeout, which also makes the calling Logic App execute a retry, ending with a total of 22 calls to the backend.
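If duplicate calls to the backend are unacceptable, one option is to disable the built-in retry behavior on the consuming side. A sketch (the action name and URI are illustrative assumptions; `retryPolicy` is a standard Logic Apps HTTP action setting):

```json
"Call_Backend_Via_APIM": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://myapim.azure-api.net/backend/invoke",
        "retryPolicy": {
            "type": "none"
        }
    },
    "runAfter": {}
}
```

With `"type": "none"` a timed-out call fails the action immediately instead of silently firing the request again.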

Summary: This is truly a nice feature that we will use a lot, but we also need to understand the nature of the API and how to configure the policy correctly to prevent unwanted behavior. In a flow where resending the information is not allowed, the queue length should be set as low as possible to prevent the retry behavior of clients from resending the information even when it's not wanted. Or we might want to use other techniques for those kinds of flows.

All in all a great and very useful feature, but as always, make sure to understand the feature and the requirements of the protected API when configuring it.

Posted in: | API Management  | Tagged: | API Management  | Integration 

Logic Apps concurrency control

In a distributed world with an increasing amount of moving parts, we need to make sure not to flood our services, since there are more restrictions in the shared and distributed landscape than in the good old days when we had full control over all our services. That means that we need to have control over our solutions so they don't spin away like crazy.

On this topic Logic Apps has earlier had a problem: when looping over an array in a foreach loop and calling a connector, we could choose between 20 concurrent actions or calling it sequentially, one by one. This meant that if 20 parallel actions were flooding the destination, we had to call the destination sequentially, and therefore also take a lot longer. When working with batches this is not always possible since it might take too long, so sometimes we had to create two or more parallel flows in Logic Apps to make sure that we could keep within the time limit.

Making the flow complex and "ugly" looking:

Newly released concurrency control!

But fortunately there is now a newly released property on the foreach action that will help us. It's only available in code view for now, but that works just fine.

The new property looks like:

"runtimeConfiguration": {
    "concurrency": {
        "repetitions": 2
    }
}

Setting repetitions to 2 will behave more or less the same as the solution above, but the Logic App will now be much easier to maintain and work with.

The foreach now looks like this (the body of the function call is removed for clarity):

"For_each": {
    "actions": {
        "INT0020-Update-Fuse-Users": {
            "inputs": {
                "body": {},
                "function": {
                    "id": "/subscriptions/fakeee-15f5-4c85-bb3e-1e108dc79b00/resourceGroups/rgroup/providers/Microsoft.Web/sites/appname/functions/INT0020-Update-Fuse-Users-Compare"
                }
            },
            "runAfter": {},
            "type": "Function"
        }
    },
    "foreach": "@body('INT0020-Split-Array')",
    "runAfter": {
        "GET_FUSE_TOKEN": [
            "Succeeded"
        ]
    },
    "runtimeConfiguration": {
        "concurrency": {
            "repetitions": 2
        }
    },
    "type": "Foreach"
}

Logic App in the designer will now look like:

Triggers also have concurrency control

We can also set this on triggers; the sample below, on a recurrence trigger (or a polling trigger without splitOn), looks like this:

"trigger": {
    "recurrence": {},
    "runtimeConfiguration": {
        "concurrency": {
            "runs": 10
        }
    }
}

For a more detailed explanation of trigger handling, I suggest you read this great post from Toon Vanhoutte.

I'm really happy that the product group has finally released this, since it's such an important part of our new world of distributed landscapes and our "new" responsibility and interest in making sure our solutions don't flood their destinations. We have the power, and now we also have the possibility to restrain it to the appropriate amount.

Posted in: | LogicApps  | Tagged: | Logic Apps  | Integration 

OMS and Non Events

When it comes to monitoring integration flows, there are several types of monitoring that we need to cover. Just the other day I got the requirement to make sure that a Logic App had run at least once during a 24-hour period.

To solve this I started to look at possible solutions, first turning to the Alert section of Logic Apps Diagnostics, since it can create an alarm based on inputs such as number of runs, number of failed runs, and so on. Unfortunately for me, the maximum time window was 6 hours. In this solution we are using Log Analytics and OMS as the monitoring tool, and with the new Logic Apps gallery it's super nice (guess I need another post on this later on). Anyway, there is alarm functionality inside the OMS portal, so I started to look into that.

Alarms in OMS are easily created based on a search, so first we need to create one. This is done in the search area, and the easiest way is to click your way to a starting point for the search and then change the last parts manually. I needed to check that a workflow had executed and ended successfully, so my query ended up like this (easy to reuse, just change resource_workflowName_s to the name of your Logic App):

search * | where Type == "AzureDiagnostics" | where OperationName == "Microsoft.Logic/workflows/workflowRunCompleted" | where status_s == "Succeeded" | where resource_workflowName_s == "INT002_Update_Work_Order2"

This query returns the successful runs, and we will be able to use it in our alarm.
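For a quick manual check, a variant of the same query (a sketch; it simply pipes the results to the standard count operator) returns the number of successful runs in the selected time range, which is the figure the alert threshold will be evaluated against:

```
search * | where Type == "AzureDiagnostics" | where OperationName == "Microsoft.Logic/workflows/workflowRunCompleted" | where status_s == "Succeeded" | where resource_workflowName_s == "INT002_Update_Work_Order2" | count
```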

Setting up the alarm is rather simple: just Add Alert Rule and fill in some information about the alarm. The important parts are the Search Query, where you choose the saved query above; the Time Window, which is how far back we look, so for our case 24 hours back; and the Alert Frequency, which is how often to check the rule, in our case every hour. But in order not to get a new alert every hour (it seems unnecessary to get 7 alarm emails during the night), we can also specify Suppress alerts, which makes sure that an alarm is not sent out more often than every 6 hours.

So in reality we can now be sure that, at most 25 hours after the last run, we will be alerted that there have been no more runs, and we will be alerted again every 6 hours until a run has completed successfully.

There is a possibility to invoke webhooks, runbooks or ITSM actions, but in our case an email is good enough, so we will use that. The email sent out doesn't contain that much information, but it's enough for us: we know that the flow has not been working, and most likely the sending system has failed to send their file.

The email:

Monitoring for events that should happen but don't is always tricky and has historically been hard to solve, so often colleagues or other people in your organisation are given tasks to do daily checks of some sort. It can be small things like checking that a folder is empty, or that a log has a row with today's date in it. Even if the task itself is easy, it burdens the people doing it, and as more of these "small" tasks are added, the more time they consume, time these people should have spent on more important work. So by handling monitoring of predicted events and raising an alarm, we can provide services of great value to these people: now we notify them if anything has stopped, and they can focus on their work. When they can rely on the things we build, we are not only building robust integrations, we are also building trust and a service of great value.

It's often the small things that matter the most.

Posted in: | OMS  | Tagged: | Logic Apps  | OMS  | Monitoring  | Integration 

New blog

Now I thought it was time to create my own blog.

As you might know, I've been blogging a lot on my work blog, http://blog.ibiz-solutions.se/.

So now I feel that I'm ready for my own blog. I'll keep to the same line, blogging about integration, and most of it will surely be about integration in Azure.

I'll cover updates on the open source projects I contribute to here, e.g. the Logic App Template Creator, and some other new fun projects, so stay tuned and poke me if I'm keeping too low a pace on my blog.

Hope to see you around!


Posted in: | Thinking  | Tagged: | Thinking  | Integration