API Management and Auth0 Consumer Friendly APIs

Today we will extend our API Management setup even further, focusing on consumer friendly APIs, which in this case means knowing more about our consumer. In this example we will add a customer id to the consumer representation in Auth0 and use that id to give the consumer an operation called My orders. There is no need to know, use or add an id in the request: the id is added to the token by Auth0, then extracted in API Management and sent downstream to filter the result set.

An identity provider (IDP) is a service that a user or application signs in to (just like Azure AD), and it provides the needed information and grants access to the resources it handles, much like having access to a resource or resource group inside your Azure subscription.

In API Management we already have a trust set up, as described in the previous post Setup Auth0 with API Management. We then added RBAC permissions in the second post RBAC with API Management, giving us granular control over who has access to which operation. Now we will continue by adding a customer id to our consumer representation in Auth0 and extracting that id in API Management, enforcing that a consumer can only access that customer's orders and providing an operation that returns just my orders without any additional parameters. This is achieved via the validate-jwt policy: we read the custom claim from the JWT and store it in a variable for later use.

<set-variable name="customerid" value="@(((Jwt)context.Variables["token"]).Claims.GetValueOrDefault("https://mlogdberg.com/customerid","-1"))" />
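To put that snippet in context, here is a rough sketch (not the exact policy from the video) of how the inbound section could look: validate-jwt stores the validated token in the token variable that the expression above reads, and a set-query-parameter policy then passes the extracted customer id downstream so the backend can filter the result set. The customerId query parameter name and the Auth0 domain placeholder are assumptions for illustration; adjust them to your own API and backend.

<inbound>
    <base />
    <!-- Validate the Auth0 token and keep it in a variable so claims can be read later -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized" output-token-variable-name="token">
        <openid-config url="https://YOUR_AUTH0_DOMAIN/.well-known/openid-configuration" />
    </validate-jwt>
    <!-- Read the custom claim and store it (the snippet above) -->
    <set-variable name="customerid" value="@(((Jwt)context.Variables["token"]).Claims.GetValueOrDefault("https://mlogdberg.com/customerid","-1"))" />
    <!-- Forward the customer id downstream so the backend can filter the result set -->
    <set-query-parameter name="customerId" exists-action="override">
        <value>@((string)context.Variables["customerid"])</value>
    </set-query-parameter>
</inbound>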

The end scenario will look like the illustration below, where we extract the customer id from the token and use it further down the path.

Scenario image

I’ve created a video that goes through all of this; a link is provided below.

Azure API Management and OAuth with Auth0

Links used in the video:

Summary:

Being a great API publisher is not as easy as it first looks. Always striving for better and easier-to-use APIs is a good thing, and here we show how to move more logic inside the setup rather than onto the consumer. If we can prevent the consumer from having to filter data and manage identifiers, we make life easier for the app builders that consume our APIs and give those teams speed and agility. At the same time we can create more secure and robust APIs with more automated testing. There are more ways to use this, and I'm interested to see what you will find most useful.

Posted in: | API Management  | Tagged: | API Management  | Integration  | Serverless  | Auth0  | IDP  | OAuth 


RBAC with API Management

We are extending our API Management and Auth0 setup to add RBAC control over the operations in our APIs. The idea is that consumers can be granted access to specific operations in an API, not only to the whole API. This can be achieved in a number of ways; in this post we will use Auth0 as the identity provider, but you could use any other of your choice.

An identity provider (IDP) is a service that a user or application signs in to (just like Azure AD), and it provides the needed information and grants access to the resources it handles, much like having access to a resource or resource group inside your Azure subscription.

In API Management we already have a trust set up, as described in the previous post Setup Auth0 with API Management. We then add permissions to our representation of the API Management instance in Auth0 and grant the consumer only the permissions we want to give. Once the permissions are set up and granted, we enforce them in our API Management instance via the validate-jwt policy, as sketched below.
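As a rough sketch (not the exact policy from the video), enforcing a permission on an operation could look like the example below. It assumes Auth0 has been configured to include the granted permissions in a permissions claim in the access token; the audience value and the read:orders permission name are placeholders for illustration.

<validate-jwt header-name="Authorization" failed-validation-httpcode="403" failed-validation-error-message="Missing permission" output-token-variable-name="token">
    <openid-config url="https://YOUR_AUTH0_DOMAIN/.well-known/openid-configuration" />
    <audiences>
        <audience>https://my-apim-instance</audience>
    </audiences>
    <required-claims>
        <!-- Only consumers granted this permission in Auth0 may call the operation -->
        <claim name="permissions" match="any">
            <value>read:orders</value>
        </claim>
    </required-claims>
</validate-jwt>

Placing the policy on the individual operation, rather than on the whole API, is what gives the per-operation control.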

The end scenario will look like the illustration below, where we can grant access to some or all operations in our API.

Scenario image

I’ve created a video that goes through all of this; a link is provided below.

Azure API Management and OAuth with Auth0

Links used in the video:

Summary:

There are a lot of situations where extra granularity is needed in the APIs we expose. It helps in creating easier-to-use and more natural APIs, and it can be a good selling point to show that there are more features for premium customers. Regardless, adding RBAC to your APIs will increase security and ease of use. It also gives you the possibility to move the approval of new consumers out of the API Management instance and make it more of a business decision, which helps new consumer adoption. The setup used in Auth0 is very powerful and can be applied to end users as well, which is very helpful when adding more lightweight, single-purpose consumers.

Posted in: | API Management  | Tagged: | API Management  | Integration  | Serverless  | Auth0  | IDP  | OAuth 


Setup Auth0 with API Management

API Management is an awesome API gateway with functionality to really excel at exposing APIs to consumers. When it comes to security there are several options, and today we will look into OAuth. In order to do this we need an IDP (identity provider) that we can configure a trust relationship with.

An identity provider (IDP) is a service that a user or application signs in to (just like Azure AD), and it provides the needed information and grants access to the resources it handles, much like having access to a resource or resource group inside your Azure subscription.

In API Management, setting up trust to an IDP and validating the JWT it issues is easily done via the access restriction policy called validate-jwt.
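As a rough sketch, such a policy placed in the API's inbound section could look like the example below; the openid-config URL is the Auth0 well-known endpoint listed under the video links, and the audience value is a placeholder for the identifier your API Management instance has in Auth0.

<inbound>
    <base />
    <!-- Validate tokens issued by Auth0 using its published signing keys -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
        <openid-config url="https://YOUR_AUTH0_DOMAIN/.well-known/openid-configuration" />
        <audiences>
            <audience>https://my-apim-instance</audience>
        </audiences>
    </validate-jwt>
</inbound>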

Let’s go through what the setup looks like; we need to set up a trust between your API Management instance and your Auth0 instance.

Scenario image

I’ve created a video that goes through all of this; a link is provided below.

Azure API Management and OAuth with Auth0

Links used in the video:

  • Validate JWT Token
  • Auth0 Openid Configuration url: https://YOUR_AUTH0_DOMAIN/.well-known/openid-configuration

Summary:

Adding a second security layer like this increases security and, as you will see later on, flexibility. It’s an awesome start for building a nice consumer experience for your APIs. In API Management it’s very easy to attach any IDP, so you can pick and choose your favourite and the setup will be somewhat similar.

Posted in: | API Management  | Tagged: | API Management  | Integration  | Serverless  | Auth0  | IDP  | OAuth 


Azure Functions logs in Application Insights part 2

In part one I covered the ILogger interface and what it provides for us, but sometimes we want more control over our logging, want to enrich it, or have already implemented a lot of logging with TelemetryClient and just want to connect that logging to our end-to-end experience.

We use the same scenario: a message is received via API Management and sent to an Azure Function that publishes the message on a topic, a second Azure Function reads the published message and publishes it on a second topic, and a third Azure Function receives the message and sends it over HTTP to a Logic App that represents a third-party system.

Scenario image

Read more about the scenario background and the standard ILogger interface in the previous post Azure Functions logs in Application Insights.

Connecting custom logs from TelemetryClient

If we need more custom logging options than the standard ILogger provides, or we already have a lot of logging with TelemetryClient that is not connected to the end-to-end scenario, we need to attach these entries to our end-to-end logging experience via the TelemetryClient to take advantage of the more advanced logging features.

In order to achieve this we need to add some context properties to our TelemetryClient instance. Here is one way to do this with Service Bus, where we receive the Message object.

From the Message object we can extract the Activity via an extension method found in the Microsoft.Azure.ServiceBus.Diagnostics namespace. From the result we can get the RootId and ParentId, which are the values for the current operation id and the parent id. Read more on how the operation id is built and how the hierarchy is designed.

public string Run([ServiceBusTrigger("topic1", "one", Connection = "servicebusConnection")]Message message, ILogger log)
{
    string response = "";
    // Extract the Activity from the incoming Service Bus message
    // (extension method from Microsoft.Azure.ServiceBus.Diagnostics).
    var activity = message.ExtractActivity();

    // Attach the correlation ids so custom telemetry from this TelemetryClient
    // instance joins the same end-to-end operation in Application Insights.
    telemetryClient.Context.Operation.Id = activity.RootId;
    telemetryClient.Context.Operation.ParentId = activity.ParentId;

    telemetryClient.TrackTrace("Received message");

This can also be done from an HTTP call (read more).

// If there is a Request-Id received from the upstream service, set the telemetry context accordingly.
if (context.Request.Headers.ContainsKey("Request-Id"))
{
    var requestId = context.Request.Headers.Get("Request-Id");
    // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
    telemetryClient.Context.Operation.Id = GetOperationId(requestId);
    telemetryClient.Context.Operation.ParentId = requestId;
}
	
//GetOperationId method
public static string GetOperationId(string id)
{
    // Returns the root ID from the '|' to the first '.' if any.
    int rootEnd = id.IndexOf('.');
    if (rootEnd < 0)
        rootEnd = id.Length;

    int rootStart = id[0] == '|' ? 1 : 0;
    return id.Substring(rootStart, rootEnd - rootStart);
}

Now we will also add this small piece of code that tracks an event.

var evt = new EventTelemetry("Function called");
evt.Context.User.Id = "dummyUser";
telemetryClient.TrackEvent(evt);

And this can now be found in the end-to-end tracing.

Event in Overview Event in Telemetrylist

Adding an Extra “Operation”

This is probably my favourite thing in all of this. When processing messages, some parts of the process are bigger and/or more important to understand; this could be a complex transformation or just a set of steps that we want to group. For these we can use StartOperation on the TelemetryClient: it starts a scope for the operation that stays open until we call StopOperation, it has a status, and we can set properties on it and get the execution time of the process.

An operation is either a dependency operation (DependencyTelemetry), intended to mark a dependency on another resource, or a request operation (RequestTelemetry), meant to mark the creation of a request. I will use a dependency operation in my case since it’s the closest fit for my purpose: I use it to mark that more complex logic has been executed in my function and what the result of it was.

I use this in several ways: one is when I process a lot of items in my function, to mark that each is executed (like processing a record in a list); another is when performing a complex transformation; and so on.

In my example below I’ll demonstrate the usage of an operation with a simple for loop, simulating the processing of a line and adding some valuable metadata. As a tip, set the Data property to the object that is going to be processed so it’s easier to understand what data was processed.

for (int i = 0; i < 10; i++)
{
    var op = telemetryClient.StartOperation<DependencyTelemetry>(string.Format("Function2_line_{0}", i.ToString()));

    op.Telemetry.Data = string.Format("Processing {0} and random guid {1}", i.ToString(), Guid.NewGuid().ToString());
    op.Telemetry.Type = "LineProcessor";
    op.Telemetry.Success = i < 9;
    op.Telemetry.Sequence = i.ToString();
    op.Telemetry.Context.GlobalProperties.Add("LineID", i.ToString());
	
	//do some heavy work!
	
    telemetryClient.StopOperation(op);                    
}

This will result in a much more detailed log in the end-to-end overview.

Operation in Overview

Custom properties are easily set on each operation for traceability or extra information; below is how we can add LineID as a custom property.

op.Telemetry.Context.GlobalProperties.Add("LineID", i.ToString());

Operation and custom Property

We can also mark the operation as failed even if we continue the run, which is good for highlighting partial failures while still completing the rest of the run.

Operation and custom Property

Then we can find it under failures and from there start working on how to fix it.

Operation and failed operations

Read more:

Summary:

In this second part we have covered more advanced logging with TelemetryClient and how to connect your existing TelemetryClient logging to the end-to-end experience provided by Application Insights.

StartOperation is really powerful for highlighting what is happening, but I would like a third kind of operation, since using a dependency sounds wrong when it’s a section of code in the same process. It works, though!

This post is about how things can be done, and hopefully it can be guidance towards a better logging experience; we want to avoid the developer’s “let me just attach my debugger” response to the question of what is wrong with a flow running in production.

Posted in: | Azure Functions  | Application Insights  | Tagged: | Azure Functions  | Integration  | Serverless  | Application Insights 


Azure Functions logs in Application Insights

Azure Functions is a great tool in our toolbox, and like all our tools it has its strengths and flaws. When it comes to logging and monitoring, Functions relies on Application Insights and, further on, Azure Monitor. A lot also depends on how you implement your solution, and there are some out-of-the-box features that are really amazing.

If you are unfamiliar with Functions and Application Insights, read more here: https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring

By just enabling Application Insights, without any further add-ons, all logs that are sent to the ILogger object end up in Application Insights. We also get some end-to-end tracking via supported SDKs and HttpClient requests. This is very useful for understanding what is happening, so let’s look into how this manifests in a rather complex scenario.

The scenario is that a message is received via API Management and sent to an Azure Function that publishes the message on a topic, a second Azure Function reads the published message and publishes it on a second topic, and a third Azure Function receives the message and sends it over HTTP to a Logic App that represents a third-party system.

Scenario image

After connecting all functions to the same Application Insights instance we send a message! Let’s see what this looks like in Application Insights; first we go to the Performance tab:

Performance tab

First let’s point out some small, good-to-know things: the Operation Name in this view is actually the name of the function that you execute, so Function1 in the list of Operation Names below:

Operation Name

has this signature in the code: FunctionName("Function1"). This is important as your instances grow; make sure the name of each function is unique so it's easy to find.

[FunctionName("Function1")]
[return: ServiceBus("topic1", Microsoft.Azure.WebJobs.ServiceBus.EntityType.Topic, Connection = "servicebusConnection")]
public static async Task<string> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    ILogger log)
{

Then we pick one of the requests, press Samples and pick one, and it loads the end-to-end overview.

End to end Overview

We can actually see all the steps from the first function to the last and the call to the Logic App, impressive! The message is sent over Service Bus topics, but the correlation is all set up and we can follow the message all the way!

So this is truly awesome, and a lot of standard properties are set out of the box, but none specific to our request, so it’s hard to understand what data was sent. Let’s look at some ways of adding custom information to the logs.

Logs in Application Insights

Out of the box, logs are sent to Application Insights via the ILogger object, so let’s take a short look at the interface. It contains three different operations: Log, which writes a log entry (there are different levels and extension methods to make it easier to use); IsEnabled, which checks whether the logger is enabled for a given level; and finally BeginScope, which creates a scope for our log entries. Read more

public interface ILogger
{
    //
    // Summary:
    //     Begins a logical operation scope.
    //
    // Parameters:
    //   state:
    //     The identifier for the scope.
    //
    // Returns:
    //     An IDisposable that ends the logical operation scope on dispose.
    IDisposable BeginScope<TState>(TState state);
    //
    // Summary:
    //     Checks if the given logLevel is enabled.
    //
    // Parameters:
    //   logLevel:
    //     level to be checked.
    //
    // Returns:
    //     true if enabled.
    bool IsEnabled(LogLevel logLevel);
    //
    // Summary:
    //     Writes a log entry.
    //
    // Parameters:
    //   logLevel:
    //     Entry will be written on this level.
    //
    //   eventId:
    //     Id of the event.
    //
    //   state:
    //     The entry to be written. Can be also an object.
    //
    //   exception:
    //     The exception related to this entry.
    //
    //   formatter:
    //     Function to create a string message of the state and exception.
    void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter);
}

Let’s look at the simplest case: adding a log line with LogInformation (an extension on Log) containing some unique data, in this case our product number, so we can see which product number this request is processing:

	log.LogInformation("ProductNumber {productnumber}", productnumber);

Let’s move to the collected Telemetry to see this log in Azure Portal:

Scenario image

And now we can find our log row:

ProductNumber log row

So now we know which product this request is processing. This is searchable in the Search menu, and a simple click takes you to the end-to-end tracking.

ProductNumber log row

Create a log scope with BeginScope

Sometimes we want some data to be added as properties to all upcoming log entries. This can be done via BeginScope: with the following line, all upcoming logs will automatically have the market and product number added as properties, prefixed with prop__.

	using (log.BeginScope("Function3 processed message for {market} with productnumber {productnumber}", market, productnumber))
	{
		log.LogInformation("Processing Lines");
	}

Upcoming log entries will have prop__market and prop__productnumber.

Properties added with Begin Scope

One good use is looping over multiple items, so that all entries get these properties added. All other properties added with the {} format in LogInformation, like lineNumber and guid in the sample below, will also be added to customDimensions with the prefix prop__.

	 log.LogInformation("Processing {lineNumber} and random guid {guid}",i.ToString(),Guid.NewGuid().ToString());

And if we want to loop over multiple lines we can add an extra scope to make sure we log the unique data for each iteration. Below we log the lineNumber and a random guid, and as you can see we still have the productnumber and market from before, all with the prop__ prefix.

Properties added with Begin Scope

Code sample of the loop

using (log.BeginScope("Function3 processed message for {market} with productnumber {productnumber}", market, productnumber))
{
    log.LogInformation("Processing Lines");
    for (int i = 0; i < 10; i++)
    {
        using (log.BeginScope("Function3 Processing line {lineNumber}", i.ToString()))
        {
            log.LogInformation("Processing {lineNumber} and random guid {guid}", i.ToString(), Guid.NewGuid().ToString());
        }
    }
}

Add Filterable Property to Request

Let’s say that we don’t just want to find a specific run; we want to understand overall performance or errors in broader terms, for instance per market. This is done via the Performance or Failures tabs, but in order to narrow down the data presented we need to be able to filter it, like in the image below where a filter on Market is added.

ProductNumber log row

To get this working we need to add a property to our request log entry (the request log is the first log event; see the telemetry for more information). The code to achieve this is super simple, we just need to add this small snippet:

System.Diagnostics.Activity.Current?.AddTag("Market", market);

The function now looks like:

public static class Function1
{
    [FunctionName("Function1")]
    [return: ServiceBus("topic1", Microsoft.Azure.WebJobs.ServiceBus.EntityType.Topic, Connection = "servicebusConnection")]
    public static async Task<string> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        string productnumber = (string)data.productnumber;
        string market = (string)data.marketnumber;

        log.LogInformation("ProductNumber {productnumber}", productnumber);
        System.Diagnostics.Activity.Current?.AddTag("Market", market);

        return JsonConvert.SerializeObject(data);
    }
}

This will now give us the following result:

In the Search section we can pick the Market in the filter to build our queries.

Search with add filter Market


In the Performance and Failures sections we need to do a bit more to work with the filter: we need to add customDimensions before Market, making the filter customDimensions.Market.

Search with add filter Market

And as you can see we get the filtered result after pressing the green confirm button.

Search with add filter Market

Adding more tags allows more possibilities for search. A tag is attached only to the request log entry and not to the following traces, but I would recommend logging both an id and a broader value, like market or similar, to find errors and performance issues that span a wider range.

Summary:

I love developing in Azure Functions, but in order to have a good operations experience there are a lot of things to consider in the logging strategy, to make sure it’s easy to understand what is going on and to take advantage of the statistics provided by Application Insights.

This post is about how things can be done, and hopefully it can be guidance towards a better logging and searching experience; we want to avoid the developer’s “let me just attach my debugger” response to the question of what is wrong with a flow running in production.

Posted in: | Azure Functions  | Application Insights  | Tagged: | Azure Functions  | Integration  | Serverless  | Application Insights 


August 2019 update Logic App Template Creator

It has been some time since updates were last announced for the Logic App Template Creator, but work has been ongoing thanks to contributors, so it's time to share the new updates.

There are some small changes and bug fixes as usual, and some reported issues have been fixed, but here is the new functionality.

The GitHub repository is found here: https://github.com/jeffhollan/LogicAppTemplateCreator

Updates

  • ISE support, you can now extract Logic Apps in ISE for deployments
  • Extract custom Connector, you can now extract an ARM template for Custom Connectors
  • Extract Integration Account Schemas, you can now extract an ARM template for Integration Account Schemas.
  • Get the Deployed Logic App’s URL as an output parameter from the ARM template deploy (usable when URL is needed in nested ARM templates)
  • Improved Get-Help method to provide more valuable help and information
  • Extract Logic App that will be in disabled mode when deployed
  • Possibility for improved security around parameters and passwords generated in the ARM template and parameter files

Thanks to contributors!

Posted in: | LogicApps  | Tagged: | Logic Apps  | ARM Template  | Logic Apps Template Generator 


Resource Group Deployments 800 Limit fix

There is a limit on how many historical deployments are saved for a resource group in Azure. Currently the limit is 800, and there is no out-of-the-box solution for deleting these historical deployments when the limit is closing in. So what happens is that the next deployment, the 801st, fails with error code DeploymentQuotaExceeded and a message as follows:

Creating the deployment 'cra3000_website-20190131-150417-7d5f' would exceed the quota of '800'. The current deployment count is '800', please delete some deployments before creating a new one. Please see https://aka.ms/arm-deploy for usage details.

To solve this we manually entered the resource group and deleted some deployments, but that isn’t what we want to do in the long run. So I looked around, found some PowerShell scripts I could reuse for this purpose, and created a runbook in an Automation account.

I’m no pro on Automation accounts, but I can set them up and use them; there might be some of you who could give me a tip or two on this subject.

When creating the account we need to use the Azure Run As account, since this creates an account in AAD that will be used for access. By default the account gets Contributor access on the whole subscription (if you have the permissions to create it; otherwise you need someone who can).

Create Automation Account

If we then look at the resource group we can find this account with Contributor access, which also means that if you only want this to apply to certain resource groups, all you need to do is restrict access for this user; when the script runs Get-AzureRmResourceGroup, only the resource groups this user has access to are returned (by default the whole subscription).

Automation account contributor

The script is simple; there might be some improvements needed, such as parallel actions for big environments, but after running it on a schedule for a while it shouldn’t be a big issue.

All in all it collects all resource groups that the user has access to and starts collecting their deployments. In this script I skip a resource group if it has fewer than 100 deployments, since there is no pain in keeping these; it’s only painful if we get over 800. If there are more, the script deletes deployments older than 90 days (by default) to keep some history; this can easily be changed in the script. It runs as a PowerShell runbook.

# Optional parameter: limit the cleanup to a single resource group
param([string]$ResourceGroupName)

$daysBackToKeep = 90

$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection "
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName  

    "Logging in to Azure..."
    (Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint ) | out-null

    "Obtaining Subscription information..."
    $context = Get-AzureRmContext
    Write-Output ("Subscription Name: '{0}'" -f $context.SubscriptionName)
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

$ResourceGroups = Get-AzureRmResourceGroup

if ($ResourceGroupName)
{
    $ResourceGroups = $ResourceGroups | where { $_.ResourceGroupName -EQ $ResourceGroupName }
}

foreach( $resourceGroup in $ResourceGroups) {
    $rgname = $resourceGroup.ResourceGroupName
    Write-Output ("Resource Group: '{0}'" -f $rgname)

    $deployments = Get-AzureRmResourceGroupDeployment -ResourceGroupName $rgname
    Write-Output ("Deployments: '{0}'" -f $deployments.Count)

    if($deployments.Count -gt 100 ) {
        # Delete everything older than $daysBackToKeep days to keep some history
        $deploymentsToDelete = $deployments | where { $_.Timestamp -lt ((get-date).AddDays(-1*$daysBackToKeep)) }
        Write-Output ("Deployments to delete: '{0}'" -f $deploymentsToDelete.Count )
        foreach ($deployment in $deploymentsToDelete) {

            Remove-AzureRmResourceGroupDeployment -ResourceGroupName $rgname -DeploymentName $deployment.DeploymentName -Force
            Write-Output ("Deployment deleted: '{0}'" -f $deployment.DeploymentName )
        }
        Write-Output ("Deployments deleted")
    }
    else {
        Write-Output ("Less than 100 deployments, nothing to delete, only {0}" -f $deployments.Count)
    }

}

Running it will start deleting deployments; here is the deployment history before a run:

Deployments history pre clear

And afterwards we can see that the history is deleted (for this run I just removed the 100-deployment check):

Deployments history post clear

After running it the logs will look something like this:

Runbook logs

Summary:

This limit is annoying and, as I understand it, something that should be fixed over time, but until then we need these kinds of helper scripts in environments with a lot of deployments. Hopefully this helps out in the meantime.

Read more on the limits:

https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#resource-group-limits

Posted in: | ARM  | Tagged: | Azure  | ARM  | Azure Resource Manager  | Powershell  | Automation Account 


Fixing Function v2 add extension issues in portal

I’m out doing trainings, and it’s both fun and a great chance for me to learn new stuff, since there is always something that, for no apparent reason, goes south.

Today I was out on one of these missions and we were doing some labs developing functions. When we added an extra input from a Table Storage, the Function App just crashed, and then we only got errors that the site was offline and it didn’t come up. Everybody encountered it, so I had to investigate.

The response was a 503 Service Unavailable and the body was the following HTML content (I removed the style since it just took up a lot of space):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
    <title>Host Offline</title>
 </head>
<body>
    <div>&nbsp;</div>
    <div id="wrapper">
        
        <div id="page">
            <div id="content">

                <div class="box">
                    <h2>This Azure Functions app is down for maintenance</h2>
                    <p>
                        Host is offline
                    </p>
                </div>
            </div>
        </div>
    </div>
     
</body>
</html>

So what did we do to get here? We just wanted to add an input from a Table Storage to a standard HTTP trigger, starting from a blank new default function.

Here is my default HTTP function: Default Http Trigger

Then I went to the Integrate pane and started to add the input from my Azure Table Storage

Integrate Pane

After selecting the Azure Table Storage we get a warning that the extension is not installed, so let’s install it!

Install Storage Extension

It’s installing and we proceed as told in the message:

Install Storage Extension Message

So we just continued, filled in the other details and pressed Save.

After that we went back to the function to add the input; it’s just a simple signature update from:

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)

To:

public static async Task<IActionResult> Run(HttpRequest req, Newtonsoft.Json.Linq.JArray inputTable, ILogger log)

Now when we run the function, inputTable should be populated from the Azure Table Storage, but when we try to execute it we just get HTTP error code 503 Service Unavailable, and in the body I can see the text “Host Offline”.

503 Service Unavailable Host Offline

So we started to look into this and read that it could be due to the extension not being installed properly; to fix this we followed this guide: https://github.com/Azure/azure-functions-host/wiki/Updating-your-function-app-extensions

If you have the same problems, make sure that the extensions file exists and contains at least Microsoft.Azure.WebJobs.Extensions.Storage; if not, create the file and/or add Microsoft.Azure.WebJobs.Extensions.Storage to it. Sample below:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <WarningsAsErrors />
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="3.0.0" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.0.*" />
  </ItemGroup>
</Project>  

If you are missing the file it can easily be created in the command prompt with a simple command:

echo.>extensions.csproj

Run the build command (this takes a while to run so be patient):

dotnet build extensions.csproj -o bin --no-incremental --packages D:\home\.nuget

Restore extension build complete

So now the extension is installed. Let’s verify whether this solves the issue completely: start the Function App and test the function again.

If you are as unlucky as me and the function is still not working, we need to do one more thing. Let’s go back to Kudu and the CMD prompt; the problem is the app_offline.htm file:

Restore extension build complete

Delete it, go back and test again, and it works!

Restore extension build complete

Summary:

The problem is that the extension is not properly installed: when we add the extension in the configuration without it actually being installed, the Function App crashes and produces the app_offline.htm file, and as long as that file is present in the folder the response defaults to 503 and Host Offline. By removing the file the Function App starts executing as normal, and if we have fixed the extensions no errors come up. This works for all extensions.

By showing the problem and how to reproduce it we can help improve the product! I hope this helps someone and leads to a fix from the product team.

Posted in: | Azure Functions  | Tagged: | Azure Functions  | Integration  | Serverless 


Logic App Template Creator Updates January 2019

Updates to the Logic Apps Template Creator have been published:

A minor thing for usage, but great for consistency and quality in the generator, is that there is now a build and release pipeline set up in DevOps.

  • Added Build & Release via DevOps to increase quality in merged sprints and automate release to PowerShell Gallery LogicAppTemplate
  • Improved support for Connectors to Storage Tables and Queues.
  • Added Commandlet to generate ARM Template for Integration Account Maps

Now the LogicAppTemplate is updated more frequently, since I’ve added a build and release setup that publishes new releases to the PowerShell Gallery.

PS> Install-Module -Name LogicAppTemplate

Or update to the newest version

PS> Update-Module -Name LogicAppTemplate

Storage Connector Tables and Queues

These are now generated in the same way as the Storage Blob connector, meaning that the key is collected at deployment time based on the storage account name instead of needing to be provided as a parameter. This makes deployments simpler and neater.

There are three parameters added:

"azureblob_name": {
      "type": "string",
      "defaultValue": "azureblob"
    },
    "azureblob_displayName": {
      "type": "string",
      "defaultValue": "myDisplayName"
    },
    "azureblob_accountName": {
      "type": "string",
      "defaultValue": "myStorageAccountName",
      "metadata": {
        "description": "Name of the storage account the connector should use."
      }
    }

And they are later used in the connection to get the accountKey automatically during deployment.

 {
      "type": "Microsoft.Web/connections",
      "apiVersion": "2016-06-01",
      "location": "[parameters('logicAppLocation')]",
      "name": "[parameters('azureblob_name')]",
      "properties": {
        "api": {
          "id": "[concat('/subscriptions/',subscription().subscriptionId,'/providers/Microsoft.Web/locations/',parameters('logicAppLocation'),'/managedApis/azureblob')]"
        },
        "displayName": "[parameters('azureblob_displayName')]",
        "parameterValues": {
          "accountName": "[parameters('azureblob_accountName')]",
          "accessKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('azureblob_accountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]"
        }
      }
    }

All the magic is in the listKeys part, which tells ARM to collect the key via a reference to the storage account (this also means that the account doing the resource group deployment needs access to the storage account).

[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('azureblob_accountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]

New Cmdlet Get-IntegrationAccountMapTemplate

Another improvement is that we can now extract a map from an Integration Account into a directly deployable ARM template. It works similarly to the other extractions, and below is a sample of how to get an Integration Account map as an ARM template.

Get-IntegrationAccountMapTemplate -ArtifactName 'mapname' -IntegrationAccount 'ianame' -ResourceGroup 'myresourcegroup' -SubscriptionId 'guid()' -TenantName 'mattiaslogdberg.onmicrosoft.com'

Summary:

Exporting Logic Apps via the Logic App Template Extractor simplifies the CI/CD part of using Logic Apps, without extra manual work: you develop inside the portal and then just extract the work and set up the import to the next environment.

Posted in: | API Management  | Tagged: | LogicApps  | Integration  | ARM Templates  | ARM 


API Management Template Creator Updates December 2018

Updates to the API Management Template Creator have been published:

I’ve had a lot of help this time from some awesome contributors, a big thanks to them: NilsHedstrom, Geronius, joeyeng, Lfalk.

A minor thing for usage, but great for consistency and quality in the generator, is that there is now a build and release pipeline set up in DevOps.

  • Improved support for Identity Providers, Products and more
  • Added automation for Build & Release via DevOps to increase quality in merged sprints and automate release to PowerShell Gallery
  • Added to PowerShell Gallery APIManagementTemplate
  • Split the template into a template per resource to follow API Management DevOps best practices

Now it’s much easier to get started, just install the APIManagementTemplate module from the PowerShell Gallery.

PS> Install-Module -Name APIManagementTemplate

Or update to the newest version

PS> Update-Module -Name APIManagementTemplate

Split the Template to a template per resource

In order to follow the best practices from the Azure API Management DevOps example, we now have a new command, Write-APIManagementTemplates. This command takes a template as input and splits it into one file per resource, making it easy to manage the files and create a more customized deployment with a linked template. Read more at GitHub.

 Get-APIManagementTemplate -APIManagement MyApiManagementInstance -ResourceGroup myResourceGroup -SubscriptionId 80d4fe69-xxxx-4dd2-a938-9250f1c8ab03 | Write-APIManagementTemplates -OutputDirectory C:\temp\templates -SeparatePolicyFile $true

Summary:

Exporting APIs via the API Management Template Extractor simplifies the CI part of using API Management: we can select a specific API and export only that API, with operations, version sets, schemas, properties etc., without extra manual work. You develop inside the portal and then just extract the work and set up the import to the next environment.

Posted in: | API Management  | Tagged: | API Management  | Integration  | ARM Templates  | ARM