Blog-Feed

Azure CLI: Create an Azure AD application for an API that exposes OAuth2 Permissions

Yesterday I was challenged to register an Azure AD application for my API using the Azure CLI. I know how to register Azure AD applications using PowerShell and the AzureAD module, and since I switched from a Windows system to a MacBook Pro, I thought I could use PowerShell Core to register my application. Unfortunately, the AzureAD module is not available for PowerShell Core, so I checked out the Azure CLI commands.

A first look at the Azure CLI documentation suggests that registering an application in Azure AD is very easy:

az ad app create --display-name myapi --identifier-uris https://myapi

But that alone was not enough for my case, because my API exposes some OAuth2 Permissions and I did not find an optional parameter to specify them. That's why I looked at the "az ad app update" command, and I noticed that you can set an application's property by using the optional parameter "--set".

In my case I created an additional JSON file, oauth2-permissions.json, that contains the definition of all OAuth2 Permissions that my API exposes.

[
    {
        "adminConsentDescription": "Allows the app to delete items of the signed-in user",
        "adminConsentDisplayName": "Delete items",
        "id": "85b8f1a0-0733-47dd-9af4-cb7221dbcb73",
        "isEnabled": true,
        "lang": null,
        "origin": "Application",
        "type": "User",
        "userConsentDescription": "Allows the app to delete your items",
        "userConsentDisplayName": "Delete items",
        "value": "Items.Delete"
    },
    {
        "adminConsentDescription": "Allows the app to update items of the signed-in user",
        "adminConsentDisplayName": "Update items",
        "id": "5f9755ce-8e8a-42d9-bedf-040aceb274ea",
        "isEnabled": true,
        "lang": null,
        "origin": "Application",
        "type": "User",
        "userConsentDescription": "Allows the app to update your items",
        "userConsentDisplayName": "Update items",
        "value": "Items.Update"
    },
    {
        "adminConsentDescription": "Allows the app to create items of the signed-in user",
        "adminConsentDisplayName": "Create items",
        "id": "d75ea03e-817a-4f3a-b7da-17090ba8f779",
        "isEnabled": true,
        "lang": null,
        "origin": "Application",
        "type": "User",
        "userConsentDescription": "Allows the app to create items",
        "userConsentDisplayName": "Create items",
        "value": "Items.Create"
    },
    {
        "adminConsentDescription": "Allows the app to read items of the signed-in user",
        "adminConsentDisplayName": "Read items",
        "id": "8411eda6-47de-4082-aed1-2568243ba679",
        "isEnabled": true,
        "lang": null,
        "origin": "Application",
        "type": "User",
        "userConsentDescription": "Allows the app to read your items",
        "userConsentDisplayName": "Read items",
        "value": "Items.Read"
    }
]

To register my API I tried the following, but I got an error from Azure AD:

API_APP=$(az ad app create --display-name myapi --identifier-uris https://myapi)
# use jq to get the appId
API_APP_ID=$(echo $API_APP | jq -r '.appId')
az ad app update --id $API_APP_ID --set oauth2Permissions=@oauth2-permissions.json

Azure AD moans that an OAuth2 Permission already exists and must be disabled before it can be deleted. Apparently the application is created with a default permission. I ended up with the following script to create my API:

# create the API app
API_APP=$(az ad app create --display-name myapi --identifier-uris https://myapi)

# get the app id
API_APP_ID=$(echo $API_APP | jq -r '.appId')

# disable default exposed scope
DEFAULT_SCOPE=$(az ad app show --id $API_APP_ID | jq '.oauth2Permissions[0].isEnabled = false' | jq -r '.oauth2Permissions')

az ad app update --id $API_APP_ID --set oauth2Permissions="$DEFAULT_SCOPE"

# set needed scopes from file 'oauth2-permissions.json'
az ad app update --id $API_APP_ID --set oauth2Permissions=@oauth2-permissions.json

# create a ServicePrincipal for the API
az ad sp create --id $API_APP_ID
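The jq step that disables the default scope is easy to try out locally, without touching Azure AD. Here is the same filter applied to a minimal stand-in for the "az ad app show" output (the JSON below is made up for illustration):

```shell
# a minimal stand-in for the 'az ad app show' response
SAMPLE='{"oauth2Permissions":[{"id":"00000000-0000-0000-0000-000000000001","isEnabled":true}]}'

# same filter as in the script: flip isEnabled on the default permission,
# then extract just the oauth2Permissions array
DISABLED=$(echo "$SAMPLE" | jq '.oauth2Permissions[0].isEnabled = false' | jq -r '.oauth2Permissions')
echo "$DISABLED"
```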

Kubernetes: dapr and distributed tracing with Azure Monitor

Microservices are the modern way of designing software architectures. A microservice is a simple and independently deployable service that can scale according to your needs. Compared to a monolithic architecture, the interface layer has moved to the network. As developers, we are used to debugging with the call stack in a monolithic architecture. With microservices those days are over, because a call stack is only available within a process. But how do we debug across process boundaries? That is where distributed tracing comes in.

With ApplicationInsights, Azure Monitor offers a distributed tracing solution that makes a developer's life easier. ApplicationInsights offers an application map view that aggregates many transactions to show a topological view of how the systems interact, and what the average performance and error rates are.

Distributed tracing in dapr uses OpenCensus (which has since been merged into OpenTelemetry) for distributed traces and metrics collection. You can define exporters to export telemetry to an endpoint that can handle the OpenCensus format. dapr adds an HTTP/gRPC middleware to the dapr sidecar that intercepts all dapr and application traffic and automatically injects correlation IDs to trace distributed transactions.

In order to push telemetry to an instance of ApplicationInsights, an agent that understands the telemetry format must transform and push the data to ApplicationInsights. There is a component named LocalForwarder that collects OpenCensus telemetry and routes it to ApplicationInsights. LocalForwarder is an open source project on GitHub.
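As a sketch, the wiring looks like this in dapr: an exporter component that points at the LocalForwarder, plus a configuration that enables tracing on the sidecar. The field names follow the dapr 0.x documentation of the time, and the agent address is a placeholder (current dapr versions configure tracing differently, via Zipkin/OpenTelemetry endpoints):

```yaml
# exporter component that sends OpenCensus telemetry to the LocalForwarder
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
spec:
  type: exporters.native
  metadata:
  - name: enabled
    value: "true"
  - name: agentEndpoint
    value: "<LocalForwarder address>"
---
# configuration that enables tracing on the dapr sidecar
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    enabled: true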

Demo Architecture

I have created a demo architecture that shows how distributed tracing in dapr is configured and how telemetry is routed to ApplicationInsights. To keep it simple, the application consists of four services. There are three backend services, ServiceA, ServiceB and ServiceC, which accept HTTP requests and return a simple string. The fourth service is a simple Frontend that uses Swagger to render a UI and makes calls to the backend services.
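Because the services run with dapr sidecars, the Frontend never calls a backend directly; it calls its local sidecar's invoke API, which is where the tracing middleware injects the correlation IDs. Roughly, using dapr's default HTTP port and a hypothetical method name (the demo's actual route may differ):

```shell
APP_ID="servicea"   # dapr app id of the target backend
METHOD="hello"      # hypothetical method name, for illustration only
URL="http://localhost:3500/v1.0/invoke/${APP_ID}/method/${METHOD}"
echo "$URL"
# inside the pod, 'curl "$URL"' would return the backend's string
```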

After the application is deployed to Kubernetes and some test data is generated, the application map of ApplicationInsights can be viewed.

Demo application on GitHub

The demo application is available in my GitHub repository. The repository contains a detailed description of how to set up distributed tracing in dapr on Kubernetes.

https://github.com/AndreasM009/dapr-distributed-tracing-azure-monitor

Kubernetes: Producer Consumer pattern with scalable consumer using dapr, KEDA and Azure ServiceBus Queues

I think every architect and developer knows the producer-consumer pattern. It is used to create jobs that are processed asynchronously in the background: the Producer creates the jobs, and the Consumer processes them. To store the job descriptions, the Producer typically uses a message queue. In a cloud environment a lot of different queue technologies are available, RabbitMQ, Redis or Azure ServiceBus Queues to name just a few. Normally, as an architect or developer, you choose one technology and use the appropriate integration library in your code. You have to know how the integration library works, and you have to ensure that it is available for your development platform. Whenever you change to another queue technology, the code must be adapted as well. Getting used to an integration library can be hard and requires additional work for your developers.

dapr is an event-driven, portable runtime for building microservices on cloud and edge. dapr changes the way how you build event-driven microservices.

In dapr you can use output and input bindings to send messages to and receive messages from a queue. When you decide on a queue technology like Redis, RabbitMQ or Azure ServiceBus Queues, you usually have to use its integration library in your code. With dapr you integrate input and output bindings on a higher abstraction level, and you don't need to know how the underlying integration library works.
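As a sketch, such a binding is just a dapr component definition; the producer and consumer only ever see the binding's name. The component name and queue name below are made up for illustration, and the metadata fields follow dapr's bindings.azure.servicebusqueues documentation:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: jobs              # binding name seen by producer and consumer
spec:
  type: bindings.azure.servicebusqueues
  metadata:
  - name: connectionString
    value: "<ServiceBus connection string>"
  - name: queueName
    value: jobs
```

With this in place, the Producer posts to its sidecar (POST /v1.0/bindings/jobs) and the Consumer receives messages on its own /jobs endpoint, and neither of them links a ServiceBus SDK.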

With dapr you can integrate queues independently of the underlying technology. But what about scaling? Sometimes you want to scale out the consumer depending on the number of messages in the queue. To achieve this you can use KEDA in Kubernetes.

KEDA allows for fine-grained autoscaling (including to/from zero) for event-driven Kubernetes workloads. KEDA serves as a Kubernetes metrics server and lets users define autoscaling rules using a dedicated Kubernetes custom resource definition.
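That custom resource is the ScaledObject. A minimal sketch for scaling a consumer on an Azure ServiceBus queue might look like this; the deployment and queue names are hypothetical, the field names follow the KEDA v1 API (newer KEDA versions use apiVersion keda.sh/v1alpha1 and name: instead of deploymentName:):

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    deploymentName: consumer          # hypothetical consumer deployment
  minReplicaCount: 0                  # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: jobs                 # hypothetical queue name
      connection: SERVICEBUS_CONNECTION   # env var holding the connection string
      queueLength: "5"                # target messages per replica
```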

Sample architecture

See it in action

To see dapr and KEDA in action, I have created a GitHub repository that guides you through setting up the architecture described above.

https://github.com/AndreasM009/dapr-keda-azsbqueue