Deploying Logic App Standard Workflows

By using Logic Apps in Azure, we can automate and orchestrate workflows and integrations with many systems using a low-code/no-code approach. These can be deployed in Consumption (serverless) or Standard mode. The Standard mode Logic App service also lets us use security-minded features such as virtual network integration with attached firewall rules, private endpoints and more. These services work well in an enterprise deployment with high security requirements.

However, the Standard mode uses a provisioned App Service and server farm, and it behaves quite differently when you deploy resources to it.

It took me a lot of time and research to find a stable way to deploy workflows and, especially, workflow connections. I thought it would be good to share my findings with the community.

Since the Standard Logic App service uses a file system to store workflow definitions, we need a combination of Bicep and Azure PowerShell to update the files correctly.

Deploy the Logic App Service

As mentioned, the Standard Logic App service is deployed as a web app running on a provisioned App Service plan.

I would recommend that you deploy the core Logic App service using Bicep. This way, you can change a few parameters and get the service up and running in development, test, production and so forth, using exactly the same configuration.

You should install the Bicep extension for Visual Studio Code, which gives you IntelliSense and linting. Then create a module that will generate the ARM template for the Logic App service. The module below will:

  • create a dedicated storage account
  • deploy a server farm
  • attach Application Insights
// =========== logic-service.bicep ===========
// 'ts' = test, 'pr' = production
@allowed([
  'ts'
  'pr'
])
param environment string
param name string
param logwsid string
param location string = resourceGroup().location

// Set minimum of 2 worker nodes in production
var minimumElasticSize = ((environment == 'pr') ? 2 : 1) 

// =================================

// Storage account for the service
resource storage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'st${name}logic${environment}'  
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_GRS'
  }
  properties: {
    supportsHttpsTrafficOnly: true
    minimumTlsVersion: 'TLS1_2'
  }
}

// Dedicated app plan for the service
resource plan 'Microsoft.Web/serverfarms@2021-02-01' = {
  name: 'plan-${name}-logic-${environment}'  
  location: location
  sku: {
    tier: 'WorkflowStandard'
    name: 'WS1'
  }
  properties: {
    targetWorkerCount: minimumElasticSize
    maximumElasticWorkerCount: 20
    elasticScaleEnabled: true
    isSpot: false
    zoneRedundant: true
  }
}

// Create application insights
resource appi 'Microsoft.Insights/components@2020-02-02' = {
  name: 'appi-${name}-logic-${environment}' 
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    Flow_Type: 'Bluefield'
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
    Request_Source: 'rest'
    RetentionInDays: 30
    WorkspaceResourceId: logwsid
  }
}

// App service containing the workflow runtime
resource site 'Microsoft.Web/sites@2021-02-01' = {
  name: 'logic-${name}-${environment}'  
  location: location
  kind: 'functionapp,workflowapp'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    httpsOnly: true 
    siteConfig: {
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~3'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'node'
        }
        {
          name: 'WEBSITE_NODE_DEFAULT_VERSION'
          value: '~12'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storage.name};AccountKey=${listKeys(storage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storage.name};AccountKey=${listKeys(storage.id, '2019-06-01').keys[0].value};EndpointSuffix=core.windows.net'
        }
        {
          name: 'WEBSITE_CONTENTSHARE'
          value: 'app-${toLower(name)}-logicservice-${toLower(environment)}a6e9'
        }
        {
          name: 'AzureFunctionsJobHost__extensionBundle__id'
          value: 'Microsoft.Azure.Functions.ExtensionBundle.Workflows'
        }
        {
          name: 'AzureFunctionsJobHost__extensionBundle__version'
          value: '[1.*, 2.0.0)'
        }
        {
          name: 'APP_KIND'
          value: 'workflowApp'
        }
        {
          name: 'APPINSIGHTS_INSTRUMENTATIONKEY'
          value: appi.properties.InstrumentationKey
        }
        {
          name: 'ApplicationInsightsAgent_EXTENSION_VERSION'
          value: '~2'
        }
        {
          name: 'APPLICATIONINSIGHTS_CONNECTION_STRING'
          value: appi.properties.ConnectionString
        }
      ]
      use32BitWorkerProcess: true
    }
    serverFarmId: plan.id
    clientAffinityEnabled: false
  }
}

// Return the Logic App service name and farm name
output app string = site.name
output plan string = plan.name
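
A useful addition here: the connection access policy we create later in this post needs the Logic App's system-assigned identity, so it is convenient to also expose the principal ID from this module. A suggested extra output, along these lines:

// Suggested addition: expose the system-assigned identity so it can be
// passed to connection access policies later
output principal string = site.identity.principalId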

You would need to call this from a main Bicep script that uses the module above, and also create a Log Analytics workspace and pass its identifier.

Create the Log Analytics workspace:

// =========== ws.bicep ===========
param environment string
param name string
param location string = resourceGroup().location

// =================================

// Create log analytics workspace
resource logws 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: 'log-${name}-${environment}'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // Standard
    }
  }
}

// Return the workspace identifier
output id string = logws.id


Then, put it all together. I use “name” here for the generic display name of the system. You can change or remove it.

// =========== main.bicep ===========

// Setting target scope
targetScope = 'subscription'

@minLength(1)
param location string = 'westeurope'

@maxLength(10)
@minLength(2)
param name string = 'integrate'

// Must match the environments allowed by the logic-service module
// ('ts' = test, 'pr' = production)
@allowed([
  'ts'
  'pr'
])
param environment string

// =================================

// Create logging resource group
resource logRg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'rg-${name}-log-${environment}'
  location: location
}

// Create Log Analytics workspace
module logws './Logging/ws.bicep' = {
  name: 'LogWorkspaceDeployment'
  scope: logRg
  params: {
    environment: environment
    name: name
    location: location
  } 
}

// Create orchestration resource group
resource orchRg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'rg-${name}-orchestration-${environment}'
  location: location
}

// Deploy the logic app service container
module logic './Logic/logic-service.bicep' = {
  name: 'LogicAppServiceDeployment'
  scope: orchRg // Deploy to our new or existing RG
  params: { // Pass on shared parameters
    environment: environment
    name: name
    logwsid: logws.outputs.id
    location: location
  }
}

output logic_app string = logic.outputs.app
output logic_plan string = logic.outputs.plan 
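
With main.bicep targeting the subscription scope, deploying it from PowerShell looks something like this (a sketch; the deployment name and parameter values here are examples):

# Deploy main.bicep at subscription scope (requires the Az module and the Bicep CLI)
New-AzDeployment `
    -Name "IntegrationDeployment" `
    -Location "westeurope" `
    -TemplateFile "./main.bicep" `
    -TemplateParameterObject @{ environment = "ts" }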

Creating Workflows

A workflow can easily be created directly in the Azure portal using the designer. But what if we need to use source control, or deploy a templated workflow using CI/CD pipelines? There is always the option of deploying the templates through the Visual Studio Code extension, but again, that is not great within an automated pipeline.

Instead, we now have a storage account with a file share where the workflows are stored. You simply create a folder for the workflow and upload its workflow.json file to activate it.
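
For reference, a minimal stateful workflow.json looks something like this (a hypothetical HTTP request/response example; a workflow built in the designer produces the same structure):

{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {}
      }
    },
    "actions": {
      "Response": {
        "type": "Response",
        "kind": "Http",
        "runAfter": {},
        "inputs": {
          "statusCode": 200
        }
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}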

I use a PowerShell script to do this. I store my workflows inside a folder at ./Workflows and prefix each workflow folder with “wf-” so that the script finds them automatically.

<#
.SYNOPSIS
    Deploys Azure Logic App workflows.
.DESCRIPTION
    Deploys the workflows by uploading ARM template files to the File Share.
.PARAMETER ResourceGroup
    The name of the resource group where the Storage account is located.
.PARAMETER StorageAccount
    The name of the Storage account where the File Share is located.
.INPUTS
    None.
.OUTPUTS
    None.
.EXAMPLE
    New-WorkflowDeployment -ResourceGroup "rg-orchestration-ts" -StorageAccount "stmyaccountnamelogicts"
#>
function New-WorkflowDeployment {
    Param(
    [Parameter(Mandatory = $true)]
    $ResourceGroup,
    [Parameter(Mandatory = $true)]
    $StorageAccount,
    [Parameter(Mandatory = $false)]
    [Switch]$Production
    )

    $ErrorActionPreference = "Stop"
    $WarningPreference = "Continue"

    # Set path of workflow files  
    $localDir = (Get-Location).Path

    # Get folders/workflows to upload 
    $directoryPath = "/site/wwwroot/" 
    $folders = Get-ChildItem -Path $localDir -Directory -Recurse | Where-Object { $_.Name.StartsWith("wf-") }

    if ($null -eq $folders) {
        Write-Host "No workflows found" -ForegroundColor Yellow
        return
    }

    # Get the storage account context
    $ctx = (Get-AzStorageAccount -ResourceGroupName $ResourceGroup -Name $StorageAccount).Context  

    # Get the file share
    $fs = (Get-AzStorageShare -Context $ctx).Name

    # Get current IP
    $ip = (Invoke-WebRequest -Uri "https://ifconfig.me/ip").Content

    try {
        # Open firewall
        Add-AzStorageAccountNetworkRule -ResourceGroupName $ResourceGroup -Name $StorageAccount -IPAddressOrRange $ip | Out-Null

        # Upload folders to file share
        foreach($folder in $folders)
        {
            Write-Host "Uploading workflow " -NoNewLine
            Write-Host $folder.Name -ForegroundColor Yellow -NoNewLine
            Write-Host "..." -NoNewLine
            $path = $directoryPath + $folder.Name

            Get-AzStorageShare -Context $ctx -Name $fs | New-AzStorageDirectory -Path $path -ErrorAction SilentlyContinue | Out-Null
            Start-Sleep -Seconds 1

            # Upload files to file share
            $files = Get-ChildItem -Path $folder.FullName -Recurse -File
            foreach($file in $files)
            {
                $filePath = $path + "/" + $file.Name
                $fSrc = $file.FullName  
                try {
                    # Upload file
                    Set-AzStorageFileContent -Context $ctx -ShareName $fs -Source $fSrc -Path $filePath -Force -ea Stop | Out-Null
                } catch {
                    # Happens if file is locked, wait and try again
                    Start-Sleep -Seconds 5
                    Set-AzStorageFileContent -Context $ctx -ShareName $fs -Source $fSrc -Path $filePath -Force -ea Stop | Out-Null
                }
            }

            Write-Host 'Done' -ForegroundColor Green
        }
    } finally {
        # Remove the firewall rule
        Remove-AzStorageAccountNetworkRule -ResourceGroupName $ResourceGroup -Name $StorageAccount -IPAddressOrRange $ip | Out-Null 
    }
}
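
In a pipeline, you would dot-source the script and run the function after the Bicep deployment. Using the naming conventions from the templates above (the script path and the "integrate"/"ts" values are examples):

# Dot-source the function and upload all wf-* folders found below the current directory
. ./Scripts/New-WorkflowDeployment.ps1
New-WorkflowDeployment -ResourceGroup "rg-integrate-orchestration-ts" -StorageAccount "stintegratelogicts"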

The workflow is now uploaded to the file share and is visible and working within our Logic App service.

Creating Connections

We use connections to access services from inside a Logic App workflow, for example to write to a storage queue or post messages to an Event Grid topic.

The type of connection is controlled by the api property and can be, for example:

  • azureblob
  • azurequeues
  • keyvault

These have several parameters that need to be filled in, but the documentation is severely limited, and if you don’t get them exactly right, you get a generic, unhelpful error.

If you need information on ARM properties that are not documented, use ARMClient. After installing it, run a get against the endpoint you are looking for. In my case, I was looking for the queues schema:

./ARMClient.exe get https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Web/locations/westeurope/managedApis/azurequeues?api-version=2016-06-01

Replace the empty “0000” GUID with your own subscription identifier. The result is (abbreviated):

{
  "properties": {
    "name": "azurequeues",
    "connectionParameters": {
      "storageaccount": {
        "type": "string",
        "uiDefinition": {
          "displayName": "Storage Account Name",
          "description": "The name of your storage account",
          "tooltip": "Provide the name of the storage account used for queues as it appears in the Azure portal",
          "constraints": {
            "required": "true"
          }
        }
      },
      "sharedkey": {
        "type": "securestring",
        "uiDefinition": {
          "displayName": "Shared Storage Key",
          "description": "The shared storage key of your storage account",
          "tooltip": "Provide a shared storage key for the storage account used for queues as it appears in the Azure portal",
          "constraints": {
            "required": "true"
          }
        }
      }
    },
    ...
  }
}
				
			

This gives us the key parameter names, “sharedkey” and “storageaccount”, which lets us complete the Bicep script and add the required values under parameterValues.

param name string
param storage string
param location string = resourceGroup().location
param principalId string
param logicApp string

// Get parent storage account
resource storage_account 'Microsoft.Storage/storageAccounts@2021-06-01' existing = {
  name: storage
}

//  Create connection
param connection_name string = 'con-storage-queue-${name}'
resource connection 'Microsoft.Web/connections@2016-06-01' = {
  name: connection_name
  location: location
  kind: 'V2' // Needed to get connectionRuntimeUrl later on
  properties: {
    displayName: connection_name
    api: {
      displayName: 'Azure Queues connection for "${name}"'
      description: 'Azure Queue storage provides cloud messaging between application components. Queue storage also supports managing asynchronous tasks and building process work flows.'
      id: subscriptionResourceId('Microsoft.Web/locations/managedApis', location, 'azurequeues')
      type: 'Microsoft.Web/locations/managedApis'
    }
    parameterValues: {
      'storageaccount': storage_account.name
      'sharedkey': listKeys(storage_account.id, storage_account.apiVersion).keys[0].value
    }
  }
}

// Create access policy for the connection 
// Type not in Bicep yet but works fine  
resource policy 'Microsoft.Web/connections/accessPolicies@2016-06-01' = {
  name: '${connection_name}/${logicApp}'
  location: location
  properties: {
    principal: {
      type: 'ActiveDirectory'
      identity: {
        tenantId: subscription().tenantId
        objectId: principalId
      }
    }
  }
  dependsOn: [
    connection
  ]
}

// Return the connection runtime URL, this needs to be set in the connection JSON file later
output connectionRuntimeUrl string = reference(connection.id, connection.apiVersion, 'full').properties.connectionRuntimeUrl
output api string = subscriptionResourceId('Microsoft.Web/locations/managedApis', location, 'azurequeues')
output id string = connection.id
output name string = connection.name

I also add an access policy above so that the Logic App's managed identity has read access to the connection. Without this, you will get a warning that policies are missing when you use the connection.
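
To tie this into main.bicep, reference the connection file as a module and pass in the values from the Logic App deployment. A sketch, assuming the file is saved as ./Logic/connection-queues.bicep and that logic-service.bicep exposes the principal output suggested earlier:

// Hypothetical module reference from main.bicep
module queueConnection './Logic/connection-queues.bicep' = {
  name: 'QueueConnectionDeployment'
  scope: orchRg
  params: {
    name: name
    storage: 'st${name}logic${environment}' // the account created by logic-service.bicep
    location: location
    principalId: logic.outputs.principal // the site's system-assigned identity
    logicApp: logic.outputs.app
  }
}

// Surface the values needed for connections.json later
output connection_id string = queueConnection.outputs.id
output connection_runtime_url string = queueConnection.outputs.connectionRuntimeUrl
output connection_api string = queueConnection.outputs.api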

The connection must be deployed to the storage file system in the Logic App service, within a file called “connections.json”. I have created a PowerShell script to automate this.
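
For reference, the entry that the script below creates ends up looking like this (abbreviated, with hypothetical identifiers):

{
    "managedApiConnections": {
        "con-storage-queue-sample": {
            "api": {
                "id": "/subscriptions/.../providers/Microsoft.Web/locations/westeurope/managedApis/azurequeues"
            },
            "authentication": {
                "type": "ManagedServiceIdentity"
            },
            "connection": {
                "id": "/subscriptions/.../resourceGroups/rg-sample/providers/Microsoft.Web/connections/con-storage-queue-sample"
            },
            "connectionRuntimeUrl": "https://<the-connectionRuntimeUrl-output-from-Bicep>"
        }
    }
}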

<#
.SYNOPSIS
    Deploys Azure Logic App workflow connection.
.DESCRIPTION
    Deploys the workflow connection by adding the reference into the connections.json file
    that is stored in the associated fileshare.
.PARAMETER ResourceGroup
    The name of the resource group where the Storage account is located.
.PARAMETER StorageAccount
    The name of the Storage account where the File Share is located.
.PARAMETER Id
    The full resource ID of the connection.
.PARAMETER RuntimeUrl
    The full runtime URL of the connection.
.PARAMETER Api
    The managed API reference of the connection.
.INPUTS
    None.
.OUTPUTS
    None.
.EXAMPLE
    New-WorkflowConnection `
        -ResourceGroup "rg-orchestration-ts" `
        -StorageAccount "stsampleworkflowsts" `
        -Id "/subscriptions/12952a70-6abe-4cf3-880a-81ce65fdc63f/resourceGroups/rg-orchestration-ts/providers/Microsoft.Web/connections/con-storage-deadletter" `
        -RuntimeUrl "/subscriptions/12952a70-6abe-4cf3-880a-81ce65fdc63f/resourceGroups/rg-orchestration-ts/providers/Microsoft.Web/connections/con-storage-deadletter" `
        -Api "/subscriptions/12952a70-6abe-4cf3-880a-81ce65fdc63f/providers/Microsoft.Web/locations/westeurope/managedApis/azureblob"
#>
function New-WorkflowConnection {
    Param(
    [Parameter(Mandatory = $true)]
    $ResourceGroup,
    [Parameter(Mandatory = $true)]
    $StorageAccount,
    [Parameter(Mandatory = $true)]
    $Api,
    [Parameter(Mandatory = $true)]
    $Id,
    [Parameter(Mandatory = $true)]
    $RuntimeUrl
    )

    $ErrorActionPreference = "Stop"
    $WarningPreference = "Continue"

    $names = $Id.Split('/')
    $name = $names[$names.Length - 1]

    # Get current IP
    $ip = (Invoke-WebRequest -Uri "https://ifconfig.me/ip").Content

    try {
        Write-Host "Deploying workflow connection '" -NoNewLine
        Write-Host $name -NoNewline -ForegroundColor Yellow
        Write-Host "'..." -NoNewline

        # Open firewall
        Add-AzStorageAccountNetworkRule -ResourceGroupName $ResourceGroup -Name $StorageAccount -IPAddressOrRange $ip | Out-Null

        # Connect the Azure context and set the subscription.
        # (New-RpicTenantConnection is an internal helper from our own module;
        # replace it with Connect-AzAccount / Set-AzContext in your environment.)
        New-RpicTenantConnection

        # Static values 
        $directoryPath = "/site/wwwroot/" 

        # Get the storage account context
        $ctx = (Get-AzStorageAccount -ResourceGroupName $ResourceGroup -Name $StorageAccount).Context
       
        # Get the file share
        $fsName = (Get-AzStorageShare -Context $ctx).Name

        # Download connection file
        $configPath = $directoryPath + "connections.json"
        try {
            Get-AzStorageFileContent -Context $ctx -ShareName $fsName -Path $configPath -Force        
            Start-Sleep -Seconds 5
        } catch {
            # No such file, create it
            $newContent = @"
{
    "managedApiConnections": {
    }
}
"@
           Set-Content -Path "./connections.json" -Value $newContent
        }
        
        $config = Get-Content -Path "./connections.json" | ConvertFrom-Json
        $section = $config.managedApiConnections.$name
        if ($null -eq $section) {
            # Section missing, add it
            $value = @"
    {
        "api": {
            "id": "$Api"
        },
        "authentication": {
            "type": "ManagedServiceIdentity"
        },
        "connection": {
            "id": "$Id" 
        },
        "connectionRuntimeUrl": "$RuntimeUrl"
    }
"@
            $config.managedApiConnections | Add-Member -Name $name -Value (ConvertFrom-Json $value) -MemberType NoteProperty
        
        } else {
            # Update section just in case
            $section.api.id = $Api
            $section.connection.id = $Id
            $section.connectionRuntimeUrl = $RuntimeUrl
        }

        # Save and upload file
        $config | ConvertTo-Json -Depth 100 | Out-File ./connections.json
        Set-AzStorageFileContent -Context $ctx -ShareName $fsName -Source ./connections.json  -Path $configPath  -Force
        Remove-Item ./connections.json -Force   
    } finally {
       # Remove the firewall rule
       Remove-AzStorageAccountNetworkRule -ResourceGroupName $ResourceGroup -Name $StorageAccount -IPAddressOrRange $ip | Out-Null 
    }
    Write-Host "Done!" -ForegroundColor Green
}

The final part is to join the outputs from the connections Bicep above and use those with the New-WorkflowConnection command. To do that, I save the outputs of the deployment to a hash table.

$result = New-AzDeployment -Location $region -TemplateFile $template -TemplateParameterFile $parameters

$outputs = @{}
$result.Outputs.Keys | ForEach-Object {
    $outputs[$_] = $result.Outputs[$_].Value
}
Write-Host "Outputs:"
$outputs

# Use outputs later on, for example:
$app = $outputs["logic_app"]
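
The connection values can then go straight into New-WorkflowConnection. A sketch, assuming main.bicep surfaces the connection module outputs shown earlier and uses the example names from above:

# Hypothetical wiring: feed the Bicep outputs into the connections.json script
New-WorkflowConnection `
    -ResourceGroup "rg-integrate-orchestration-ts" `
    -StorageAccount "stintegratelogicts" `
    -Id $outputs["connection_id"] `
    -RuntimeUrl $outputs["connection_runtime_url"] `
    -Api $outputs["connection_api"]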

All in all, this took a lot of detective work, so I hope it helps someone else, and that we get better official sample scripts in the future.

Comments
  1. I only understand a little bit of this. A video demonstration would be helpful for implementing it.

  2. Thank you very much for this extremely helpful post. In your workflow PowerShell script I had to change the following: $files = Get-ChildItem -Path $folder.FullName -Recurse -File (from $folder to $folder.FullName).
    Other than that, this script is perfect! I am running it in conjunction with Terraform, using Azure DevOps to push the workflows into the cloud.
    Thank you

  3. I could follow along until the resources are deployed, but I am a little lost about how the file share structure would look after the upload. If you could provide a sample ARM template for a workflow, that would be great. Thanks.

    1. The file structure I use locally is a folder named ‘Workflows’ and a subfolder for each workflow. There I have the workflow.json file, exactly as it would be produced in the Azure portal when you create it manually.

      – Workflows
        – ExampleWorkflow
          workflow.json

      If you create a workflow using the designer in the Azure portal, you can copy the contents from ‘code view’ into the JSON file and deploy it using the PowerShell script above.

      1. Thank you so much for the quick response, appreciate it. I figured out the same today by following the manual route.

      2. I got it working today, thanks again. I could not have done it without this article.

          1. I ran into an issue today. I have reader access to the RG where I have to deploy and can’t actually see the designer, so I used a simple workflow.json whose trigger is “when a blob is created” (storage account with hierarchical namespace enabled) and an HTTP action that hits a GET endpoint. My connections.json has only a serviceProviderConnections section and no managedApiConnections section, so I did not create a connection resource or connection access policy resource. Now the flow is not working:

            {
              "serviceProviderConnections": {
                "AzureBlob": {
                  "parameterValues": {
                    "connectionString": "constring"
                  },
                  "serviceProvider": {
                    "id": "/serviceProviders/AzureBlob"
                  },
                  "displayName": "test blob"
                }
              },
              "managedApiConnections": {}
            }

            Any idea why that would be? Or is there any way to check whether the workflow deployed fine or not?

          2. Did the connections.json file upload correctly to the file share? If it did and it is still not working, then I would suggest creating a managed connection for the blob storage, as in the example in the post.

  4. Hey Tobias
    Thank you very, very much for sharing this.
    I’m currently working on creating a Standard SKU Logic App in a Bicep file and found this post really useful; it saved me a lot of time.
    I wish it were an official tutorial on Microsoft’s website… haha, I can only find Consumption SKU examples there.

    Thanks again for sharing 🙂
    All the best!

  5. Hello Tobias,
    Great article, and very helpful for understanding the workflows. I am doing the same in my current environment with the Standard workflows, but I am unable to set up version control (integrate with Git) on the Standard workflows and develop a complete CI/CD pipeline in Azure DevOps to roll back in case of a deployment failure.
    It would be very helpful if you could share any thoughts on how I can integrate with Git.
    Thanks in advance!

    1. Hi,

      You can use the Azure CLI GitHub Action template to achieve this. You can view the documentation here: https://github.com/Azure/cli

      You can log on using the CLI action and then use PowerShell to deploy the file. Use GitHub Actions secrets or parameters to control the target storage account and so on.

      If you need to add error handling, then first download the existing files as a backup; you can then have a compensating action that runs in case of a failure.

      1. Thank you for your response, but I am looking for Azure DevOps integration. In addition, I want to keep a record of the existing workflow before I make any changes and deploy it to the Logic App. I apologize for the additional comments; they were added accidentally, not realizing my comment had already been posted.

  6. Hi Tobias,
    As many have stated already, thank you for this very great and helpful post!
    We use Logic App Standard in an ASEv3 environment and I cannot figure out how to deploy the workflows. Your PowerShell script uses Get-AzStorageAccount to get the storage account to deploy the workflows to, but how do I go about it with the ASEv3? It is not explicitly linked to a storage account (i.e. it uses its own internal storage). Do you have any insights or suggestions on how to proceed? Maybe some modifications to apply to your current PowerShell script?
    Thanks!

  7. Hi Tobias,
    Thank you also for this great article!
    Could you maybe give me some pointers on how to approach the above when the Logic Apps run in an ASEv3? The ASE is completely private and has no ‘public’ storage account, so I assume the PowerShell Get-AzStorageAccount is not applicable, as there is no (public) storage account to get the context from.
    We are able to deploy the Logic Apps via Azure DevOps pipelines and Bicep/Azure CLI, but deployment of the workflows is giving us headaches.
    Thanks!

    1. Hi Michel,
      Sorry for the late reply. I have not worked with ASE, and we are moving over to Logic App workflows within Azure Container Apps, as they simply scale better, still provide access to virtual networks and, on top of that, can utilize everything else on the container platform.

      Here is some info: https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-run-logic-apps-in-a-docker/ba-p/3545220

      Sorry if this doesn’t solve your problem; I just wanted to share our latest strategy.
