DevOps notes

Dotnet templates

Testing locally

  1. Clone this repository
  2. Go into this directory
  3. Execute the following command: dotnet new -i .\templates\bkr.microservice
  4. This installs the template for use with dotnet new
  5. Create a directory for your microservice and go into it
  6. Type dotnet new bkr.microservice --name Cheese.Cake
  7. You have scaffolded a new service (the full command sequence is recapped below).
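Putting the steps together (the repository URL and directory are illustrative; on newer SDKs, dotnet new install replaces dotnet new -i):

git clone <repository-url>
cd <repository>
dotnet new -i .\templates\bkr.microservice
mkdir Cheese.Cake
cd Cheese.Cake
dotnet new bkr.microservice --name Cheese.Cake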

Distributing with package

  1. Execute dotnet pack from the templates directory
  2. This creates a package with the templates at .\bin\Debug\Bkr.Template.Service.1.0.0.nupkg
  3. Run dotnet new -i .\bin\Debug\Bkr.Template.Service.1.0.0.nupkg
  4. This installs the template for use with dotnet new
  5. Create a directory for your microservice and go into it
  6. Type dotnet new bkr.microservice --name Cheese.Cake
  7. You have scaffolded a new service.

Adding new project to solution

If you are creating a new project for the template, please add it to both solutions, i.e.:

Template options

As in the previous examples, the template has some options that can be overridden:

How is name used

IMPORTANT!!!

The name must conform to the domain split diagram, which can be found here: diagram.

If you are creating a new service, please add it to the aforementioned diagram so everybody can see what our architecture looks like.

How templating works

Symbols that can be used as templating variables are defined in the template.json file. These symbols can either be parameters for the template, like name, or generated values, like httpsPort. name is a special parameter that is included by default; its default value is placed in the sourceName property of the root object.
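For orientation, a minimal template.json along these lines (the property names follow the standard dotnet templating schema; the values are illustrative):

{
  "sourceName": "Vertical.Domain",
  "symbols": {
    "httpsPort": {
      "type": "generated",
      "generator": "port"
    }
  }
}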

Used templating variables

Http port

This is a generated parameter that randomizes the port number for a given service between 3000 and 6000.

"httpsPort": {
    "type": "generated",
    "generator": "port",
    "parameters": {
        "low": 3000,
        "high": 6000
    },
    "replaces": "5302"
}

Name

The JSON below shows how name is transformed for different purposes in the codebase.

"domainLower":{
    "type": "generated",
    "generator": "casing",
    "parameters": {
        "source": "name",
        "toLower": true
    }
},
"domainAlphanumeric": {
    "type": "generated",
    "generator": "regex",
    "dataType": "string",
    "replaces": "VerticalDomain",
    "fileRename": "VerticalDomain",
    "parameters": {
    "source": "name",
    "steps": [
        {
            "regex": "[^a-zA-Z\\d]",
            "replacement": ""
        }
    ]
    }
},
"domainLowerHyphens":{
    "type": "generated",
    "generator": "regex",
    "dataType": "string",
    "replaces": "vertical-domain",
    "parameters": {
    "source": "domainLower",
    "steps": [
        {
            "regex": "[^a-zA-Z\\d]",
            "replacement": "-"
        }
    ]
    }    
},
"domainLowerPath": {
    "type": "generated",
    "generator": "regex",
    "dataType": "string",
    "replaces": "vertical/domain",
    "parameters": {
    "source": "domainLower",
    "steps": [
        {
        "regex": "[^a-zA-Z\\d]",
        "replacement": "/"
        }
    ]
    }
}
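As a worked example (computed by hand from the symbols above, not actual template output), running the template with --name Cheese.Cake would yield:

domainLower        = cheese.cake   (casing, toLower)
domainAlphanumeric = CheeseCake    (replaces VerticalDomain, including in file names)
domainLowerHyphens = cheese-cake   (replaces vertical-domain)
domainLowerPath    = cheese/cake   (replaces vertical/domain)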

SourceName

As already mentioned, this is a global symbol not defined in the symbols list. In the template its default value is Vertical.Domain, and it is used to replace file names and code fragments such as parts of namespaces, csproj and solution files, etc.
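For example, dotnet new bkr.microservice --name Cheese.Cake would rename a file such as Vertical.Domain.Api.csproj to Cheese.Cake.Api.csproj and rewrite namespace Vertical.Domain to namespace Cheese.Cake (the file and namespace names here are illustrative).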

_______________________________________________________________________________________________

Database migration project

Context

Database migrations are contained in a separate project and are executed upon deployment. This guide, based on trial and error, describes how to set up a migration project.

1. Create a console application project:

The name of the project should be explicit enough, for example DatabaseMigrationTool. The project may be located in the root of the solution or in a specific folder.

The database migration project must be a console-based application.
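For example, assuming the name DatabaseMigrationTool:

dotnet new console --name DatabaseMigrationTool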

Add the following NuGet packages (e.g. via dotnet add package, as shown after the list):

  1. Ev.DbUpdate.KeyVault
  2. Microsoft.Extensions.Hosting
  3. Microsoft.Extensions.Hosting.Abstractions
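For example, from the project directory (assuming the packages are available on your configured feed):

dotnet add package Ev.DbUpdate.KeyVault
dotnet add package Microsoft.Extensions.Hosting
dotnet add package Microsoft.Extensions.Hosting.Abstractions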

In Program.cs add the following lines, replacing the placeholders with what corresponds:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
// Imports from the Ev.DbUpdate.KeyVault package are also required.

public class Program
{
    private static string _fullNamespace = "<DbMigrationNameProject>"; // placeholder: the migration project's root namespace
    private static string? _connectionString = string.Empty;

    static int Main(string[] args)
    {
        int returnCode;
        Console.WriteLine("<ServiceNameProject> database migration host starting !!!");

        using (var host = CreateHost(args))
        {
            returnCode = host.StartWithResult();
        }

        Console.WriteLine("<ServiceNameProject> database migration host stopped !!!");

        return returnCode;
    }

    private static IHost CreateHost(string[] args)
    {
        var namespacePrefix = typeof(Program).Namespace;
        var host = Host
            .CreateDefaultBuilder(args)
            .UseConsoleLifetime();

        host = host.UseDbUpdateWithKeyVaultConfigurationSourceAndArgs(
            args,
            ConfigureOptions()
        )
        .ConfigureServices((hostContext, _) =>
        {
            Extensions.TransformAzureSqlConnectionStringForClientSecretAuth(hostContext.Configuration);
            _connectionString = hostContext.Configuration.GetConnectionString("SqlServer");
        });

        return host.Build();
    }

    private static Action<DbUpdateConfigurationBuilder> ConfigureOptions()
    {
        return options => options
                            .WithUpgradeScriptsInNamespace($"{_fullNamespace}.UpScripts")
                            .WithDowngradeScriptsInNamespace($"{_fullNamespace}.DownScripts")
                            .WithPreScripts($"{_fullNamespace}.PreScripts")
                            .WithPostScripts($"{_fullNamespace}.PostScripts")
                            .WithConnectionString(_connectionString);
    }

}

Then, add a file called Extensions.cs at the root of the project with the following code:

using System.Linq;
using System.Text;
using Microsoft.Extensions.Configuration;

internal static class Extensions
{
    private static char _connectionStringPartsSeparator = ';';

    private static string AsConnectionStringWithKeysRemoved(string connectionString, string[] connStringKeysToRemove)
    {
        var connStringParts = connectionString.Split(_connectionStringPartsSeparator);
        var builder = new StringBuilder();

        foreach (var part in connStringParts.Where(_ => !string.IsNullOrEmpty(_)))
        {
            var partKeyValue = part.Split('=');
            var key = partKeyValue[0];

            if (connStringKeysToRemove.Any(_ => _.Equals(key)))
            {
                continue;
            }

            builder.Append(part);
            if (part[^1] != _connectionStringPartsSeparator)
            {
                builder.Append(_connectionStringPartsSeparator);
            }
        }

        return builder.ToString();
    }

    // Removes the Uid and Authentication parts from the Azure SQL connection
    // string so that client secret authentication can be used instead.
    public static void TransformAzureSqlConnectionStringForClientSecretAuth(this IConfiguration configuration)
    {
        var azureSqlConnectionString = configuration["ConnectionStrings:SqlServer"]!;
        configuration["ConnectionStrings:SqlServer"] =
            AsConnectionStringWithKeysRemoved(azureSqlConnectionString, new[] { "Uid", "Authentication" });
    }
}

The appsettings.json should look like this:

{
  "ConnectionStrings": {
    "SqlServer": ""
  },
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "DbUpdate": {
    "Timeout": "00:01:00.000",
    "ConnectionStringName": "SqlServer"
  },
  "KeyVault": {
    "Enabled": true,
    "Url": "https://#{keyVaultName}#.vault.azure.net/"
  },
  "DatabaseAadAccess": {
    "Enabled": true
  },
  "AzureIdentity": {
    "Enabled": true,
    "ClientId": "#{clientId}#",
    "ClientSecret": "#{clientSecret}#",
    "TenantId": "#{azAccountTenantId}#"
  }
}

There's no need to add the connection string for the database; it will be taken from KeyVault automatically. There's also no need to add appsettings.json files for different environments.

2. Create the Scripts folders

Create two folders at the root of the project, one called DownScripts and the other called UpScripts. These two folders will contain the scripts that update the database and the scripts that revert those updates. Example of an up script named InitialSetup: {YYYYMMDD}_{HHmmss}_InitialSetup.sql

SET XACT_ABORT ON
SET NOCOUNT ON

BEGIN TRAN
BEGIN TRY

    PRINT 'Bootstrapping <ServiceName> schema...';

    IF (SCHEMA_ID('<SchemaName>') IS NULL)
    BEGIN
        EXEC ('CREATE SCHEMA [<SchemaName>] AUTHORIZATION [dbo]')
        PRINT 'Schema [<SchemaName>] - created.';
    END
    ELSE
        PRINT 'Schema [<SchemaName>] - already in place.';

    COMMIT
END TRY
BEGIN CATCH

    ROLLBACK;
    THROW;
END CATCH;

Example of a down script named InitialSetup: {YYYYMMDD}_{HHmmss}_InitialSetup.sql. In order to be able to execute the DowngradeToMigration action, the up and down scripts must have the same name.

SET XACT_ABORT ON
SET NOCOUNT ON

BEGIN TRAN
BEGIN TRY

    IF (SCHEMA_ID('<SchemaName>') IS NOT NULL)
    BEGIN
        EXEC ('DROP SCHEMA [<SchemaName>]')
        PRINT 'Schema [<SchemaName>] - dropped.';
    END

    COMMIT
END TRY
BEGIN CATCH
    ROLLBACK;
    THROW;
END CATCH;

Each down script should undo the corresponding up script.
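Since up and down scripts are matched by name, a paired layout looks like this (timestamps illustrative):

UpScripts\20230101_093000_InitialSetup.sql
DownScripts\20230101_093000_InitialSetup.sql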

Lastly, make sure the following lines are in the .csproj:

<ItemGroup>
  <None Remove="appsettings.json" />
  <None Remove="DownScripts\*" />
  <None Remove="UpScripts\*" />
</ItemGroup>

<ItemGroup>
  <Content Include="appsettings.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    <ExcludeFromSingleFile>true</ExcludeFromSingleFile>
    <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
  </Content>
</ItemGroup>
<ItemGroup>
  <EmbeddedResource Include="DownScripts\*">
    <CopyToOutputDirectory>Never</CopyToOutputDirectory>
  </EmbeddedResource>
  <EmbeddedResource Include="UpScripts\*">
    <CopyToOutputDirectory>Never</CopyToOutputDirectory>
  </EmbeddedResource>
</ItemGroup>

This ensures that no matter how many scripts you add, the .csproj file stays clean: no lines need to be added when you add new scripts.

That’s all for the migration project.

3. Changes to the service project

If the database hasn't been created yet, make sure to uncomment the corresponding section in the infrastructure.bicep file.


At the root of the project, there’s a package.yml and a pullrequest.yml.

package.yml

Set the following variable in the package.yml variables section:

- name: databaseMigrationCsprojName
  value: '<DatabaseMigrationProjectName>'

In the jobs section below, set the parameter publishDatabaseMigration: 'true', as sketched below.
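The exact shape depends on the pipeline templates in use; as a sketch, the parameter is passed where the jobs are declared (template reference elided):

jobs:
  - template: ...
    parameters:
      publishDatabaseMigration: 'true'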

pullrequest.yml

Set the following variables in the pullrequest.yml variables section:

- name: databaseProjectLocation
  value: <DatabaseMigrationProjectName>
- name: databaseMigrationCsprojName
  value: <DatabaseMigrationProjectName>

In the jobs section below, set the parameter runDatabaseMigration: 'true'.

Variable groups

The last change is to add the following variables to the Library in Azure DevOps. You may not have permissions to do this yourself, but a DevOps ambassador can do it for you. Both the Prd and Lab variable groups should be modified. Pattern for variable group names:

Variables to set/create:

________________________________________________________________________________________________

Service Template Guide

This document describes how to use the service template.

NOTE: Please visit the Service Documentation for a detailed description of this service and its Runbook.

Health

The template service exposes two health endpoints that are needed for the deployment to work properly in Kubernetes:

The readiness probe checks all the dependencies that the service requires to deliver its business value. By default two dependencies are configured: the SQL connection and the IdP connection.
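A minimal sketch of what such wiring can look like in ASP.NET Core (the endpoint paths and the inline checks are assumptions, not the template's actual code):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks()
    // Hypothetical inline checks standing in for the template's SQL and IdP checks.
    .AddCheck("sql", () => HealthCheckResult.Healthy(), tags: new[] { "ready" })
    .AddCheck("idp", () => HealthCheckResult.Healthy(), tags: new[] { "ready" });

var app = builder.Build();

// Liveness: the process is up; no dependency checks.
app.MapHealthChecks("/health/live", new HealthCheckOptions { Predicate = _ => false });

// Readiness: only the checks tagged "ready" (SQL, IdP).
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});

app.Run();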

Service Configuration

By default, KeyVault is used to store all secrets and configuration that differ between environments. Applications connect to KeyVault via a Service Principal (in the future this could be a Managed Identity). There is a separate KeyVault per application per environment.

KeyVault secrets can be created in multiple ways:

By convention, appsettings.json should not contain any tokens (#{token}#).

Configuration providers

Configuration is loaded into the service from a couple of different places:

  1. appsettings.json
  2. environment variables
  3. command line
  4. Azure Key Vault - disabled for local development
  5. appsettings.Development.json - for local development only

The order of the list above matters: if the same key is defined in multiple providers, the value declared by a provider later in the list takes precedence.
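A sketch of how such an order can be composed (this illustrates the precedence rule, not the template's actual bootstrap code):

using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false)
    .AddEnvironmentVariables()
    .AddCommandLine(args)
    // .AddAzureKeyVault(...)  // disabled for local development
    .AddJsonFile("appsettings.Development.json", optional: true) // local development only
    .Build();

// Later providers win: a key in appsettings.Development.json overrides the same
// key from appsettings.json, environment variables, or the command line.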

Configuration via AzureKeyVault

Is configured by default - yes

Is enabled by default - yes

Azure Key Vault is used to securely store keys, passwords and other sensitive data, and serves as one of the configuration providers for the service. All keys defined in it are loaded into the IConfiguration object and can be used to configure the service.

Important: Access to AzureKeyVault is NOT configured with tokens in the appsettings.json file. The Service Principal configuration is provided by environment variables.

Configuration:

"KeyVault": {
  "Enabled": true,
  "Url": "",
  "TennantId": "",
  "ClientId": "",
  "ClientSecret": ""
}

All keys except Enabled are required; if KeyVault integration is enabled, the lack of any of those keys will prevent the application from starting.

Keys in Azure Key Vault must conform to a simple rule: they can only contain alphanumeric characters or dashes. Double dashes are special characters used to declare nested keys, e.g. Logging--Elastic--Nodes.
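For example, the standard Key Vault configuration provider maps double dashes to the : separator used by IConfiguration:

Logging--Elastic--Nodes        ->  Logging:Elastic:Nodes
ConnectionStrings--ServiceBus  ->  ConnectionStrings:ServiceBus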

Elasticsearch logging

Is configured by default - yes

Is enabled by default - yes

Tokens taken from KeyVault: Logging--Elastic--Node, Logging--Elastic--Credentials--Username, Logging--Elastic--Credentials--Password

Logging to Elasticsearch is configured in Logging:Elastic section of appsettings.

To enable it locally, start an Elasticsearch container:

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.3

and configure the Logging:Elastic section to point at it:

{
  ...
  "Logging": {
    "Elastic": {
	    "Enabled": true,
	    "Node": "http://localhost:9200/"
    }
  },
  ...
}

For more information about logging refer to the Wiki.

Elastic APM

Is configured by default - yes

Is enabled by default - yes

Tokens taken from KeyVault: Logging--ElasticApm--ServerUrl, Logging--ElasticApm--SecretToken

Elastic APM is a dependency that measures the performance of application dependencies, e.g. the duration of HTTP calls or the execution of SQL queries.

Configuration is done in the ElasticApm section of the appsettings.json file, e.g.

{
  ...
  "ElasticApm": {
    "Enabled": true,
    "ServerUrl": "",
    "SecretToken": "",
    "TransactionSampleRate": 1.0,
    "CaptureBody": "errors",
    "ServiceName": "ServiceName"
  },
  ...
}

There are multiple components of Elastic APM that can be configured in the same section:

  1. Core configuration:
  2. Reporter configuration:
  3. Http configuration:

For more information about logging refer to the Wiki.

Sentry

Is configured by default - no

Is enabled by default - no

Comment: Before enabling Sentry logging, you need to create a project in Sentry on Lab and Prd and fill in SentryDsn in the Lab and Prd KeyVaults.

Tokens taken from KeyVault: Logging--Sentry--Dsn

Configuration is done in the Sentry section of the appsettings.json file, e.g.

{
  ...
  "Sentry": {
    "Enabled": false,
    "IncludeRequestPayload": true,
    "SendDefaultPii": true,
    "MinimumBreadcrumbLevel": "Debug",
    "MinimumEventLevel": "Error",
    "AttachStacktrace": true,
    "Debug": true,
    "DiagnosticLevel": "Error",
    "RateLimitInMilliseconds": 2000,
    "Dsn": ""
  },
  ...
}

For more information about logging refer to the Wiki.

Feature toggle

Is configured by default - yes

Is enabled by default - yes

Tokens taken from KeyVault: FeatureToggle--ConfigCatBlobUrl, FeatureToggle--ConfigCatSdkKey

For now we are using Optimizely and ConfigCat as feature toggle services, which is why we currently need to configure both providers. Configuration is defined in the FeatureToggle section of appsettings.json.

"FeatureToggle": {
    "ConfigCatBlobUrl": "",
    "ConfigCatSdkKey": ""
}

For more options please refer to Ev.FeatureToggle repository.

For local development, the Optimizely and ConfigCat feature toggles are disabled by setting the "Enable" flag to false. For more information refer to the Feature Toggle Local provider.

ServiceBus

Is configured by default - yes

Is enabled by default - no

For Service Bus messaging, we are using the Ev.ServiceBus NuGet package. You should check the documentation of this NuGet package before going further.

Comment: By default, the source code references testqueue, which does not exist; if the feature is enabled, this creates many errors in the service output. You need to change the code to reference existing queues or subscriptions before enabling this feature.

Tokens taken from KeyVault: ConnectionStrings--ServiceBus

ServiceBus configuration is done in two places. First, the connection string to the ServiceBus namespace is in the ConnectionStrings section; update it in appsettings.Development.json to avoid accidentally pushing it to Azure DevOps. The other settings are in the ServiceBus section:

"ServiceBus": {
  "Enabled": true,
  "ReceiveMessages": true
},

Default values for these options for local development are in the launchSettings.json file and are set to false.

Creating new queues/topics/subscriptions

Our current infrastructure supports creating topics and queues from a single source of truth, which is the ARM template located in Borat. Because of that, please add the topic/queue configuration to that file.

After that, you'll need to declare the resource in your application; check the documentation for that.

SQL connection

Is configured by default - yes

Is enabled by default - yes

Tokens:

#{EcoPortalDataBase}# - value taken from the linked variable groups per environment:

By default the application will connect to the local SQL Server and the ecoPortal database. If you want to customize the connection string, please use either the appsettings.Development.json or the launchSettings.json file.
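For example, a local override in appsettings.Development.json could look like this (the connection string name and value are assumptions; match them to your service's configuration):

{
  "ConnectionStrings": {
    "SqlServer": "Server=localhost;Database=ecoPortal;Trusted_Connection=True;"
  }
}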

AzureSql database configuration steps

Healthcheck

For the SQL connection you can override the default timeout (2 seconds) in the configuration file:

{
  ...
  "Sql": {
    "HealthcheckTimeoutSeconds": 2
  },
  ...
}

Authentication

If you are using a BFF, use the BFF pattern. For more information refer to

Ev.Authentication.Bff-Implementation-guide

Run Service locally (Advanced)

The service might need other services running in the background.

To prepare the dockerized environment, run .\docker-setup.ps1 in Windows PowerShell (NOT Core) with elevated permissions (you may need the Azure CLI installed). Once completed, you will have all needed dependencies running in the background, and a "borat-api" access token that can be used for authentication will be printed to the console.

Other scripts in the repository:

Once the dependencies are running, open the solution in VS and press F5. Swagger should open. Authorize in Swagger with the access token from the console and execute "hello" via Swagger, or simply:

curl -X POST "https://localhost:5302/rpc/VerticalDomain/hello" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer XXX" -d "{}"

where XXX is any valid access token.

Run image from ACR

az login (choose lab credentials)
az acr login -n cicdcr01weuy01
docker pull cicdcr01weuy01.azurecr.io/vertical-domain:latest
docker run -d -p 127.0.0.1:8080:80/tcp cicdcr01weuy01.azurecr.io/vertical-domain:latest