# DevOps notes

## Dotnet templates

### Testing locally

- Clone this repository.
- Go into this directory.
- Execute the following command:

```
dotnet new -i .\templates\bkr.microservice
```

- This will install the template for `dotnet new` usage.
- Create a directory for your microservice and go into it.
- Type:

```
dotnet new bkr.microservice --name Cheese.Cake
```

- You have scaffolded a new service.
### Distributing with package

- Execute `dotnet pack` from the templates directory. This will create a package with the templates in `.\bin\Debug\Bkr.Template.Service.1.0.0.nupkg`.
- Run:

```
dotnet new -i .\bin\Debug\Bkr.Template.Service.1.0.0.nupkg
```

- This will install the template for `dotnet new` usage.
- Create a directory for your microservice and go into it.
- Type:

```
dotnet new bkr.microservice --name Cheese.Cake
```

- You have scaffolded a new service.
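If you iterate on the template locally, it may help to uninstall the previously registered version before installing it again. A sketch using the standard `dotnet new` uninstall option (on some SDK versions the uninstall argument must be the absolute folder path):

```
dotnet new -u .\templates\bkr.microservice
dotnet new -i .\templates\bkr.microservice
```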
### Adding new project to solution

If you are creating a new project for the template, please add it to both solutions, i.e.:

- `template.dotnet.service.sln` - solution for developing templates,
- `templates\bkr.microservice\template\Vertical.Domain.sln` - solution containing the template microservice.
### Template options

As in the previous examples, the template has some options that can be overridden:

- `name` - provides the domain name of the service, e.g. `Evaluation.Scoring`. This parameter is used to replace file names, class names, namespaces, the name of the deployment in k8s, etc.
- `skipRestore` - skips the optional step of restoring NuGet packages after creating the template.
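For example, both options can be combined on the command line (assuming the standard `dotnet new` syntax for boolean template parameters):

```
dotnet new bkr.microservice --name Evaluation.Scoring --skipRestore true
```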
### How is `name` used

- `name` as-is is used for namespaces,
- `name` transformed to an alphanumeric value (e.g. `Evaluation.Scoring` to `EvaluationScoring`) is used for class names and file names,
- `name` transformed to lower case with special characters replaced by hyphens (e.g. `Evaluation.Scoring` to `evaluation-scoring`) is used for the k8s deployment name.
IMPORTANT!!!

`Name` must conform to the domain split diagram, which can be found here: diagram. If you are creating a new service, please add it to the aforementioned diagram so that everybody can see what our architecture looks like.
### How templating works

Symbols that can be used as templating variables are defined in the `template.json` file. These symbols can be parameters for the template, like `name`, or generated, like `httpsPort`. `name` is a special parameter that is included by default; its default value is placed in the `sourceName` property of the root object.
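For orientation, a trimmed sketch of how those pieces sit together in `template.json` (the field values here are illustrative, not copied from the repository; the schema fields themselves are standard `dotnet new` templating):

```json
{
  "$schema": "http://json.schemastore.org/template",
  "identity": "Bkr.Template.Service",
  "shortName": "bkr.microservice",
  "sourceName": "Vertical.Domain",
  "symbols": {
    "skipRestore": { "type": "parameter", "dataType": "bool", "defaultValue": "false" },
    "httpsPort": { "type": "generated", "generator": "port" }
  }
}
```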
### Used templating variables

#### Http port

This is a generated parameter that randomizes the port number for a given service between 3000 and 6000.
"httpsPort": {
"type": "generated",
"generator": "port",
"parameters": {
"low": 3000,
"high": 6000
},
"replaces": "5302"
}
#### Name

The JSON below shows how `name` is transformed for different purposes in the codebase.

- `domainLower` - an intermediate rule that creates a lower-case version of `name`, e.g. `Flying.Tomato` to `flying.tomato`. It does not replace any value in the newly created project.
- `domainAlphanumeric` - similar to the previous parameter, except that it depends on the `name` parameter. Its value is used for replacing the `VerticalDomain` string both in file contents and in file names.
- `domainLowerHyphens` - another regex transformer. It is based on `domainLower` and replaces special characters with hyphens, e.g. `flying.tomato` to `flying-tomato`. It is used to replace the `vertical-domain` string in file contents.
- `domainLowerPath` - another regex transformer. It is based on `domainLower` and replaces special characters with `/`, e.g. `flying.tomato` to `flying/tomato`. It is used to replace the `vertical/domain` string in file contents.
"domainLower":{
"type": "generated",
"generator": "casing",
"parameters": {
"source": "name",
"toLower": true
}
},
"domainAlphanumeric": {
"type": "generated",
"generator": "regex",
"dataType": "string",
"replaces": "VerticalDomain",
"fileRename": "VerticalDomain",
"parameters": {
"source": "name",
"steps": [
{
"regex": "[^a-zA-Z\\d]",
"replacement": ""
}
]
}
},
"domainLowerHyphens":{
"type": "generated",
"generator": "regex",
"dataType": "string",
"replaces": "vertical-domain",
"parameters": {
"source": "domainLower",
"steps": [
{
"regex": "[^a-zA-Z\\d]",
"replacement": "-"
}
]
}
},
"domainLowerPath": {
"type": "generated",
"generator": "regex",
"dataType": "string",
"replaces": "vertical/domain",
"parameters": {
"source": "domainLower",
"steps": [
{
"regex": "[^a-zA-Z\\d]",
"replacement": "/"
}
]
}
}
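Putting it all together, for `dotnet new bkr.microservice --name Flying.Tomato` the symbols above resolve as follows:

| Symbol | Value | Replaces |
| --- | --- | --- |
| `name` | `Flying.Tomato` | `Vertical.Domain` (via `sourceName`) |
| `domainLower` | `flying.tomato` | nothing (intermediate) |
| `domainAlphanumeric` | `FlyingTomato` | `VerticalDomain` (contents and file names) |
| `domainLowerHyphens` | `flying-tomato` | `vertical-domain` |
| `domainLowerPath` | `flying/tomato` | `vertical/domain` |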
#### SourceName

As already mentioned, this is a global symbol not defined in the symbols list. In the template its default value is `Vertical.Domain`, and it is used to replace file names and code fragments such as namespaces, `csproj` files, solution files, etc.
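For illustration (the file and namespace names here are hypothetical), scaffolding with `--name Cheese.Cake` would rewrite everything derived from `Vertical.Domain`:

```
Vertical.Domain.sln             ->  Cheese.Cake.sln
Vertical.Domain.Api.csproj      ->  Cheese.Cake.Api.csproj
namespace Vertical.Domain.Api   ->  namespace Cheese.Cake.Api
```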
_______________________________________________________________________________________________
## Database migration project

### Context

Database migrations are contained in a separate project and are executed upon deployment. This is a guide, based on trial and error, for setting up a migration project.
### 1. Create a console application project

- The name of the project should be explicit enough, for example `DatabaseMigrationTool`.
- The project may be located in the root of the solution or in a specific folder.
- The database migration project must be a console-based application.

Add the following NuGet packages:

- Ev.DbUpdate.KeyVault
- Microsoft.Extensions.Hosting
- Microsoft.Extensions.Hosting.Abstractions

In `Program.cs` add the following lines, replacing the placeholders with what corresponds:
```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
// plus the using directives for the Ev.DbUpdate.KeyVault types used below

public class Program
{
    // Root namespace of this migration project, e.g. "My.Service.DatabaseMigrationTool"
    private static string _fullNamespace = "<DbMigrationNameProject>";
    private static string? _connectionString = string.Empty;

    static int Main(string[] args)
    {
        int returnCode;
        Console.WriteLine("<ServiceNameProject> database migration host starting !!!");
        using (var host = CreateHost(args))
        {
            returnCode = host.StartWithResult();
        }
        Console.WriteLine("<ServiceNameProject> database migration host stopped !!!");
        return returnCode;
    }

    private static IHost CreateHost(string[] args)
    {
        var namespacePrefix = typeof(Program).Namespace;
        var host = Host
            .CreateDefaultBuilder(args)
            .UseConsoleLifetime();
        host = host.UseDbUpdateWithKeyVaultConfigurationSourceAndArgs(
                args,
                ConfigureOptions()
            )
            .ConfigureServices((hostContext, _) =>
            {
                Extensions.TransformAzureSqlConnectionStringForClientSecretAuth(hostContext.Configuration);
                _connectionString = hostContext.Configuration.GetConnectionString("SqlServer");
            });
        return host.Build();
    }

    private static Action<DbUpdateConfigurationBuilder> ConfigureOptions()
    {
        return options => options
            .WithUpgradeScriptsInNamespace($"{_fullNamespace}.UpScripts")
            .WithDowngradeScriptsInNamespace($"{_fullNamespace}.DownScripts")
            .WithPreScripts($"{_fullNamespace}.PreScripts")
            .WithPostScripts($"{_fullNamespace}.PostScripts")
            .WithConnectionString(_connectionString);
    }
}
```
Then, add a file called `Extensions.cs` at the root of the project with the following code:
```csharp
using System.Linq;
using System.Text;
using Microsoft.Extensions.Configuration;

internal static class Extensions
{
    private static char _connectionStringPartsSeparator = ';';

    private static string AsConnectionStringWithKeysRemoved(string connectionString, string[] connStringKeysToRemove)
    {
        var connStringParts = connectionString.Split(_connectionStringPartsSeparator);
        var builder = new StringBuilder();
        foreach (var part in connStringParts.Where(_ => !string.IsNullOrEmpty(_)))
        {
            var partKeyValue = part.Split('=');
            var key = partKeyValue[0];
            if (connStringKeysToRemove.Any(_ => _.Equals(key)))
            {
                continue;
            }
            builder.Append(part);
            if (part[^1] != _connectionStringPartsSeparator)
            {
                builder.Append(_connectionStringPartsSeparator);
            }
        }
        return builder.ToString();
    }

    public static void TransformAzureSqlConnectionStringForClientSecretAuth(this IConfiguration configuration)
    {
        var azureSqlConnectionString = configuration["ConnectionStrings:SqlServer"]!;
        configuration["ConnectionStrings:SqlServer"] =
            AsConnectionStringWithKeysRemoved(azureSqlConnectionString, new[] { "Uid", "Authentication" });
    }
}
```
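For illustration, a hypothetical before/after of the transform (`Uid` and `Authentication` are exactly the keys stripped by the code above):

```csharp
// Before (value as it might arrive from Key Vault; hypothetical):
//   "Server=tcp:mydb.database.windows.net;Database=app;Uid=svc;Authentication=Active Directory Service Principal;"
// After TransformAzureSqlConnectionStringForClientSecretAuth:
//   "Server=tcp:mydb.database.windows.net;Database=app;"
// The client-secret credential configured in the AzureIdentity section then supplies authentication.
```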
The `appsettings.json` should look like this:
```json
{
  "ConnectionStrings": {
    "SqlServer": ""
  },
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "DbUpdate": {
    "Timeout": "00:01:00.000",
    "ConnectionStringName": "SqlServer"
  },
  "KeyVault": {
    "Enabled": true,
    "Url": "https://#{keyVaultName}#.vault.azure.net/"
  },
  "DatabaseAadAccess": {
    "Enabled": true
  },
  "AzureIdentity": {
    "Enabled": true,
    "ClientId": "#{clientId}#",
    "ClientSecret": "#{clientSecret}#",
    "TenantId": "#{azAccountTenantId}#"
  }
}
```
There’s no need to add the connection string for the database; it will be taken from Key Vault automatically. There’s also no need to add `appsettings.json` files for different environments.
### 2. Create the Scripts folders

Create two folders at the root of the project, one called `DownScripts` and the other called `UpScripts`. These two folders will contain the scripts to update the database and the scripts to revert those updates. Example of a script named InitialSetup: `{YYYYMMDD}_{HHmmss}_InitialSetup.sql`
```sql
SET XACT_ABORT ON
SET NOCOUNT ON
BEGIN TRAN
BEGIN TRY
    PRINT 'Bootstrapping <ServiceName> schema...';
    IF (SCHEMA_ID('<SchemaName>') IS NULL)
    BEGIN
        EXEC ('CREATE SCHEMA [<SchemaName>] AUTHORIZATION [dbo]')
        PRINT 'Schema [<SchemaName>] - created.';
    END
    ELSE
        PRINT 'Schema [<SchemaName>] - already in place.';
    COMMIT
END TRY
BEGIN CATCH
    ROLLBACK;
    THROW;
END CATCH;
```
Example of a down script named InitialSetup: `{YYYYMMDD}_{HHmmss}_InitialSetup.sql`. In order to be able to execute the DowngradeToMigration action, the up and down scripts should have the same name.
```sql
SET XACT_ABORT ON
SET NOCOUNT ON
BEGIN TRAN
BEGIN TRY
    IF (SCHEMA_ID('<SchemaName>') IS NOT NULL)
    BEGIN
        EXEC ('DROP SCHEMA [<SchemaName>]')
        PRINT 'Schema [<SchemaName>] - dropped.';
    END
    COMMIT
END TRY
BEGIN CATCH
    ROLLBACK;
    THROW;
END CATCH;
```
Each down script should undo the corresponding up script.
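Later migrations follow the same pattern and naming convention; a hypothetical follow-up pair (same file name in both folders) could look like:

```sql
-- UpScripts\20240115_093000_AddOrderTable.sql (hypothetical example)
IF OBJECT_ID('<SchemaName>.Order') IS NULL
    CREATE TABLE [<SchemaName>].[Order] (
        [Id] INT IDENTITY(1,1) PRIMARY KEY,
        [CreatedAt] DATETIME2 NOT NULL
    );

-- DownScripts\20240115_093000_AddOrderTable.sql (reverses the script above)
IF OBJECT_ID('<SchemaName>.Order') IS NOT NULL
    DROP TABLE [<SchemaName>].[Order];
```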
Lastly, make sure the following lines are in the `.csproj`:
```xml
<ItemGroup>
  <None Remove="appsettings.json" />
  <None Remove="DownScripts\*" />
  <None Remove="UpScripts\*" />
</ItemGroup>
<ItemGroup>
  <Content Include="appsettings.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    <ExcludeFromSingleFile>true</ExcludeFromSingleFile>
    <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
  </Content>
</ItemGroup>
<ItemGroup>
  <EmbeddedResource Include="DownScripts\*">
    <CopyToOutputDirectory>Never</CopyToOutputDirectory>
  </EmbeddedResource>
  <EmbeddedResource Include="UpScripts\*">
    <CopyToOutputDirectory>Never</CopyToOutputDirectory>
  </EmbeddedResource>
</ItemGroup>
```
This ensures that no matter how many scripts you add, the `.csproj` file stays clean; no lines need to be added when you add new scripts.
That’s all for the migration project.
### 3. Changes to the Service Project

If the database hasn’t been created yet, make sure to uncomment the corresponding section in the `infrastructure.bicep` file.

At the root of the project there are a `package.yml` and a `pullrequest.yml`.

#### package.yml

Set the following variable in the `package.yml` in the variables section:
```yaml
- name: databaseMigrationCsprojName
  value: '<DatabaseMigrationProjectName>'
```
In the jobs section below, set the parameter `publishDatabaseMigration: 'true'`.
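A sketch of how the fragment can end up looking in `package.yml` (the job template name is a placeholder; your pipeline will differ):

```yaml
variables:
  - name: databaseMigrationCsprojName
    value: '<DatabaseMigrationProjectName>'

jobs:
  - template: <your-job-template>.yml   # placeholder
    parameters:
      publishDatabaseMigration: 'true'
```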
#### pullrequest.yml

Set the following variables in the `pullrequest.yml` in the variables section:
```yaml
- name: databaseProjectLocation
  value: <DatabaseMigrationProjectName>
- name: databaseMigrationCsprojName
  value: <DatabaseMigrationProjectName>
```
In the jobs section below, set the parameter `runDatabaseMigration: 'true'`.
#### Variable groups

The last change is to add the following variables to the Library in Azure DevOps. Some people may not have permissions, but a DevOps ambassador can do it instead. Both Prd and Lab variable groups should be modified. Pattern for variable group names:

- Lab-&lt;ServiceName&gt;-Common
- Prd-&lt;ServiceName&gt;-Common

Variables to set/create:

- `databaseMigrationCsprojName` = `<DatabaseMigrationProjectName>` (without `.csproj` and with no quotes)
- `publishDatabaseMigration` = `true`
________________________________________________________________________________________________
## Service Template Guide

This document describes how to use the service template.

NOTE: Please visit the Service Documentation for a detailed description of this service and its Runbook.
### Health

The template service exposes three health endpoints that are needed for the deployment to work properly in Kubernetes:

- `livez` - simple endpoint for determining whether the service is alive; it is called by the liveness probe,
- `readyz` - health endpoint for determining whether the service can accept requests; it is called by the readiness probe,
- `startz` - health endpoint for determining whether the service was able to start. It is executed once, just after the application starts, and can check whether everything is configured properly (e.g. the Service Bus).

The readiness probe checks all dependencies of the service that are required for it to deliver its business value. By default two dependencies are configured: the SQL connection and the IdP connection.
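The template wires these probes up for you; purely for illustration, a minimal ASP.NET Core sketch of the same idea (the lambda checks are stand-ins for the real SQL/IdP checks):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

// Tag the dependency checks so the readiness endpoint can select them.
builder.Services.AddHealthChecks()
    .AddCheck("sql", () => HealthCheckResult.Healthy(), tags: new[] { "ready" })  // stand-in for the SQL check
    .AddCheck("idp", () => HealthCheckResult.Healthy(), tags: new[] { "ready" }); // stand-in for the IdP check

var app = builder.Build();

// livez: no dependency checks, only "the process responds".
app.MapHealthChecks("/livez", new HealthCheckOptions { Predicate = _ => false });
// readyz: every check tagged "ready".
app.MapHealthChecks("/readyz", new HealthCheckOptions { Predicate = c => c.Tags.Contains("ready") });

app.Run();
```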
### Service Configuration

By default KeyVault is used to store all secrets and configuration that differ between environments. Applications connect to KeyVault via a Service Principal (in the future this could be a Managed Identity). There is a separate KeyVault per application per environment.

KeyVault secrets can be created in multiple ways:

- copied from the environment keyvault,
- as an output from resources created in the bicep file,
- created manually by DevOps or DevOps Ambassadors.

By convention, `appsettings.json` should not contain any tokens (`#{token}#`).
#### Configuration providers

Configuration for the service is loaded from a couple of different places:

- `appsettings.json`
- environment variables
- command line
- Azure Key Vault - disabled for local development
- `appsettings.Development.json` - for local development only

The order in the list above matters: if the same key is defined by multiple providers, the value declared by a provider later in the list takes precedence.
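The template composes these providers for you; conceptually the order is equivalent to this sketch (the Key Vault line is a placeholder for whatever the internal package registers):

```csharp
using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false)
    .AddEnvironmentVariables()
    .AddCommandLine(args)
    // .AddAzureKeyVault(...)                                      // disabled for local development
    .AddJsonFile("appsettings.Development.json", optional: true)   // local development only
    .Build();

// Later providers win: a key in appsettings.Development.json
// overrides the same key from any provider above it.
```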
#### Configuration via AzureKeyVault

Is configured by default: yes
Is enabled by default: yes

Azure Key Vault is used to securely store keys, passwords and other sensitive data, and serves as one of the configuration providers for the service. All keys defined in it are loaded into the `IConfiguration` object and can be used to configure the service.

Important: access to AzureKeyVault is NOT configured with tokens in the `appsettings.json` file. The Service Principal configuration is provided by environment variables.

Configuration:
"KeyVault": {
"Enabled": true,
"Url": "",
"TennantId": "",
"ClientId": "",
"ClientSecret": ""
}
- `Enabled` - whether or not KeyVault integration is enabled. Enabled by default, disabled locally.
- `Url` - the Azure Key Vault URI,
- `TennantId` - Azure Active Directory tenant Id of the service principal,
- `ClientId` - client (application) ID of the service principal,
- `ClientSecret` - client secret that was generated for the App Registration used to authenticate the client.

All keys except `Enabled` are required; if KeyVault integration is enabled, the lack of any of them will prevent the application from starting.

Keys in Azure Key Vault must conform to a simple rule: they can only contain alphanumeric characters or dashes. Dashes are special characters used to declare nested keys, e.g. `Logging--Elastic--Nodes`.
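For example, a secret named `Logging--Elastic--Nodes` surfaces in `IConfiguration` as the nested key `Logging:Elastic:Nodes`:

```csharp
// Key Vault secret name:       Logging--Elastic--Nodes
// Equivalent appsettings.json: { "Logging": { "Elastic": { "Nodes": "..." } } }
var nodes = configuration["Logging:Elastic:Nodes"];
```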
#### Elasticsearch logging

Is configured by default: yes
Is enabled by default: yes

Tokens taken from KeyVault: `Logging--Elastic--Node`, `Logging--Elastic--Credentials--Username`, `Logging--Elastic--Credentials--Password`

Logging to Elasticsearch is configured in the `Logging:Elastic` section of appsettings.

To enable it locally:

- run Elasticsearch; you can use Docker to do that, e.g. with the following command (update the ES version when necessary):

```
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.3
```

- update the `Logging:Elastic` section of `appsettings.Development.json` to contain:
```json
{
  ...
  "Logging": {
    "Elastic": {
      "Enabled": true,
      "Node": "http://localhost:9200/"
    }
  },
  ...
}
```
For more information about logging refer to the Wiki.
#### Elastic APM

Is configured by default: yes
Is enabled by default: yes

Tokens taken from KeyVault: `Logging--ElasticApm--ServerUrl`, `Logging--ElasticApm--SecretToken`

Elastic APM is a dependency that measures the performance of application dependencies, e.g. the duration of HTTP calls or the execution of SQL queries.

Configuration is done in the `ElasticApm` section of the `appsettings.json` file, e.g.
```json
{
  ...
  "ElasticApm": {
    "Enabled": true,
    "ServerUrl": "",
    "SecretToken": "",
    "TransactionSampleRate": 1.0,
    "CaptureBody": "errors",
    "ServiceName": "ServiceName"
  },
  ...
}
```
There are multiple components of Elastic APM that can be configured in the same section.

- `Enabled` - indicates whether or not to trace all dependencies,
- `ServiceName` - name of the service under which it will be visible in the Elastic APM interface,
- `TransactionSampleRate` - indicates what percentage of transactions to track,
- `ServerUrl` - Elastic APM server URL,
- `SecretToken` - token used to authenticate to the Elastic APM server,
- `CaptureBody` - indicates whether or not to capture the body of POST requests. Available options: `off`, `errors`, `transactions`, `all`.
For more information about logging refer to the Wiki.
#### Sentry

Is configured by default: no
Is enabled by default: no

Comment: before enabling Sentry logging, you need to create a project in Sentry on Lab and Prd and fill in the SentryDsn in the Lab and Prd KeyVault.

Tokens taken from KeyVault: `Logging--Sentry--Dsn`

Configuration is done in the `Sentry` section of the `appsettings.json` file, e.g.
```json
{
  ...
  "Sentry": {
    "Enabled": false,
    "IncludeRequestPayload": true,
    "SendDefaultPii": true,
    "MinimumBreadcrumbLevel": "Debug",
    "MinimumEventLevel": "Error",
    "AttachStacktrace": true,
    "Debug": true,
    "DiagnosticLevel": "Error",
    "RateLimitInMilliseconds": 2000,
    "Dsn": ""
  },
  ...
}
```
For more information about logging refer to the Wiki.
#### Feature toggle

Is configured by default: yes
Is enabled by default: yes

Tokens taken from KeyVault: `FeatureToggle--ConfigCatBlobUrl`, `FeatureToggle--ConfigCatSdkKey`

For now we are using Optimizely and ConfigCat as feature toggle services, which is why we currently need to configure both providers.

Configuration is defined in the `FeatureToggle` section of `appsettings.json`.
"FeatureToggle": {
"ConfigCatBlobUrl": "",
"ConfigCatSdkKey": ""
}
- `ConfigCatBlobUrl` - directory of the ConfigCat feature toggles file in blob storage. This is not a full path; because of how ConfigCat works, it is the root directory for the environment, to which `configuration-files/{SdkKey}/config_v5.json` will be appended,
- `ConfigCatSdkKey` - API key for accessing ConfigCat.

For more options please refer to the Ev.FeatureToggle repository.

For local development, the Optimizely and ConfigCat feature toggles are disabled by setting the "Enabled" flag to false. For more information refer to Feature Toggle Local provider.
#### ServiceBus

Is configured by default: yes
Is enabled by default: no

For Service Bus messaging we are using the Ev.ServiceBus NuGet package. Check the documentation of this NuGet package before going further.

Comment: by default the source code references `testqueue`, which does not exist and, if enabled, produces many errors in the service output. You need to change the code to reference existing queues or subscriptions before enabling this feature.

Tokens taken from KeyVault: `ConnectionStrings--ServiceBus`

Configuration of the Service Bus is done in two places. First, the connection string to the Service Bus namespace is in the `ConnectionStrings` section. Update it in `appsettings.Development.json` to avoid accidentally pushing it to Azure DevOps.

Other settings are in the `ServiceBus` section:
"ServiceBus": {
"Enabled": true,
"ReceiveMessages": true
},
The default values of those options for local development are in the `launchSettings.json` file and are set to `false`.
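A sketch of how those defaults can look in `launchSettings.json` (the profile name is hypothetical; double underscores in environment variables map to nested configuration keys):

```json
{
  "profiles": {
    "Vertical.Domain.Api": {
      "commandName": "Project",
      "environmentVariables": {
        "ServiceBus__Enabled": "false",
        "ServiceBus__ReceiveMessages": "false"
      }
    }
  }
}
```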
##### Creating new queues/topics/subscriptions

Our current infrastructure supports creating topics and queues from a single source of truth, which is the ARM template located in `Borat`. Because of that, please add your topic/queue configuration to that file.

After that, you’ll need to declare the resource in your application; check the documentation for that.
#### SQL connection

Is configured by default: yes
Is enabled by default: yes

Tokens:

- `#{EcoPortalDataBase}#` - value taken from the linked variable groups per environment:
  - Lab-Database-{Environment}
  - Prd-Database

By default the application will connect to the `ecoPortal` database on a local SQL Server. If you want to customize the connection string, please use either the `appsettings.Development.json` or the `launchSettings.json` file.
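For example, a customized local connection string in `appsettings.Development.json` (values hypothetical):

```json
{
  "ConnectionStrings": {
    "SqlServer": "Server=localhost;Database=ecoPortal;Integrated Security=true;TrustServerCertificate=true"
  }
}
```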
##### AzureSql database configuration steps

1. Create the DatabaseMigrationProject.
2. Configure `package.yaml`:
   - Set the variable `databaseMigrationCsprojName` to the previously created DatabaseMigrationProject name.
   - Change the `publishDatabaseMigration` value to `true`.
3. Configure `pullrequest.yaml`:
   - Set the variable `databaseProjectLocation` to the previously created DatabaseMigrationProject name.
   - Set the variable `databaseMigrationCsprojName` to the previously created DatabaseMigrationProject name.
   - Change the `runDatabaseMigration` value to `true`.
4. Uncomment the `module service_sqlDatabase` section in the `infrastructure.bicep` file.
5. The KeyVault secret `ConnectionStrings--SqlServer` with the correct value should be created automatically after the first deployment.
##### Healthcheck

For the SQL connection you can override the default timeout (2 seconds) in the configuration file:
```json
{
  ...
  "Sql": {
    "HealthcheckTimeoutSeconds": 2
  },
  ...
}
```
### Authentication

If you are using a BFF, use the BFF pattern. For more information refer to Ev.Authentication.Bff-Implementation-guide.
### Run Service locally (Advanced)

The service might need other services running in the background.

To prepare a dockerized environment, run `.\docker-setup.ps1` in Windows PowerShell (NOT Core) with elevated permissions (you may need the Azure CLI installed). Once completed, you will have all needed dependencies running in the background, and a “borat-api” access token that can be used for authentication will be printed to the console.

Other scripts in the repository:

- `.\docker-start.ps1` - once you have things set up, you can use this script to just start the dockerized development environment,
- `.\docker-pull.ps1` - run it to get the latest images from the Docker repository,
- `.\get-api-token.ps1` - gets the “borat-api” token from the IDP running in the background and prints it to the console.

Once the dependencies are running, open the solution in VS and press F5. Swagger should open.

Once the service is running, authorize in Swagger with the access token from the console and execute “hello” via Swagger, or simply:

```
curl -X POST "https://localhost:5302/rpc/VerticalDomain/hello" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer XXX" -d "{}"
```

where `XXX` is any valid access token.
### Run image from ACR

```
az login   # choose lab credentials
az acr login -n cicdcr01weuy01
docker pull cicdcr01weuy01.azurecr.io/vertical-domain:latest
docker run -d -p 127.0.0.1:8080:80/tcp cicdcr01weuy01.azurecr.io/vertical-domain:latest
```