During the last DockerCon EU in Barcelona, I heard a lot about Helm in the context of K8s architectures, and attending the conference was a good opportunity to write about it.


In a nutshell, Helm is a package manager for K8s; you may think of it like the ones available on the Linux side, such as apt, yum or zypper, to cite a few of them. Helm charts are a key component of the Helm architecture and make deployments easy, standardized and reusable, which is exactly what I was looking for in our current CI/CD pipeline implementation for our DMK maintenance tool.
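For readers new to Helm, the day-to-day workflow is very close to that of a classic package manager. A quick sketch (the syntax used throughout this post corresponds to Helm 2):

$ helm repo update                  # refresh the configured chart repositories
$ helm search mssql                 # look for available SQL Server charts
$ helm install stable/mssql-linux   # install a chart as a new release
$ helm ls                           # list the deployed releases
$ helm delete <release-name>        # remove a release and its resources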

Helm matters for enterprise-scale deployments because it addresses common challenges, with the following (non-exhaustive) list of capabilities:

  • Helm charts can be shared easily across the enterprise or with contributors around the world from a GitHub repository.
  • Helm charts allow you to quickly get a specific environment for testing.
  • Existing charts can be customized for specific deployments depending on the context.
  • The easy deployment and deletion of applications makes Helm adoption easier.
  • Production-ready packages are possible; they eliminate deployment errors due to incorrect configuration files and reduce the complexity of maintaining an application catalog.

In my case, I have had in mind for a while to simplify my first SQL Server container deployments on K8s, which relied on a complex YAML file including a lot of objects like services, pods, secrets and persistent volumes, by using Helm charts. One additional motivation was the capability to change some preconfigured settings on the fly when I wanted to switch from my minikube environment to my AKS cluster on Azure.
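That is precisely what Helm’s values override mechanism is designed for. As a quick sketch (the storage class override below is only an illustration; managed-premium is a built-in storage class on AKS, while minikube can rely on its default provisioner), the same chart may be deployed on both environments just by overriding the relevant values at install time:

$ # minikube: keep the default storage class configured in values.yaml
$ helm install --name sqlhelm .

$ # AKS: override the storage class (and any other setting) for this environment
$ helm install --name sqlhelm . --set persistence.storageClass=managed-premium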

In this first write-up I used a custom dbi services image for SQL Server (a production-ready docker image) and I decided to use this image as the basis of my custom Helm chart. First of all, let’s say I didn’t start from scratch: I used the mssql-linux stable chart available on GitHub, but obviously I customized it for my own requirements:

  • The custom dbi services image for SQL Server includes the creation of the flexible architecture, and I had to update the persistent volume and claims configuration to match this new storage layout.
  • The custom image leverages the (optional) deployment of our DMK maintenance tool, which includes different SQL objects to perform maintenance of customer databases (basically update statistics, index rebuild and backup tasks). So, I needed to add a parameter to enable or disable the deployment of this tool inside the pod.
  • T-SQL scripts are also executed during the container startup: they apply different server-level configuration settings, configure the tempdb database file placement and add some trace flags to meet our best practices. There is no real impact on the Helm chart here, though.
  • An “application” user may be created (optional) and will be part of the dbcreator server role, according to the least privilege principle. In most cases we consider that an application doesn’t need sysadmin privileges, even on a SQL Server pod and, more generally speaking, on microservice architectures. So, as already done for the DMK parameter described previously, I had to add another parameter to create this user when the pod is spun up.

Let’s begin with my Helm chart folder hierarchy, which includes the important files Chart.yaml, values.yaml and deployment.yaml.

[dab@DBI-LT-DAB:#]> tree /f
…
T:.
│   .helmignore
│   Chart.yaml
│   values.yaml
│
├───charts
└───templates
        deployment.yaml
        NOTES.txt
        pvc-app.yaml
        pvc-backup.yaml
        pvc-data.yaml
        pvc-tempdb.yaml
        pvc-tranlog.yaml
        secret.yaml
        service.yaml
        _helpers.tpl
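The Chart.yaml file itself only carries the chart metadata. As a minimal sketch (the name and versions match the helm ls output shown later in this post; the description text is mine):

apiVersion: v1
name: dbi-mssql-linux
version: 1.0.0
appVersion: "1.0"
description: Helm chart for the dbi services SQL Server (Linux) docker image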

 

Let’s focus on the deployment.yaml file and the customized part within the spec.containers.env section related to my docker image specifications:

  • MSSQL_USER and MSSQL_USER_PASSWORD are environment variables related to my “application” user
  • The DMK environment variable enables the deployment of the DMK maintenance tool

In addition, the environment variables related to the database file placement have been customized for master, tempdb and user databases according to my flexible architecture specifications with:

  • /u00 (for application files)
  • /u01 (for user data and system database files)
  • /u02 (for transaction log files)
  • /u03 (for tempdb database files)
  • /u98 (for backup files).

The mountPath and persistent volume claims sections have also been updated accordingly, as shown below:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "mssql.fullname" . }}
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.deployment.annotations }}
  annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
{{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "mssql.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "mssql.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: ACCEPT_EULA
              value: "{{ .Values.acceptEula.value | upper }}"
            - name: MSSQL_PID
              value: "{{ .Values.edition.value }}"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-sa-secret
                 key: sapassword
            - name: MSSQL_USER
              value: "{{ .Values.usersql.value }}"
            - name: MSSQL_USER_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-user-secret
                 key: userpassword
            - name: DMK
              value: "{{ .Values.DMK.value }}"
            - name: MSSQL_MASTER_DATA_FILE
              value: /u01/sqlserverdata/mssqlserver/master.mdf
            - name: MSSQL_MASTER_LOG_FILE
              value: /u01/sqlserverdata/mssqlserver/mastlog.ldf
            - name: MSSQL_DATA_DIR
              value: /u01/sqlserverdata/mssqlserver
            - name: MSSQL_LOG_DIR
              value: /u02/sqlserverlog/mssqlserver
            - name: MSSQL_TEMPDBDATA_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_TEMPDBLOG_DIR
              value: /u03/sqlservertempdb/mssqlserver
            - name: MSSQL_BACKUP_DIR
              value: /u98/sqlserver/backup/mssqlserver
            - name: MSSQL_ERROR_LOG
              value: /u00/app/sqlserver/admin/mssqlserver/log
            - name: MSSQL_DUMP_DIR
              value: /u00/app/sqlserver/admin/mssqlserver/dump
            - name: MSSQL_TCP_PORT
              value: "{{ .Values.service.port.value }}"
            - name: MSSQL_LCID
              value: "{{ .Values.lcid.value }}"
            - name: MSSQL_COLLATION
              value: "{{ .Values.collation.value }}"
            - name: MSSQL_ENABLE_HADR
              value: "{{ .Values.hadr.value }}"
            {{ if .Values.resources.limits.memory }}
            - name: MSSQL_MEMORY_LIMIT_MB
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            {{ end }}
          ports:
            - name: mssql
              containerPort: {{ .Values.service.port.value }}
          volumeMounts:
            - name: data
              mountPath: /u01
            - name: transactionlog
              mountPath: /u02
            - name: tempdb
              mountPath: /u03
            - name: backup
              mountPath: /u98 
            - name: app
              mountPath: /u00
          livenessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.livenessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessprobe.periodSeconds }}
          readinessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.readinessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessprobe.periodSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      volumes:
      - name: master
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingMasterClaim }}
          claimName: {{ .Values.persistence.existingMasterClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-master
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: data
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingDataClaim }}
          claimName: {{ .Values.persistence.existingDataClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-data
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: transactionlog
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTransactionLogClaim }}
          claimName: {{ .Values.persistence.existingTransactionLogClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-translog
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: tempdb
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingTempdbClaim }}
          claimName: {{ .Values.persistence.existingTempdbClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-tempdb
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: backup
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingBackupClaim }}
          claimName: {{ .Values.persistence.existingBackupClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-backup
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}
      - name: app
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingAppClaim }}
          claimName: {{ .Values.persistence.existingAppClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-app
          {{- end }}
      {{- else }}
        emptyDir: {}
      {{- end }}

 

Referring to my flexible architecture, I added 2 YAML files that contain the new persistent volume claim definitions: pvc-app for /u00 (app) and pvc-tempdb for /u03 (tempdb).

Here is the content of my persistent volume claim for tempdb, for instance:

{{- if and .Values.persistence.enabled (not .Values.persistence.existingTempdbClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "mssql.fullname" . }}-tempdb
  labels:
    app: {{ template "mssql.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
{{- if .Values.persistence.annotations }}
  annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
spec:
  accessModes:
    - {{ .Values.persistence.tempdbAccessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.tempdbSize | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end -}}

 

I also updated secret.yaml to include a section dedicated to my “application” user password:

---
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mssql.fullname" . }}-user-secret
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
type: Opaque
data:
  {{ if .Values.userpassword }}
  userpassword: {{ .Values.userpassword.value | b64enc | quote }}
  {{ else }}
  userpassword: {{ randAlphaNum 20 | b64enc | quote }}
  {{ end }}

 

Note that the Helm chart allows you to define your own password; if it is left empty, a random password is generated instead.
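As an illustration (a sketch based on the values structure above and on the release name sqlhelm used later in this post), the password may either be provided explicitly at install time or decoded afterwards from the generated K8s secret:

$ # provide the password explicitly (Helm 2 syntax)
$ helm install --name sqlhelm . --set userpassword.value=MyS3cretP@ssw0rd

$ # or let the chart generate one and decode it from the secret
$ kubectl get secret sqlhelm-dbi-mssql-linux-user-secret -o jsonpath='{.data.userpassword}' | base64 --decode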

Finally, the values.yaml file contains the predefined values for my release deployment:

# General parameters
acceptEula: 
  value: "Y"
edition: 
  value: "Developer"
DMK: 
  value: "N"
collation: 
  value: SQL_Latin1_General_CP1_CI_AS
lcid: 
  value: 1033
hadr: 
  value: 0
# User parameters
sapassword: 
  value: Password1
usersql: 
  value: dbi_user
userpassword: 
  value: Password2
# Image parameters
image:
  repository: dbi/mssql-server-linux
  tag: 2017-CU12
  pullPolicy: IfNotPresent
# Service parameters
service:
  type: 
    value: LoadBalancer
  port: 
    value: 1433
  annotations: {}
deployment:
  annotations: {}
# Volumes & persistence parameters
persistence:
  enabled: true
  # existingDataClaim:
  # existingTransactionLogClaim:
  # existingBackupClaim:
  # existingMasterClaim:
  # existingAppClaim:
  # existingTempdbClaim:
  storageClass: ""
  dataAccessMode: ReadWriteOnce
  dataSize: 5Gi
  transactionLogAccessMode: ReadWriteOnce
  transactionLogSize: 5Gi
  tempdbAccessMode: ReadWriteOnce
  tempdbSize: 5Gi
  backupAccessMode: ReadWriteOnce
  backupSize: 5Gi
  masterAccessMode: ReadWriteOnce
  masterSize: 5Gi
  appAccessMode: ReadWriteOnce
  appSize: 5Gi
# Probe parameters
livenessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
readinessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
# Resource parameters
resources:
  limits:
  #  cpu: 100m
    memory: 5Gi
  # requests:
  #  cpu: 100m
  #  memory: 2Gi
nodeSelector: {}
  # kubernetes.io/hostname: minikube
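Before actually deploying the release, the rendered manifests can be validated locally. A quick sanity check (both commands are available with the Helm 2 client):

$ helm lint .                        # static checks on the chart structure
$ helm install . --dry-run --debug   # render the templates without deploying anything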

Let’s install my release with the helm command below:

$ helm install --name sqlhelm . --set DMK.value=Y --set service.port.value=1451

 

Pretty simple, right? Note also that I may very easily change predefined parameter values according to my context. For instance, the DMK maintenance tool is not installed by default when the container spins up, and I changed that by explicitly setting DMK.value to Y. The same applies to the SQL Server port exposed through the service: 1433 by default, changed to 1451 in my helm command.
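The same kind of override can also be applied to a release that is already deployed, through an upgrade. A quick sketch with Helm 2 syntax (the port value is only an example):

$ helm upgrade sqlhelm . --set service.port.value=1433
$ helm history sqlhelm   # each upgrade creates a new revision that can be rolled back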

The result of the initial deployment is as follows:

LAST DEPLOYED: Mon Dec 17 23:23:26 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
sqlhelm-dbi-mssql-linux-app       Bound   pvc-5faffb52-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-backup    Bound   pvc-5fb0c43a-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-data      Bound   pvc-5fb32657-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-tempdb    Bound   pvc-5fb680fe-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s
sqlhelm-dbi-mssql-linux-translog  Bound   pvc-5fbb9350-024a-11e9-bd56-00155d0013d3  5Gi       RWO           hostpath      8m57s

==> v1/Service
NAME                     TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)         AGE
sqlhelm-dbi-mssql-linux  LoadBalancer  10.99.4.205  localhost    1451:32569/TCP  8m57s

==> v1beta2/Deployment
NAME                     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
sqlhelm-dbi-mssql-linux  1        1        1           1          8m57s

==> v1/Pod(related)
NAME                                      READY  STATUS   RESTARTS  AGE
sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr  1/1    Running  0         8m56s

==> v1/Secret
NAME                                 TYPE    DATA  AGE
sqlhelm-dbi-mssql-linux-user-secret  Opaque  1     8m57s
sqlhelm-dbi-mssql-linux-sa-secret    Opaque  1     8m57s

 

The output above provides a picture of the deployed components and their status, including the persistent volume claims, my SQL Server pod, the service that exposes the SQL Server port and the K8s secrets for the sa and my “application” user passwords. This picture is available at any moment by executing the following command:

$ helm status sqlhelm

 

We may also retrieve the list of existing releases with the following helm command:

$ helm ls sqlhelm
NAME    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
sqlhelm 1               Mon Dec 17 23:23:26 2018        DEPLOYED        dbi-mssql-linux-1.0.0   1.0             default

 

It’s worth noting that each resource is identified by labels (a very powerful feature of K8s), and we may easily list the components related to my release by filtering on the corresponding label (app or release) as follows:

$ kubectl get all -l release=sqlhelm
NAME                                           READY     STATUS    RESTARTS   AGE
pod/sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr   1/1       Running   1          16h

NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/sqlhelm-dbi-mssql-linux   LoadBalancer   10.99.4.205   localhost     1451:32569/TCP   16h

NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sqlhelm-dbi-mssql-linux   1         1         1            1           16h

NAME                                                 DESIRED   CURRENT   READY     AGE
replicaset.apps/sqlhelm-dbi-mssql-linux-67c4898dfb   1         1         1         16h

 

Let’s take a look at my SQL Server pod log and focus on the different custom steps applied during the startup of the corresponding pod. All the custom steps are executed with the input values from the values.yaml file.

$ kubectl logs sqlhelm-dbi-mssql-linux-67c4898dfb-qlfgr

======= 2018-12-17 22:29:44 Configuring tempdb database files placement OK =======
======= 2018-12-17 22:29:44 Configuring max server memory =======
2018-12-17 22:29:45.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-12-17 22:29:45.03 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3840. Run the RECONFIGURE statement to install
…
======= 2018-12-17 22:29:45 Configuring max server memory OK =======
======= 2018-12-17 22:29:45 Creating login dbi_user =======
======= 2018-12-17 22:29:45 Creating login dbi_user OK =======
======= 2018-12-17 22:29:45 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
======= 2018-12-17 22:30:08 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======

 

Finally, let’s connect to my SQL Server pod with the mssql-cli utility and check whether everything is OK from a configuration perspective.
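The connection itself goes through the service exposed by the release. A quick sketch (the server address and port correspond to the LoadBalancer output shown above, and the mssql-cli client is assumed to be installed on the client machine):

$ mssql-cli -S localhost,1451 -U sa

Once connected, the first check concerns the database file placement: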

master> select name as logical_name, physical_name
....... from sys.master_files;
+----------------+-------------------------------------------------+
| logical_name   | physical_name                                   |
|----------------+-------------------------------------------------|
| master         | /u01/sqlserverdata/mssqlserver/master.mdf       |
| mastlog        | /u01/sqlserverdata/mssqlserver/mastlog.ldf      |
| tempdev        | /u03/sqlservertempdb/mssqlserver/tempdb.mdf     |
| templog        | /u03/sqlservertempdb/mssqlserver/templog.ldf    |
| tempdbdev_2    | /u03/sqlservertempdb/mssqlserver/tempdb2.ndf    |
| tempdbdev_3    | /u03/sqlservertempdb/mssqlserver/tempdb3.ndf    |
| tempdbdev_4    | /u03/sqlservertempdb/mssqlserver/tempdb4.ndf    |
| modeldev       | /u01/sqlserverdata/mssqlserver/model.mdf        |
| modellog       | /u01/sqlserverdata/mssqlserver/modellog.ldf     |
| MSDBData       | /u01/sqlserverdata/mssqlserver/MSDBData.mdf     |
| MSDBLog        | /u01/sqlserverdata/mssqlserver/MSDBLog.ldf      |
| dbi_tools      | /u01/sqlserverdata/mssqlserver/dbi_tools.mdf    |
| dbi_tools_log  | /u02/sqlserverlog/mssqlserver/dbi_tools_log.ldf |
+----------------+-------------------------------------------------+

 

The database file placement meets my flexible architecture requirements. The DMK maintenance tool is also deployed correctly, with the dbi_tools database as shown below:

master> use dbi_tools;
Commands completed successfully.
Time: 0.253s
dbi_tools> select name as table_name
.......... from sys.tables;
+-----------------------------------+
| table_name                        |
|-----------------------------------|
| dbi_maintenance_task_logs         |
| dbi_maintenance_task_details_logs |
| dbi_maintenance_configuration     |
| __RefactorLog                     |
+-----------------------------------+

 

The sa and dbi_user (the default name in my template) logins are available for sysadmin and regular user connections respectively.

master> select name AS login_name
....... from sys.server_principals
....... where type = 'S' and name not like '##%##';
+--------------+
| login_name   |
|--------------|
| sa           |
| dbi_user     |
+--------------+

 

The deployment completed successfully! This was a brief overview of Helm capabilities with SQL Server; other write-ups will come soon!
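And when the environment is no longer needed, all the resources created by the release can be removed in one shot (Helm 2 syntax; the --purge flag also frees the release name for reuse):

$ helm delete sqlhelm --purge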

Happy deployment!

By David Barbarin