Jenkins as code, part 2: Setting up the Jenkins job

This is the follow-up to the previous blog post about Jenkins as code, which you can find here. In the previous post we created a Docker image containing Jenkins and the files needed to install the plugins and configure Jenkins via several yaml files. Those yaml configuration files are used by the configuration-as-code plugin to configure the Jenkins environment exactly the way we want it, which means that no manual changes are needed in the UI.

As mentioned in the first blog post:

Before we do anything, I just want to remind you that this is just one way to achieve a Jenkins as code setup. It is not the only or the best way, it is just one way. These blog posts and the code in the GitHub repository will help you kickstart your own setup, but that does not mean you can simply run it on a production environment and blame me if something does not work. Throughout the blog posts I will explain how I did things, so you can redo it all yourself (and compare it with the code in the GitHub repository).

Seed All DSL Jobs

In this blog post we continue with the job-dsl.yaml file. This is the yaml file that makes sure that, once Jenkins is running, a "Seed All DSL jobs" job is configured which loads groovy files from a specific directory in a git repository. So let's continue with these groovy files. Each one basically consists of 2 parts: we set some variables, and we have several classes and functions that generate the jobs and list views.

The top part of the example.groovy file contains some variables we need to configure.

import groovy.transform.Field

// The Jenkins credential used for authenticating against the Git server
@Field String jenkinsCredentialId = "SSH_GIT_KEY"
// The "directory"/"tab" under which all jobs of this group are created
@Field String basePath = 'example'
// Poll the SCM for changes every 5 minutes
@Field String defaultPollingScm = 'H/5 * * * *'

JobConstructor[] jobList = [
        [
                "example-repo",
                "https://bitbucket.org/wernerdijkerman/this-is-some-test.git",
                defaultPollingScm
        ]
]

The first one is easy: it is the name of the Jenkins credential that we use for authenticating against the Git server. The basePath is the value used for creating a "directory" and a specific "tab" with this name (see the screenshot). Then we have defaultPollingScm, with which we configure each job to check for changes once every 5 minutes. Ideally there would be webhooks configured on the Git server, but that only works if Jenkins is reachable from the Git server (which in my situation is not the case).

The jobList is basically a list of all the repositories that belong to this "example" group. For my Terraform repositories, I would have a file named terraform.groovy with basePath set to 'terraform' and all Terraform-related repositories configured in the jobList variable. If you need to add (or remove) a (Terraform) repository, you update this jobList variable by adding or removing the repository information. Once the change is merged into main (or master), the new job is added (or deleted) within a maximum of 15 minutes, because the Seed DSL job has run.
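
As an illustration, the top of such a terraform.groovy file could look like the following sketch (the repository names and URLs are made up):

import groovy.transform.Field

@Field String jenkinsCredentialId = "SSH_GIT_KEY"
@Field String basePath = 'terraform'
@Field String defaultPollingScm = 'H/5 * * * *'

JobConstructor[] jobList = [
        [
                "terraform-networking",
                "https://bitbucket.org/example/terraform-networking.git",
                defaultPollingScm
        ],
        [
                "terraform-dns",
                "https://bitbucket.org/example/terraform-dns.git",
                defaultPollingScm
        ]
]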

The rest of the file contains the functions that actually create the jobs and list views. On https://jenkinsci.github.io/job-dsl-plugin/ you can find all the available functions (and their parameters) that you can use to make it your own.
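
To give an idea of what that looks like, here is a minimal, simplified sketch (not the actual code from the repository) that creates a folder for the basePath and generates a pipeline job per repository. It assumes each jobList entry is a plain [name, url, pollingSchedule] list; the real file wraps these in the JobConstructor class and also creates the list views.

// Create the folder (the "directory"/"tab") for this group of jobs
folder(basePath)

jobList.each { repoName, repoUrl, polling ->
    // One pipeline job per repository, using the Jenkinsfile from the repository itself
    pipelineJob("${basePath}/${repoName}") {
        definition {
            cpsScm {
                scm {
                    git {
                        remote {
                            url(repoUrl)
                            credentials(jenkinsCredentialId)
                        }
                        branch('*/main')
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
        triggers {
            scm(polling)
        }
    }
}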

Shared Library

A shared library allows you to reuse code in your Jenkinsfiles. The problem: when you have, let's say, 50 Spring Boot Java microservices, you probably have 50 Jenkinsfiles that are identical except for the name of the microservice. That is a lot of duplicate code across the various repositories, especially if these Jenkinsfiles are really large. So the goal is to create small(er) Jenkinsfiles, as we still need a Jenkinsfile in each repository. And because the Jenkinsfiles are smaller, we can provide arguments that the shared library uses to generate the right pipeline.

This is what our Jenkinsfile looks like for our example service:

@Library('djwasabi') _
def run = new com.djwasabi.common.examplePipeline()
def NAME = "Wasabi"
run.pipeline('test-repo', NAME)

With the first line we specify which shared library we want to use. Go to "Manage", "Configure System" and look for "Global Pipeline Libraries". If you did a docker compose up from my repository, you will see that there is a shared library named "djwasabi". This shared library comes from a git repository and uses the master branch as its version (you could also specify a tag, another branch or even a git commit).
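
The version can also be pinned from the Jenkinsfile itself by appending it to the library name; for example (the tag name below is made up):

@Library('djwasabi@v1.0.0') _   // use the library at tag 'v1.0.0' instead of the default configured in Jenkins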

To understand the 2nd line, go to https://bitbucket.org/wernerdijkerman/jenkins-shared-library/src/master/ (or look in the "library" folder of the GitHub repository provided earlier). You will see the specific directory structure that Groovy uses for packages, and in it an examplePipeline.groovy file, which is loaded to create a new Groovy object.
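
Shared libraries follow the standard layout in which classes live under src/ in directories matching their package name, so for the com.djwasabi.common package the relevant part of the repository looks roughly like this:

src/com/djwasabi/common/examplePipeline.groovy
src/com/djwasabi/common/workers/Command.groovy
src/com/djwasabi/common/workers/Git.groovy
src/com/djwasabi/common/workers/ResultsGetter.groovy

The examplePipeline.groovy file is the class we instantiate in the Jenkinsfile; the classes under workers/ are the helpers it imports.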

On the 3rd line we set a variable named NAME to "Wasabi". With the 4th line we execute the function named 'pipeline' inside the examplePipeline.groovy file. This function accepts multiple parameters; here we pass two of them: the job name 'test-repo' and our NAME variable.

Let us take a look at the examplePipeline.groovy file:

package com.djwasabi.common

import com.djwasabi.common.workers.*
import groovy.transform.Field

def pipeline(jobName, name = "world", agentNode = "worker") {
    def command = new Command(this)
    def git = new Git(this)
    def resultsGetter = new ResultsGetter(this)

    node(agentNode) {
        try {
            def lastResult = currentBuild.rawBuild.getPreviousBuild()?.result

            stage("Checkout") {
                git.checkout()
                tagged = git.isCurrentCommitAlreadyTagged()
            }
            if (!tagged || hudson.model.Result.SUCCESS != lastResult) {
                stage("Run command") {
                    command.echo("hello " + name + " via job " + jobName)
                }
            } else {
                resultsGetter.repeatPreviousBuildResult(currentBuild)
            }
        } catch (all) {
            stage('Destroy it') {
                command.echo('lets run when things go wrong.')
            }
        }
    }
}

As you can see in the examplePipeline.groovy file, the "pipeline" function accepts multiple arguments. The first one, "jobName", has no default and is therefore always required; the other 2 have defaults ("world" and "worker" respectively). Did you notice that the "agentNode" parameter defaults to "worker", which is also the label we used in the configuration for starting the Docker container in the previous blog post? So if you write your own shared library, you can create one or more of these node definitions and, based on the parameters set in the Jenkinsfile, execute different actions.
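
For example, if your setup had a second agent template with a different label (the label below is made up), the Jenkinsfile could simply pass it as the third argument:

run.pipeline('test-repo', NAME, 'big-worker')   // 'big-worker' is a hypothetical agent label

The helper classes from the com.djwasabi.common.workers package are not shown in this post, but they follow the common shared-library pattern of passing the pipeline script (this) into the class so it can call regular pipeline steps. A minimal sketch of what such a Command class could look like (the real implementation lives in the shared-library repository):

package com.djwasabi.common.workers

class Command implements Serializable {

    def steps   // the pipeline script, passed in via 'new Command(this)'

    Command(steps) {
        this.steps = steps
    }

    // Delegates to the regular 'echo' pipeline step
    def echo(String message) {
        steps.echo(message)
    }
}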

When we run the job in Jenkins, you will see something like the above appear. Almost at the end of the log, you will see the following line:
hello Wasabi via job test-repo

That line is logged by this piece of code in the examplePipeline.groovy file:

                stage("Run command") {
                    command.echo("hello " + name + " via job " + jobName)
                }

Summary

In these 2 parts I have shown you the possibilities of a Jenkins as code approach. With this approach everything is kept in code, and when someone wants to make a change in Jenkins, that person has to make the change in the code. When your Git server is set up properly, this person creates a branch, makes the changes and opens a Pull Request before it is merged into master or main. With everything in code, you have an audit log of what has been changed (and by whom), and it forces you to have a discussion when changes are needed (or when something went wrong).

What I personally also like: because everything is in code, you can deploy a Jenkins server somewhere else and use it almost immediately. Especially when you make the Configuration as Code yaml files configurable with environment variables, you can run multiple Jenkins setups from the same code base. And because everything is in code, you don't have to worry about backing up Jenkins (it is already on the Git server and part of its backup mechanism).

I hope you enjoyed it and that it showed you that you too can automate the configuration of any Jenkins environment you want to run in your organisation. If you have any improvements and/or remarks, please let me know.

May the force (or source) be with you!


Jenkins as code, part 1: Setting up Jenkins in Docker

I hate doing things manually, I really do.

Log into a UI, click here and there to get something created or configured: it is error prone (you can easily forget something or make a typo) and it is tedious and/or boring (especially if you need to do it on a routine basis). If you can change something in a UI, then someone else can change it as well, even without you knowing it has changed (or vice versa ;-)). So doing things manually is not the way forward and we should focus on automation. Automation is one of the pillars of DevOps, so we should always automate things, right?

What people probably do not know is that Jenkins is a tool that can be fully automated, you only have to know how. (And based on some posts on, for example, Reddit, I don't think people know that this is even possible.)

Jenkins as code

So let's dive into what I would call: Jenkins as code.

This will basically be a 2-part blog post in which we discuss the following:

  1. This part, where we create a Docker image containing Jenkins with its configuration files and plugins;
  2. The next part, where we create a shared library and use it in a Jenkinsfile, with jobs loaded via a "specifications" repository.

Before we do anything, I just want to remind you that this is just one way to achieve a Jenkins as code setup. It is not the only or the best way, it is just one way, just like there are 1000 roads that lead to Rome. These blog posts and the code in the GitHub repository will help you kickstart your own setup, but that does not mean you can simply run it on a production environment and blame me if something does not work. I am not a Groovy expert and can only do some basic things, so don't expect a new world wonder. Throughout the blog posts I will explain how I did things, so you can redo it all yourself (and compare it with the code in the GitHub repository) and build it on your own terms.

In both blog posts you will see a lot of code and commands. No worries, all code is available in my GitHub repository: https://github.com/dj-wasabi/blog-jenkins-as-code. So let's start with the first part: setting up Jenkins.

Docker(file)

We will create a Docker image based on the "jenkins/jenkins:lts" Docker image, install Docker in it and include the configuration-as-code plugin together with several yaml files that are used for configuring Jenkins. As part of the GitHub repository we also have a docker-compose.yaml file which we can use to boot our setup.

Let's start with the Dockerfile.

FROM jenkins/jenkins:lts

USER root
RUN groupadd docker && \
    curl -fsSL https://get.docker.com -o get-docker.sh && \
    sh get-docker.sh && \
    usermod -aG root jenkins && \
    usermod -aG docker jenkins
USER jenkins

Let's discuss this first: the "jenkins/jenkins:lts" Docker image does not contain Docker itself, so we need to install it and make sure that the "jenkins" user is part of the "root" and "docker" groups. We need Docker in this image, as each Jenkins job will run in its own Docker container.

ENV CASC_JENKINS_CONFIG=/var/jenkins_home/casc

We need to set an environment variable named CASC_JENKINS_CONFIG, with which we basically tell Jenkins where the configuration-as-code yaml files can be found.

COPY casc/ /var/jenkins_home/casc
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt

Then we copy the contents of the casc directory to the directory mentioned earlier and place the file with all of our plugins in a specific location. Finally we run the install-plugins.sh script to download and install all the plugins we need in our setup.

And that is our Dockerfile, easy right? It allows us to build a Jenkins Docker image with all of our files and configuration, resulting in the Jenkins environment we want to have. We can deploy this on a host running Docker or even make some additional changes to make it work on Kubernetes.

Plugins

Let's go to the plugins.txt file, as this one is a bit easier to explain than the casc files.

We need to create a plugins.txt file that contains all of the plugins we want to use, so how do we do that? I manually (oh yes, sorry! :)) started a Jenkins container, followed the installation steps and picked several plugins to install along the way. When Jenkins was running and I had finished installing all plugins (don't forget the "configuration-as-code" plugin), I went to "Manage" and clicked on "Script Console". There you see a text field to execute Groovy scripts, and I used the following script:

def plugins = jenkins.model.Jenkins.instance.getPluginManager().getPlugins()
plugins.each {println "${it.getShortName()}:latest"}

This "script" provides an overview of all plugins that are currently installed in Jenkins. I have used "latest" as the version of each plugin, which is fine for demo purposes, but you could also replace "latest" with ${it.getVersion()}, which shows the actual version of the installed plugin. I would suggest using the pinned versions in the plugins.txt file: it helps you in the future, because when someone creates a PR you can see that a plugin version has been updated, which you won't see when everything uses "latest".
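
The pinned-version variant of that script then looks like this:

def plugins = jenkins.model.Jenkins.instance.getPluginManager().getPlugins()
plugins.each {println "${it.getShortName()}:${it.getVersion()}"}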

Then you hit the "Run" button and some output will appear. Select the output, place it in the plugins.txt file and you are done (I would also sort the contents of the file, so all plugins are ordered alphabetically).

Configuration as code

Let us first explain how to get a yaml file. Go to "Manage" and you will see "Configuration as Code" in the "System Configuration" lane (click on it please). There is a button called "Download Configuration" which downloads the yaml file, and with "View Configuration" you can view the yaml file in your browser. Once you have downloaded the yaml configuration file, you can use it in your Docker image by placing it in the casc/ directory. I would suggest splitting it into separate files, so you won't have 1 large file but smaller ones, each with a specific set of configuration. For example, create a credentials.yaml file containing all Jenkins credentials.

But before you commit all your changes, you can also make some values configurable with environment variables, see the following:

  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
      - id: "${JENKINS_ADMIN_USERNAME:-admin}"
        name: "${JENKINS_ADMIN_NAME:-Administrator}"
        password: "${JENKINS_ADMIN_PASSWORD}"

This piece of the configuration is responsible for creating an admin user. I don't want to hardcode the username, and definitely not the password, in this file, so I use environment variables for both. The same goes for the credentials that Jenkins uses; see the following example of the Jenkins "credentials":

credentials:
  system:
    domainCredentials:
      - credentials:
          - basicSSHUserPrivateKey:
              scope: GLOBAL
              id: "SSH_GIT_KEY"
              username: "git"
              description: "SSH Credentials for jenkins"
              privateKeySource:
                directEntry:
                  privateKey: ${JENKINS_SSH_GIT_KEY}

With the above credential configuration I won’t have to hardcode the SSH Private key in the Docker image, but can use it as an environment variable. Nice right? 🙂

When everything is done via code, we can also configure the security matrix right away, defining what users can and, most importantly, can't do in Jenkins. As my Jenkins runs on premise and does not allow traffic from outside the environment, I will allow people to start jobs (in case they don't want to wait for the trigger). So anonymous users get read, build and cancel rights on jobs. Why go through all the trouble of letting people authenticate against some source, just so we can see that this person started or cancelled a job? Most importantly, they can't change anything unless they know the admin password (and even then, the change is undone when Jenkins is restarted! :)).

  authorizationStrategy:
    globalMatrix:
      permissions:
      - "Job/Build:anonymous"
      - "Job/Cancel:anonymous"
      - "Job/Read:anonymous"
      - "Overall/Administer:admin"
      - "Overall/Read:anonymous"

Now we are able to fully do Jenkins as code: we store the yaml files in the casc/ directory and they are loaded when Jenkins starts. But once Jenkins is running, we also need to load the jobs from somewhere. We do this with a "Seed Job", which you can find in the "dsl-jobs.yaml" file in the casc/ directory of the GitHub repository.

            git {
                remote { 
                    url "${JENKINS_JOB_DSL_URL}"
                    credentials 'SSH_GIT_KEY' 
                }
                branch '*/main'
              }
        }
        triggers {
            scm('H/15 * * * *')
        }
        steps {
          dsl {
            external('${JENKINS_JOB_DSL_PATH:-jobs}/*.groovy')
            removeAction('DELETE')
          }
        }
      }

When Jenkins is started, the "Seed all DSL jobs" Jenkins job is created automatically. What it does is basically the following (the snippet above is incomplete; see GitHub for the complete file):

  1. We use the credential 'SSH_GIT_KEY' to check out the repository mentioned in ${JENKINS_JOB_DSL_URL} (see the docker-compose.yaml file);
  2. We use the 'main' branch;
  3. The job is executed every 15 minutes;
  4. In the directory named ${JENKINS_JOB_DSL_PATH} we find the groovy files, and if Jenkins has jobs that are not configured in these groovy files, they are deleted from Jenkins.

Before we finalise the Configuration as Code part, we need to discuss one last file (and action). When the Jenkins server is running, each job runs in its own Docker container: the Jenkins server starts a Docker container and performs all of its actions inside that container. The configuration for this is as follows:

jenkins:
  clouds:
    - docker:
        name: "docker"
        dockerApi:
          dockerHost:
            uri: "${DOCKER_HOST:-unix:///var/run/docker.sock}"
        templates:
        - connector:
            attach:
              user: "jenkins"
          dockerTemplateBase:
            bindAllPorts: true
            image: "jenkins/agent:latest"
            privileged: true
            environment:
              - "TZ=Europe/Amsterdam"
          instanceCapStr: "99"
          labelString: "worker"
          name: "worker"
          remoteFs: "/home/jenkins/agent"

You can also find this in the file "docker.yaml". Here we have defined 1 template, named "worker", using the "jenkins/agent:latest" Docker image. As you know, this is just an example, so you can modify it and use a Docker image that suits your needs. That Docker image should contain all the tools needed to run your jobs, so "jenkins/agent:latest" might not fit your setup. And note that since the "templates" key is a list, you can add many more templates, each with a unique name, settings and Docker image. For dockerHost.uri you see the use of the environment variable "DOCKER_HOST". This is a variable we use in the docker-compose.yaml file; if we don't provide one, the default unix:///var/run/docker.sock is used.

You can go to "Manage", "System Configuration" and scroll all the way down until you see "Cloud". It provides a link, and when you click on it you get the page where you can configure the "Cloud" settings. When you make changes, don't forget to export the yaml file via the "Configuration as Code" page mentioned earlier.

Build and ship it

So far we have discussed the basics of how we get our configuration, so let's build a Docker image. For the rest of this blog post I will assume you have the same layout as my GitHub repository. Go to the directory that contains the "Dockerfile", "plugins.txt", the "docker-compose.yaml" file and the "casc/" directory. Here we run the docker build command to build the new Docker image.

cd server
docker build -t jenkins-as-code . --pull

I named it 'jenkins-as-code', which works fine locally; if you want to push it to a Docker registry, you should prefix it with the correct registry name. If you prefix it with a registry or name it differently, don't forget to update the docker-compose.yaml file with the new name. The --pull flag is there so that, even if you already have a "jenkins/jenkins:lts" Docker image locally, you will get the latest one.

I think it is built by now; otherwise we will wait a minute before we continue.

sleep 60 🙂

Ok, the Docker image is built and we can start it. If you look at the docker-compose.yaml file, you will notice 2 'services':

  1. socat;
  2. jenkins (the image we just built and want to start).

Let's describe the 'socat' service first. The 'socat' service makes sure that the docker.sock file from our host can be used by Jenkins for starting the agents. If we use that socket from the Jenkins container directly, without the 'socat' service, we get permission denied errors and Jenkins cannot start any new Docker containers (I am doing this on a Mac; I don't think people running it on Linux hosts will have this issue).

The Jenkins service has several environment variables set. Before we start everything, we first need to create an environment variable that contains the contents of a private SSH key. I used the following command for that:

export EXPORTED_PASSWORD=$(cat ~/.ssh/wd_id_rsa)

So EXPORTED_PASSWORD contains the private SSH key, and it is used in Jenkins as the SSH_GIT_KEY credential in multiple places. Also worth mentioning is the JENKINS_ADMIN_PASSWORD environment variable, which is exactly what it says: the password for the admin user. If you want to use something else, now is the moment to change it.

We will start it with the following command:

docker compose up -d

I prefer starting it in the background, which is why I added the -d argument. Once it has booted, open your favourite browser and go to http://localhost:8080 and you will see something like the following:

So that is it for now. We started our newly built Docker image containing Jenkins, with the plugins we need in our environment and our own configuration!

We will go into the "Seed All DSL jobs" job in the next part of the blog post. So stay tuned! 🙂

You can find the 2nd blog post here.

Using Test Kitchen with Docker and serverspec to test Ansible roles


Let's write some tests for our Ansible roles. By testing our roles, we can validate their quality. In this blog item we use the Test Kitchen framework and serverspec for the Ansible role dj-wasabi.zabbix-agent.

With Test Kitchen we can start a Vagrant box or a Docker image, and our Ansible role is executed on that instance. There is a whole list of Test Kitchen drivers, which can be found here (if you have a more recent, up-to-date list, please let me know and I will update the link). When the Ansible role has been applied, serverspec is executed so we can verify whether the installation and configuration were done correctly. Ideally you want to execute this every time a change is made to the role, so the best way is to do everything with Jenkins.

We will make use of the following tools:

  • Test Kitchen
  • docker
  • Serverspec

Installation of Jenkins is out of scope for this blog item, as is the installation of Docker. You'll need to check their websites for installing Jenkins and Docker on your machine.

Before we even continue, we need to install test kitchen. We do this with the following command:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:02:01 - Thu Aug 20)
 (master) > gem install test-kitchen

This is easy; it only takes around 10 seconds to install. We continue with the Test Kitchen setup by executing the next command:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:03:05 - Thu Aug 20)
 (master) > kitchen init --create-gemfile --driver=kitchen-docker
      create  .kitchen.yml
      create  chefignore
      create  test/integration/default
      create  .gitignore
      append  .gitignore
      append  .gitignore
      create  Gemfile
      append  Gemfile
      append  Gemfile
You must run `bundle install' to fetch any new gems.

The command creates some default files and directories, almost all of which we will be using. We remove the chefignore file, as we don't use Chef in this case.

We update the Gemfile by adding the next line at the end of the file:

gem "kitchen-ansible"

Now we run the "bundle install" command; it installs the kitchen-docker and kitchen-ansible gems with their dependencies.

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:03:55 - Thu Aug 20)
 (master) > bundle install
Using multipart-post 2.0.0
Using faraday 0.9.1
Using highline 1.7.3
Using thor 0.19.1
Using librarian 0.1.2
Using librarian-ansible 1.0.6
Using mixlib-shellout 2.1.0
Using net-ssh 2.9.2
Using net-scp 1.2.1
Using safe_yaml 1.0.4
Using test-kitchen 1.4.2
Using kitchen-ansible 0.0.23
Using kitchen-docker 2.3.0
Using bundler 1.6.2
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:03:59 - Thu Aug 20)
 (master) > 

We can also set version restrictions in this file; an example looks like this:

gem 'test-kitchen', '>= 1.4.0'
gem 'kitchen-docker', '>= 2.3.0'
gem 'kitchen-ansible'

With the above example, you'll install test-kitchen version 1.4.0 or higher and kitchen-docker version 2.3.0 or higher. But for now, we use it without version restrictions.

We now have a basic .kitchen.yml file, like this:

---
driver:
  name: docker

provisioner:
  name: chef_solo

platforms:
  - name: ubuntu-14.04
  - name: centos-7.1

suites:
  - name: default
    run_list:
    attributes:

We are changing the file, so it will look like this:

---
driver:
  name: docker
  provision_command: sed -i '/tsflags=nodocs/d' /etc/yum.conf

provisioner:
  name: ansible_playbook
#  ansible_yum_repo: "http://mirror.logol.ru/epel/6/x86_64/epel-release-6-8.noarch.rpm"
  hosts: localhost
#  requirements_path: requirements.yml

platforms:
  - name: centos-6.6

verifier:
  ruby_bindir: '/usr/bin'

suites:
  - name: default

What does it do?

We use the docker driver to run our playbook and tests. It starts a Docker container, executes the playbook and after that runs the serverspec tests to validate that everything works as expected.

We use "ansible_playbook" as the provisioner. I have commented out 2 lines in this .kitchen.yml file, because my Ansible role doesn't have any dependencies. With "ansible_yum_repo" we can point to, for example, the epel-release.rpm file. When the Docker container is started, it downloads and installs this EPEL repository file, so if the role needs packages from EPEL it will succeed. The same goes for "requirements_path": this is a yml file which can be used for downloading the role dependencies.

The Zabbix Agent role doesn't have any dependencies, but the Zabbix Server role has 3. You could use it like this:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:07:22 - Thu Aug 20)
 (master) > cat requirements.yml
---
- src: geerlingguy.apache
- src: geerlingguy.mysql
- src: galaxyprojectdotorg.postgresql

There is only 1 platform in this test, centos-6.6, and the same goes for the "suites": there is only one. You can specify more platforms and suites. The following example is the "suites" part of the dj-wasabi.zabbix-server role:

suites:
  - name: zabbix-server-mysql
    provisioner:
        name: ansible_playbook
        playbook: test/integration/zabbix-server-mysql.yml
  - name: zabbix-server-pgsql
    provisioner:
        name: ansible_playbook
        playbook: test/integration/zabbix-server-pgsql.yml

There are 2 suites, each with its own playbook. In the above case there is a playbook that is executed with MySQL as the backend and a playbook with PostgreSQL as the backend. Both playbooks are executed in their own Docker instance. So it starts with, for example, the 'zabbix-server-mysql' suite, and when that finishes successfully, it continues with the 'zabbix-server-pgsql' suite.

On the following pages you can find more information on the configuration of the Test Kitchen, Ansible and Docker parts:

We now create our playbook:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:07:25 - Thu Aug 20)
 (master) > vi test/integration/default.yml
--- 
- hosts: localhost
  roles:
    - role: ansible-zabbix-agent

We have now configured the playbook that will be executed against the Docker container on 'localhost'.

But we are not there yet; we also need to create some serverspec tests. With these tests we validate whether the execution of the Ansible role was successful.

First we have to create the following directory: test/integration/default/serverspec/localhost

We create the spec_helper.rb file:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:07:41 - Thu Aug 20)
 (master) > vi test/integration/default/serverspec/spec_helper.rb
require 'serverspec'
set :backend, :exec

We will create the serverspec file:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:08:22 - Thu Aug 20)
 (master) > vi test/integration/default/serverspec/localhost/ansible_zabbix_agent_spec.rb

The name is not that important, but it has to end with _spec.rb. We add the following content to the file:

require 'serverspec'
require 'spec_helper'

describe 'Zabbix Agent Packages' do
    describe package('zabbix-agent') do
        it { should be_installed }
    end
end

describe 'Zabbix Agent Configuration' do
    describe file('/etc/zabbix/zabbix_agent.conf') do
        it { should be_file}
        it { should be_owned_by 'zabbix'}
        it { should be_grouped_into 'zabbix'}
    end
end

The first "describe" opens a block labelled 'Zabbix Agent Packages'. If you have some rspec experience with Puppet, you would for example add params like this: let(:params) { {:server => '192.168.1.1'} }. In our case it is nothing more than printing the text.

Now we proceed with the 2nd describe: package('zabbix-agent'). Everything up to the first "end" relates to this package. For the package we have only 1 check: it { should be_installed }. So when this spec file is executed, it checks whether the 'zabbix-agent' package is installed. If not, you'll see an error message.

We proceed with the 4th describe, file(‘/etc/zabbix/zabbix_agent.conf’). We have several checks for this file:

  • It should be a file (you could also check for a link, directory or even a device);
  • The owner of the file needs to be the user zabbix;
  • The group of the file needs to be the group zabbix.

There are a lot of other options and checks you can use in your spec file, but explaining them all here would make this a really long post.

The only thing left to do is to run kitchen. So we execute the following command:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:09:13 - Thu Aug 20)
 (master) > kitchen test

If everything goes fine, you’ll see a lot of output. At the end of the run, serverspec will be executed:

       Zabbix Agent Packages
         Package "zabbix-agent"
           should be installed
      
       Zabbix Agent Configuration
         File "/etc/zabbix/zabbix_agent.conf"
           should be file (FAILED - 1)
           should be owned by "zabbix" (FAILED - 2)
           should be grouped into "zabbix" (FAILED - 3)
      
       Failures:
      
         1) Zabbix Agent Configuration File "/etc/zabbix/zabbix_agent.conf" should be file
            Failure/Error: it { should be_file}
              expected `File "/etc/zabbix/zabbix_agent.conf".file?` to return true, got false
              /bin/sh -c test\ -f\ /etc/zabbix/zabbix_agent.conf
             
            # /tmp/verifier/suites/serverspec/localhost/ansible_zabbix_agent_spec.rb:12:in `block (3 levels) in <top (required)>
      
         2) Zabbix Agent Configuration File "/etc/zabbix/zabbix_agent.conf" should be owned by "zabbix"
            Failure/Error: it { should be_owned_by 'zabbix'}
              expected `File "/etc/zabbix/zabbix_agent.conf".owned_by?("zabbix")` to return true, got false
              /bin/sh -c stat\ -c\ \%U\ /etc/zabbix/zabbix_agent.conf\ \|\ grep\ --\ \\\^zabbix\\\$
             
            # /tmp/verifier/suites/serverspec/localhost/ansible_zabbix_agent_spec.rb:13:in `block (3 levels) in <top (required)>'
      
         3) Zabbix Agent Configuration File "/etc/zabbix/zabbix_agent.conf" should be grouped into "zabbix"
            Failure/Error: it { should be_grouped_into 'zabbix'}
              expected `File "/etc/zabbix/zabbix_agent.conf".grouped_into?("zabbix")` to return true, got false
              /bin/sh -c stat\ -c\ \%G\ /etc/zabbix/zabbix_agent.conf\ \|\ grep\ --\ \\\^zabbix\\\$
             
            # /tmp/verifier/suites/serverspec/localhost/ansible_zabbix_agent_spec.rb:14:in `block (3 levels) in <top (required)>'
      
       Finished in 0.11823 seconds (files took 0.37248 seconds to load)
       4 examples, 3 failures

Whoops, it seems that my serverspec file was expecting something else. I made a typo: the file should be /etc/zabbix/zabbix_agentd.conf, with a d! 🙂

With the "kitchen list" command we can see that the Docker instance has been created; the "Last Action" is "Set Up":

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:14:51 - Thu Aug 20)
 (master) > kitchen list
Instance          Driver  Provisioner      Verifier  Transport  Last Action
default-centos-66  Docker  AnsiblePlaybook  Busser    Ssh        Set Up

There are 2 ways to proceed when we fix the typo:

  • We run 'kitchen test' again, but that destroys our Docker instance and starts all over from the beginning;
  • We run 'kitchen verify', which only runs the serverspec tests (a lot quicker!).

We use the “kitchen verify” command:

wdijkerman@curiosity [ ~/git/ansible/ansible-zabbix-agent ] (13:15:12 - Thu Aug 20)
 (master) >  kitchen verify
-----> Starting Kitchen (v1.4.2)
-----> Verifying <default-centos-66>...
$$$$$$ Running legacy verify for 'Docker' Driver
       Preparing files for transfer
       Removing /tmp/verifier/suites/serverspec
       Transferring files to <default-centos-66>
-----> Running serverspec test suite
       /opt/rh/ruby193/root/usr/bin/ruby -I/tmp/verifier/suites/serverspec -I/tmp/verifier/gems/gems/rspec-support-3.3.0/lib:/tmp/verifier/gems/gems/rspec-core-3.3.2/lib /tmp/verifier/gems/bin/rspec --pattern /tmp/verifier/suites/serverspec/\*\*/\*_spec.rb --color --format documentation --default-path /tmp/verifier/suites/serverspec
      
       Zabbix Agent Packages
         Package "zabbix-agent"
           should be installed
      
       Zabbix Agent Configuration
         File "/etc/zabbix/zabbix_agentd.conf"
           should be file
           should be owned by "zabbix"
           should be grouped into "zabbix"
      
       Finished in 0.10183 seconds (files took 0.33263 seconds to load)
       4 examples, 0 failures
      
       Finished verifying <default-centos-66> (0m1.12s).
-----> Kitchen is finished. (0m1.17s)

As you can see, we only ran the serverspec tests and everything is ok: 4 examples with 0 failures.
We can continue creating serverspec tests and rerun the "kitchen verify" command until we are satisfied.

In theory, you create the tests first, before creating the playbook. In practice ….

Finally, when you are ready, you create a Jenkins job that polls your git repository for changes. The job has 2 "Execute shell" build steps:

  • Install bundler and test-kitchen and run bundle install
  • Execute kitchen test

The gem installation step:

gem install bundler --no-rdoc --no-ri
gem install test-kitchen --no-rdoc --no-ri
bundle install

And the 2nd build step:

kitchen test

Whoot! 🙂