In this post, I’d like to share a quick setup that uses Docker containers for Ansible testing.

I wanted to create a small lab to test my Ansible scripts. The natural option that came to my mind was to create some virtual machines with VirtualBox, using Vagrant to automate that process. However, as I’m using a MacBook with an Apple M1 chip, VirtualBox is not supported on that hardware. I therefore decided to explore using containers instead of virtual machines.

The first step is to look for an existing Docker image that already has Ansible and OpenSSH installed. There is no need to reinvent the wheel in the age of space travel, so my colleague Jean-Philippe Clapot (our Docker guru!) quickly spotted the perfect image for my need: https://hub.docker.com/r/jcpowermac/alpine-ansible-ssh

The launch

As Docker and Docker Desktop are already installed on my laptop, I just have to pull that image:

% docker pull jcpowermac/alpine-ansible-ssh

Then I can run some containers:

% docker run --name=controller --platform linux/amd64 -d jcpowermac/alpine-ansible-ssh
% docker run --name=target1 --platform linux/amd64 -d jcpowermac/alpine-ansible-ssh
% docker run --name=target2 --platform linux/amd64 -d jcpowermac/alpine-ansible-ssh

The idea for now is to have one container named “controller” on which I’ll write and run my Ansible scripts. The two other containers, “target1” and “target2”, will be the target hosts of those scripts. Using containers instead of virtual machines gives me plenty of spare resources to run many more of those bad boys when doing more advanced Ansible testing. Note the --platform linux/amd64 parameter, which specifies the platform to use on my Apple M1 chip. Without this parameter you get a warning, but the container is properly created anyway.

All three containers are now up and running:

% docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS          PORTS     NAMES
786309da3987   jcpowermac/alpine-ansible-ssh   "/bin/ash -c '/usr/s…"   5 seconds ago    Up 4 seconds              target2
abe685b1f1ab   jcpowermac/alpine-ansible-ssh   "/bin/ash -c '/usr/s…"   28 seconds ago   Up 27 seconds             target1
e063d4e9267d   jcpowermac/alpine-ansible-ssh   "/bin/ash -c '/usr/s…"   44 seconds ago   Up 43 seconds             controller
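
Since the image is built for amd64, these containers run emulated on the Apple M1 chip. If you want to double-check that, a quick command like the one below should report x86_64:

% docker exec controller uname -m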

The network

As this setup is only for temporary tests, I didn’t create a dedicated network for those containers. They all use the default bridge network in the default range 172.17.0.0/16.
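
For a more permanent lab, a dedicated user-defined network would be a nice improvement, as Docker then also provides DNS resolution by container name. A minimal sketch, where the network name ansible-lab and the subnet are just examples:

% docker network create --subnet 172.20.0.0/16 ansible-lab
% docker run --name=target1 --network ansible-lab --platform linux/amd64 -d jcpowermac/alpine-ansible-ssh

With such a network, the inventory could simply reference target1 and target2 by name instead of by IP address. For this quick test, the default bridge is good enough.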

The first container, “controller”, gets the first free IP address in this range, which is 172.17.0.2/16 (172.17.0.1/16 is taken by the bridge interface). Each target then gets the next free IP address. We can check the IP addresses assigned to the containers with the one-liner below:

% for i in $(docker ps|awk '{print $1}'|tail -n +2); do docker exec $i ip a|grep 172.17;done
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
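
Another option is to ask Docker directly instead of grepping inside the containers, for example with a Go template passed to docker inspect (the container names are the ones created above):

% docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' controller target1 target2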

We can now connect to the controller container and check the Ansible version installed in this image:

% docker exec -it controller /bin/sh
/ # id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)

/ # ansible --version
ansible 2.7.2
  config file = None
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.15 (default, Aug 16 2018, 14:17:09) [GCC 6.4.0]

This Docker image comes with the user ansible already created, so let’s use it:

/ # su ansible
~/projects $ pwd
/home/ansible/projects
~/projects $ id
uid=1000(ansible) gid=1000(ansible) groups=1000(ansible)
~/projects $

We can first check the SSH connectivity from the controller to the two targets:

~/projects $ ssh 172.17.0.3
The authenticity of host '172.17.0.3 (172.17.0.3)' can't be established.
ECDSA key fingerprint is SHA256:ntBTgrAxi9bUSIb47U31BFzD4rE5ktFnwRxztqXFICE.
Are you sure you want to continue connecting (yes/no)? yes

~/projects $ ssh 172.17.0.4
The authenticity of host '172.17.0.4 (172.17.0.4)' can't be established.
ECDSA key fingerprint is SHA256:ntBTgrAxi9bUSIb47U31BFzD4rE5ktFnwRxztqXFICE.
Are you sure you want to continue connecting (yes/no)? yes

~/projects $

From the controller I can SSH to target1 and target2. My setup is now complete and I can start playing with Ansible.
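
Side note: to avoid answering the host key question for every new target, the host keys can be collected in advance. A small sketch, assuming ssh-keyscan is available in the image:

~/projects $ mkdir -p ~/.ssh
~/projects $ ssh-keyscan 172.17.0.3 172.17.0.4 >> ~/.ssh/known_hosts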

The Ansible test

The first step is to create an inventory file in the projects folder of the ansible user:

~/projects $ cat <<EOF > inventory.txt
target1 ansible_host=172.17.0.3
target2 ansible_host=172.17.0.4
EOF
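
If more containers get added to the lab later on, grouping the targets in the inventory keeps the commands short. A possible variant, where the group name targets is just an example:

~/projects $ cat <<EOF > inventory.txt
[targets]
target1 ansible_host=172.17.0.3
target2 ansible_host=172.17.0.4
EOF

The whole group can then be addressed as targets in the ansible command.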

We can then run a basic Ansible command against both of our targets:

~/projects $ ansible target* -m ping -i inventory.txt
target2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
target1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
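
Note that, depending on your shell, the target* pattern may need to be quoted so that the shell does not try to expand it itself:

~/projects $ ansible 'target*' -m ping -i inventory.txt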

All is working as expected; our test platform is now ready for more advanced Ansible testing.
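
As a first taste of that more advanced testing, here is a minimal playbook sketch that can be run from the same projects folder (the file name and the marker file path are arbitrary):

~/projects $ cat <<EOF > first-test.yml
---
- name: First test on the lab containers
  hosts: all
  tasks:
    # The file module with state touch simply creates an empty marker file
    - name: Create a marker file on each target
      file:
        path: /tmp/ansible_was_here
        state: touch
EOF
~/projects $ ansible-playbook -i inventory.txt first-test.yml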

Learn more about Ansible with our training course: https://www.dbi-services.com/en/courses/ansible-basics/