In the first part of this blog last week we saw a short introduction to setting up an Oracle infrastructure with Vagrant and Ansible. Remember, all the files for this example are available here: https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
Get the example code:
git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
If you have prepared your environment with Ansible, Vagrant and Oracle VirtualBox installed – and provided the Oracle software zip files –
then you can just start to build your test infrastructure with the simple call:
vagrant up
Cleanup is also easy: the following command stops the Vagrant machines and deletes all traces:
vagrant destroy
How does this work?
vagrant up starts Vagrant, which sets up two virtual servers using a sample box with CentOS 7.2.
When this has finished, Vagrant calls Ansible for provisioning, which configures the Linux servers, installs the Oracle software and creates your databases on the target servers in parallel.
Vagrant configuration
All the configuration for Vagrant is in one file called Vagrantfile.
I used a box with CentOS 7.2, which you can find among other Vagrant boxes here: https://atlas.hashicorp.com/search
config.vm.box = "boxcutter/centos72"
When you run vagrant up for the first time, it will download the Vagrant box:
$ vagrant up
…
Bringing machine 'dbserver1' up with 'virtualbox' provider...
Bringing machine 'dbserver2' up with 'virtualbox' provider...
==> dbserver1: Box 'boxcutter/centos72' could not be found. Attempting to find and install...
dbserver1: Box Provider: virtualbox
dbserver1: Box Version: >= 0
==> dbserver1: Loading metadata for box 'boxcutter/centos72'
dbserver1: URL: https://atlas.hashicorp.com/boxcutter/centos72
==> dbserver1: Adding box 'boxcutter/centos72' (v2.0.21) for provider: virtualbox
dbserver1: Downloading: https://atlas.hashicorp.com/boxcutter/boxes/centos72/versions/2.0.21/providers/virtualbox.box
==> dbserver1: Successfully added box 'boxcutter/centos72' (v2.0.21) for 'virtualbox'!
==> dbserver1: Importing base box 'boxcutter/centos72'...
I have chosen a private network for the virtual servers and use the vagrant-hostmanager plugin to take care of the /etc/hosts files on all guest machines (and optionally your localhost).
You can add this plugin to Vagrant with:
vagrant plugin install vagrant-hostmanager
The corresponding part in the Vagrantfile will look like this:
config.hostmanager.enabled = true
config.hostmanager.ignore_private_ip = false # include private IPs of your VMs
config.vm.hostname = "dbserver1"
config.vm.network "private_network", ip: "192.168.56.31"
SSH Configuration
The Vagrant box already comes with an ssh key configuration and – if security does not matter in your demo environment – the easiest way to configure the ssh connection to your guest nodes is to use the same ssh key for all created virtual hosts.
config.ssh.insert_key = false # use the same insecure key provided by the box for each machine
After bringing up the virtual servers you can display the ssh settings:
vagrant ssh-config
The important lines from the output are:
Host dbserver1
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile /home/user/.vagrant.d/insecure_private_key
You should be able to reach your guest server without a password as user vagrant.
Then you can switch to the user oracle (password: welcome1) or root (default password for Vagrant boxes: vagrant):
vagrant ssh dbserver1
su - oracle
Or connect directly with ssh:
ssh vagrant@127.0.0.1 -p 2222 -i /home/user/.vagrant.d/insecure_private_key
Virtual Disks
I added additional virtual disks because I wanted to separate the data file destination from the fast recovery area destination.
# attach disks only locally
if ! File.exist?("dbserver#{i}_disk_a.vdi") # create disks only once
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_a.vdi", '--size', 8192 ]
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_b.vdi", '--size', 8192 ]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_a.vdi"]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_b.vdi"]
end # create disks only once
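These raw disks are later prepared by the disk_layout role used in the playbook. A minimal sketch of what such a task file could contain, assuming the disks show up as /dev/sdb and /dev/sdc and using the mount points /u02/oradata and /u03/fra (device names and paths are illustrative, not the exact content of the repository):
# roles/disk_layout/tasks/main.yml (simplified sketch)
- name: Create a filesystem on the data disk
  filesystem: fstype=ext4 dev=/dev/sdb

- name: Mount the data file destination
  mount: name=/u02/oradata src=/dev/sdb fstype=ext4 state=mounted

- name: Create a filesystem on the recovery disk
  filesystem: fstype=ext4 dev=/dev/sdc

- name: Mount the fast recovery area
  mount: name=/u03/fra src=/dev/sdc fstype=ext4 state=mounted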
Provisioning with Ansible
At the end of the Vagrantfile, provisioning with Ansible is called.
To prevent the Ansible provisioning from starting before all servers have been set up by Vagrant, I included the condition if i == N, where N is the number of desired servers.
N = 2
(1..N).each do |i| # do for each server i
  ...
  if i == N
    config.vm.provision "ansible" do |ansible| # vm.provisioning
      #ansible.verbose = "v"
      ansible.playbook = "oracle-db.yml"
      ansible.groups = { "dbserver" => ["dbserver1","dbserver2"] }
      ansible.limit = 'all'
    end # end vm.provisioning
  end
end
Ansible Inventory
The Ansible Inventory is a collection of guest hosts against which Ansible will work.
You can either put the information in an inventory file or let Vagrant create an inventory file for you. Vagrant does this if you do not specify any inventory file.
To enable Ansible to connect to the target hosts without a password, Ansible has to know the ssh key provided by the Vagrant box.
Example Ansible Inventory:
# Generated by Vagrant
dbserver2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
dbserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
[dbserver]
dbserver1
dbserver2
You can see that the inventory created by Vagrant gives Ansible the necessary information to connect to the targets and also defines the group dbserver, which includes the servers dbserver1 and dbserver2.
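If you prefer to maintain the inventory yourself instead of letting Vagrant generate it, a hand-written inventory could look like this sketch (the IP addresses follow the private network shown above; the address for dbserver2 is only an assumption for illustration):
# static inventory file (sketch)
[dbserver]
dbserver1 ansible_ssh_host=192.168.56.31 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
dbserver2 ansible_ssh_host=192.168.56.32 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key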
Ansible configuration
Tell Ansible where to find the inventory in ansible.cfg:
nocows=1
hostfile = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
host_key_checking = False
Ansible Variables
In this example I have put the general variables for all servers containing an Oracle Database into this file:
group_vars/dbserver
The more specific variables, including the variables used to create the database such as the database name and character set, can be adapted individually for each server:
host_vars/dbserver1, host_vars/dbserver2
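As a rough sketch, these variable files could look like the following (the variable names oracle_user, oracle_home and installation_folder are referenced later in the playbook and template; the concrete values and the names db_name and characterset are placeholders I chose for illustration):
# group_vars/dbserver (sketch)
oracle_user: oracle
oracle_home: /u01/app/oracle/product/12.1.0.2/dbhome_1
installation_folder: /u01/stage

# host_vars/dbserver1 (sketch)
db_name: DB1
characterset: AL32UTF8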
Ansible Playbook
The Ansible playbook is a simple text file written in YAML syntax, which is easy to read.
Our playbook oracle-db.yml has only one play, called "Configure Oracle Linux 7 with Oracle Database 12c", which will be applied to all servers belonging to the group dbserver. In my example Vagrant creates the inventory and initiates the play, but you can also start the playbook stand-alone or repeat it if you want:
ansible-playbook oracle-db.yml
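When you run the playbook stand-alone, you can point it to the inventory generated by Vagrant and, if you like, restrict the run to a single host with --limit (a sketch; the inventory path is the one configured in ansible.cfg above):
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory oracle-db.yml --limit dbserver1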
This is the whole playbook, to configure the servers and install Oracle Databases:
$ cat oracle-db.yml
---
- name: Configure Oracle Linux 7 with Oracle Database 12c
  hosts: dbserver
  become: True
  vars_files:
    # User passwords (hashed) are stored here:
    - secrets.yml
  roles:
    - role: disk_layout
    - role: linux_oracle
    - role: oracle_sw_install
      become_user: '{{ oracle_user }}'
    - role: oracle_db_create
      become_user: '{{ oracle_user }}'
Ansible roles
To make the playbook oracle-db.yml lean and more flexible, I have split all the tasks into different roles. This makes it easy to reuse parts of the playbook or to skip parts. For example, if you only want to install the Oracle software on the server but do not want to create databases, you can just delete the role oracle_db_create from the playbook.
You (and Ansible) will find the file containing the tasks for a role in roles/my_role_name/tasks/main.yml.
There can be further directories. The default directory structure looks like below. If you want to create a new role, you can even create the directory structure by using ansible-galaxy. Ansible Galaxy is Ansible's official community hub for sharing Ansible roles. https://galaxy.ansible.com/intro
# example to create the directory structure for the role "my_role_name"
ansible-galaxy init my_role_name
# default Ansible role directory structure
roles/
  my_role_name/
    defaults/
    files/
    handlers/
    meta/
    tasks/
    templates/
    vars/
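To give an idea of what goes into tasks/main.yml, here is a minimal sketch for a role like linux_oracle (the package list and the kernel parameter are illustrative examples, not the exact tasks from the repository):
---
# roles/linux_oracle/tasks/main.yml (sketch)
- name: Install packages required for the Oracle installation
  yum: name={{ item }} state=present
  with_items:
    - binutils
    - gcc
    - libaio
    - unzip

- name: Set a kernel parameter recommended for Oracle
  sysctl: name=kernel.shmmni value=4096 state=present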
Ansible Modules
Ansible will run the tasks described in the playbook on the target servers by invoking Ansible Modules.
This Ansible web page http://docs.ansible.com/ansible/list_of_all_modules.html shows information about the modules ordered by category.
You can also get information about all the Ansible modules from the command line:
# list all modules
ansible-doc --list
# example to show documentation about the Ansible module "copy"
ansible-doc copy
One example:
To install the Oracle software with a response file I use the Ansible module called "template". Ansible uses Jinja2, a templating engine for Python.
This makes it very easy to design reusable templates. For example, Ansible will replace {{ oracle_home }} with the variable which I have defined in group_vars/dbserver, and then copy the response file to the target servers:
Snippet from the Jinja2 template db_install.rsp.j2:
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME={{ oracle_home }}
Snippet from roles/oracle_sw_install/tasks/main.yml:
- name: Generate the response file for software only installation
  template: src=db_install.rsp.j2 dest={{ installation_folder }}/db_install.rsp
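Once the response file is in place, a follow-up task can hand it to the Oracle installer. This is only a hedged sketch, assuming the unzipped installation media lives under {{ installation_folder }}/database; the actual task in the repository may look different:
- name: Install the Oracle software in silent mode (sketch)
  shell: "{{ installation_folder }}/database/runInstaller -silent -waitforcompletion -responseFile {{ installation_folder }}/db_install.rsp"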
Ansible Ad-hoc Commands – Some Use Cases
Immediately after installing Ansible you can already use it to gather facts from your localhost, which will give you a lot of information:
ansible localhost -m setup
Use an Ansible ad-hoc command with the module ping to check whether you can reach all target servers listed in your inventory file:
$ ansible all -m ping
dbserver2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
dbserver1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
File transfer – spread a file to all servers in the group dbserver:
ansible dbserver -m copy -b -a "src=/etc/hosts dest=/etc/hosts"
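In the same way you can run arbitrary checks across the whole group, for example displaying the free disk space on all database servers (the command module is used here; df is just an example):
ansible dbserver -m command -a "df -h" -b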
Conclusion
With the open source tools Vagrant and Ansible you can easily automate the setup of your infrastructure.
Even if you do not want to automate everything, Ansible can still help you with your daily work when you want to check or apply something on several servers.
Just group your servers in an inventory and run an Ansible Adhoc Command or write a small playbook.
Please keep in mind that this is a simplified example of an automated Oracle Database installation.
Do not use this example for production environments.