Early this week a co-worker asked if it was possible to access the Ansible command line from within a playbook. It seems that is not the case in a “normal”, clean Ansible environment.
But in the meantime I was creating a couple of playbooks for stopping and starting services. These playbooks are completely the same, except for the start and stop keywords. Of course I could have solved that with a variable, either hardcoded or as an extra variable on the command line. But where is the fun in that?
So the idea arose to let the playbook depend on its name: if it is called start, start all services, and if it is called stop, just stop them. Something along the lines of $0 in shell or sys.argv[0] in Python.
But this turned out to be exactly the same idea my co-worker had. The two are closely related, and it is simply not available in Ansible.
But, it is open source, so just fix it!
I started looking into an action plugin and, after a lot of trial, error and Ansible source-code reading, I fixed it.
The Ansible source code contains a helper module called context, which parses the command line and consumes all options. Luckily, all that is left afterwards are the playbook names, and these are in context.CLIARGS['args']. So if I take those, I'm done. And while I'm at it, I can also solve my co-worker's problem, if I can access the ansible-playbook parameters. That turns out to be even simpler: just read sys.argv in Python.
The result of all this craft is this Python script, an action plugin.
#!/usr/bin/python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import sys
import os

# Get Ansible context parser
from ansible import context
# ADT base class for our Ansible Action Plugin
from ansible.plugins.action import ActionBase

# Load the display handler to send logging to CLI or relevant display mechanism
try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


# Get all Ansible commandline arguments and place these in the
# `facts` dictionary as `ansible_facts['argv']`
class ActionModule(ActionBase):

    # No file transfer needed
    TRANSFERS_FILES = False

    def run(self, tmp=None, task_vars=None):
        '''Run action plugin'''
        # All checks (file exists, etc.) are already done
        # by the Ansible context
        playbooks = list(map(os.path.abspath, list(context.CLIARGS['args'])))

        # Create the result JSON blob
        result = {
            'changed': False,
            'failed': False,
            'skipped': False,
            'msg': '',
            'ansible_facts': {
                'argv': sys.argv,
                'playbooks': playbooks,
            }
        }
        return result
This results in two extra Ansible facts, called argv and playbooks, that can be used in your playbooks like this:
- name: lets go
  hosts: localhost
  become: false
  connection: local

  tasks:
    - name: get commandline arguments
      get_argv:

    - debug:
        msg:
          - "{{ ansible_facts['argv'] | default('Nope') }}"
          - "{{ ansible_facts['playbooks'] | default('Nope') }}"
To use the action plugin, create a directory called action_plugins in your Ansible directory, or set the action_plugins path in the ansible.cfg file, and place the get_argv script in this directory.
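To close the circle with the original start/stop idea: once the playbooks fact is available, one playbook can derive the desired service state from its own file name. A minimal sketch, assuming the playbook is saved as start.yml or stop.yml and using httpd purely as an example service:

- name: start or stop services, depending on the playbook name
  hosts: localhost
  become: true

  tasks:
    - name: get commandline arguments
      get_argv:

    - name: derive the desired state from the playbook file name
      set_fact:
        desired_state: "{{ 'started' if 'start' in (ansible_facts['playbooks'][0] | basename) else 'stopped' }}"

    - name: ensure the service is in the desired state
      service:
        name: httpd
        state: "{{ desired_state }}"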
Enjoy!
During my stay at CfgMgmtCamp I attended the presentation of Franziska Bühler (@bufrasch) titled “Web Application Firewall - Friend of your DevOps pipeline?”. She talked about Web Application Firewalls (WAF) and the OWASP Core Rule Set (CRS).
Being into security and stuff like that myself, I decided I wanted to try to get ModSecurity up and running in my own test environment.
My test environment consists of a CentOS 8 machine with NGINX, and it turned out to be a little trickier than I thought.
The ModSecurity modules are available out of the box for the Apache web server, so I could have used that. But I like a good challenge, so CentOS 8 and NGINX it is.
Read more »
In our work environment we have role-based access to passwords (of course). But as we deploy all systems with Ansible, someone with only deploy permissions could end up with access to all passwords. It's obvious that we don't want that, so I started looking into Ansible's ability to use multiple vault passwords.
Ansible Vault IDs
Starting with Ansible 2.4, vault IDs are supported.
Vault IDs make it possible to encrypt different files with different passwords and reference them inside a playbook. Prior to Ansible 2.4, only one vault password could be used in each Ansible run, forcing you to encrypt all files with the same vault password.
First and foremost, vault IDs need to be pre-created and referenced (best practice) inside your ansible.cfg file:
[defaults]
vault_identity_list = apple@prompt, pear@prompt
In this example there are two vault IDs, called apple and pear, and in this configuration Ansible will prompt for the needed passwords.
It’s also possible to supply vault password files, like:
[defaults]
vault_identity_list = apple@~/.vault_apple, pear@~/.vault_pear
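A rough sketch of how this ties together (the file and playbook names below are just examples): each secrets file is encrypted with its own vault ID, and a normal run then asks for (or reads) both passwords.

# Encrypt each secrets file with its own vault ID
ansible-vault encrypt --vault-id apple@prompt group_vars/all/apple_secrets.yml
ansible-vault encrypt --vault-id pear@prompt group_vars/all/pear_secrets.yml

# With vault_identity_list set in ansible.cfg, a normal run uses both identities
ansible-playbook site.yml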
Read more »
Since Ansible version 2.5 there has been a lot of discussion and confusion about the loop syntax. There is also discussion about whether with_...: will be replaced by loop:, deprecating the with_... keywords. Even Ansible's documentation is not clear about this.
Should I use loop: or with_...:? In fact, nobody really knows. What would the correct syntax be?
---
- name: Loops with with_ and lookup
  hosts: localhost
  connection: local
  gather_facts: no

  vars:
    people:
      - john
      - paul
      - mary
    drinks:
      - beer
      - wine
      - whisky

  tasks:
    - name: with nested
      debug:
        msg: "with_nested: item[0] is '{{ item[0] }}' and item[1] is '{{ item[1] }}'"
      with_nested:
        - "{{ people }}"
        - "{{ drinks }}"

    - name: nested and loop
      debug:
        msg: "nested_loop: item[0] is '{{ item[0] }}' and item[1] is '{{ item[1] }}'"
      loop:
        - "{{ people }}"
        - "{{ drinks }}"
Read more »
I am a long-time Ansible user and contributor (since 2012), and I have been struggling with a decent setup for a multi-environment case. I have been designing and re-designing a lot, until I came up with this design. And what a coincidence: a customer wanted a setup that was exactly this. So this concept is a real-world setup, working in a production environment.
Did I get your attention? Read on after the break, but take your time; it is a long read.
Read more »
Some time ago I created a playbook to show the contents of a rendered template. When you keep digging in the Ansible documentation, you suddenly stumble over the template lookup plugin. And then it turns out that my playbook is a bit clumsy.
A nicer and shorter way to do it:
---
#
# This playbook renders a template and shows the results
# Run this playbook with:
#
#   ansible-playbook -e templ=<name of the template> template_test.yml
#
- hosts: localhost
  become: false
  connection: local

  tasks:
    - fail:
        msg: "Bailing out. The play requires a template name (templ=...)"
      when: templ is undefined

    - name: show templating results
      debug:
        msg: "{{ lookup('template', templ) }}"
A couple of days ago a client asked me if I could solve the following problem:
They have a large number of web servers, all running a plethora of PHP versions. These machines are locally managed with DirectAdmin, which manages the PHP configuration files as well. They also run Ansible for all kinds of configuration tasks. What they want is a simple playbook that ensures a certain line in all PHP ini files, for all PHP versions, on all web servers.
All the PHP directories match the pattern /etc/php[0-9][0-9].d.
Thinking about this, I came up with this solution (took me some time, though):
---
- name: find all ini files in all /etc/php directories
  hosts: webservers
  user: ansible
  become: True
  become_user: root

  tasks:
    - name: get php directories
      find:
        file_type: directory
        paths:
          - /etc
        patterns:
          - php[0-9][0-9].d
      register: dirs

    - name: get files in php directories
      find:
        paths:
          - "{{ item.path }}"
        patterns:
          - "*.ini"
      loop: "{{ dirs.files }}"
      register: phpfiles

    - name: show all found files
      debug:
        msg: "File is {{ item.1.path }}"
      with_subelements:
        - "{{ phpfiles.results }}"
        - files
The part with with_subelements did the trick. Of course that line can also be written as:

loop: "{{ query('subelements', phpfiles.results, 'files') }}"
As the new GDPR finds its way all over Europe, I decided to have a closer look at my website. I have been using the Disqus comment system for some time now, but hardly anyone ever really takes the time to comment.
As the Disqus system uses a lot of Javascript and cookies, I decided it was time to get rid of these tools and make my site fly again.
To Disqus: so long and thanks for all the fish.
During my last Ansible training the students needed to create some Ansible templates for themselves. As I do not want to run a test template against some, or all, machines under Ansible control, I created a small Ansible playbook to test templates.
Read more »
Yesterday I removed a simple package from my Fedora 23 machine and after that I got the message:
error: Failed to initialize NSS library
WTF??????
Searching the interwebs I found out I wasn’t the first, and probably not the last, to run into this problem.
It seems that, one way or another, the DNF package doesn't know about its dependency on SQLite. So, when a package removal requests to remove SQLite, DNF removes it without question. And thus breaks itself.
But how to fix this? DNF doesn't work, and neither does RPM, so there is no way to reinstall the SQLite packages.
Tinkering and probing I found this solution:
#!/bin/bash

# Location and version of the SQLite packages for Fedora 23
url="http://ftp.nluug.nl/os/Linux/distr/fedora/linux/updates/23/x86_64/s/"
ver="3.11.0-3"

# Download the SQLite and SQLite library packages
wget ${url}/sqlite-${ver}.fc23.x86_64.rpm
wget ${url}/sqlite-libs-${ver}.fc23.x86_64.rpm

# Extract the packages and copy the missing files back under /usr
rpm2cpio sqlite-${ver}.fc23.x86_64.rpm | cpio -idmv
rpm2cpio sqlite-libs-${ver}.fc23.x86_64.rpm | cpio -idmv
cp -Rp usr /

# With DNF working again, reinstall SQLite properly
dnf --best --allowerasing install sqlite.x86_64
This downloads the SQLite package and the SQLite library package, extracts them, and copies the missing files to their /usr destination. After doing that, DNF and RPM work again. It could be that I downloaded an older version of the SQLite packages, so to make sure I have a current version, the last step reinstalls SQLite with DNF.
Maybe a good idea to fix that in DNF!