Deploying on EC2 with Ansible
by Sebastien Mirolo on Sat, 18 Oct 2014

Ansible is a great piece of software for writing IT automation scripts. The fact that Ansible is written in Python makes it even sweeter for us.
If you are looking for a good, short introduction to Ansible, look no further than Robert Reiz's post on Ansible.
Here we will delve into setting up and running Ansible on a local laptop to manage a software stack on AWS. First, install Ansible (here inside a virtualenv):
$ pip install -U ansible
Ansible's ec2 module depends on boto. If you have set up your ~/.boto file as described previously, Ansible will work.
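For reference, a minimal ~/.boto file contains just a credentials section (placeholder values shown here; substitute your own keys):

$ cat ~/.boto
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY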
Ansible inventory
Ansible retrieves the hosts and groups of hosts to run playbooks on from an inventory database, which in most cases is a simple text file in /etc/ansible/hosts. So first we will create a hosts file that contains the various hosts Ansible will be responsible for.
Everything starts from localhost because the commands to create AWS resources are run on the local system.
$ cat $VIRTUAL_ENV/etc/ansible/hosts
[local]
localhost
Here Ansible is a little odd. Even though we installed Ansible in a virtualenv and the first playbook runs on localhost, Ansible will jump out of the virtualenv and insist on running /usr/bin/python. To solve that, we need to force the *ansible_python_interpreter* variable as follows:
$ cat $VIRTUAL_ENV/etc/ansible/hosts
[local]
# Replace VIRTUAL_ENV with the absolute path of your virtualenv;
# inventory files do not expand shell variables.
localhost ansible_python_interpreter=VIRTUAL_ENV/bin/python
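To verify the inventory and the interpreter override, a quick sanity check (optional, not part of the original workflow) is to run the ping module against the local group; the output shown is typical for Ansible 1.x:

$ ansible -i $VIRTUAL_ENV/etc/ansible/hosts local -m ping
localhost | success >> {
    "changed": false,
    "ping": "pong"
}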
The really cool thing about Ansible is that the inventory database specified by the -i option can be a Python script that outputs JSON-formatted text. That is how we will be able to run commands against an environment as dynamic as AWS later on.
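As a toy illustration of that protocol (not part of the original setup), a minimal inventory script just prints JSON groups when Ansible invokes it with --list:

#!/usr/bin/env python
# toy-inventory.py: a minimal dynamic inventory (illustrative sketch).
# Ansible calls it with --list and expects a JSON map of groups to hosts
# on stdout; it may also call it with --host <name> for per-host variables.
import json, sys

if len(sys.argv) > 1 and sys.argv[1] == '--list':
    print json.dumps({'local': ['localhost']})
else:
    # --host <name>: this toy script defines no per-host variables.
    print json.dumps({})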
Creating a security group and one EC2 instance
We define variables shared across hosts in group_vars/all, then write a playbook that creates a security group and boots an EC2 instance into it:

$ cat group_vars/all
---
# Variables listed here are applicable to all host groups
key_name: ec2-prod-key
aws_region: us-west-2
ami_id: ami-cc8de6fc
instance_type: t1.micro

$ cat basic-create.yml
---
# Basic provisioning example
- name: Create AWS resources
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create security group
      local_action:
        module: ec2_group
        name: my-security-group
        description: "A Security group"
        region: "{{aws_region}}"
        rules:
          - proto: tcp
            type: ssh
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            type: all
            cidr_ip: 0.0.0.0/0
      register: basic_firewall
    - name: create an EC2 instance
      local_action:
        module: ec2
        key_name: "{{key_name}}"
        region: "{{aws_region}}"
        group_id: "{{basic_firewall.group_id}}"
        instance_type: "{{instance_type}}"
        image: "{{ami_id}}"
        wait: yes
      register: basic_ec2

$ ansible-playbook -i $VIRTUAL_ENV/etc/ansible/hosts -vvvv basic-create.yml
...
"public_ip": "PUBLIC_IP_3"
...
$ ssh -i ~/.ssh/ec2-prod-key fedora@PUBLIC_IP_3
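Rather than scanning the verbose output for the public IP, you could append a debug task to the playbook. This is a sketch; it assumes the registered basic_ec2 result exposes an instances list with a public_ip field, as the ec2 module did around version 1.7:

    # Appended to the tasks list in basic-create.yml (sketch).
    - name: Show how to reach the new instance
      debug: msg="ssh -i ~/.ssh/{{key_name}} fedora@{{basic_ec2.instances[0].public_ip}}"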
Dynamic inventory
So far so good. We have an EC2 instance up and running. All that remains is to find a way to reliably address it so we can deploy the necessary packages and configuration. That is where Dynamic Inventory comes into play.
Now that might be because I used Ansible version 1.7.2 (from pip), or because I have not figured out how to install plugins into Ansible properly, but after a bit of trial and error, I got the following sequence of commands to work:
$ pushd $VIRTUAL_ENV/etc/ansible
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/plugins/inventory/ec2.ini
$ popd
$ export EC2_INI_PATH=$VIRTUAL_ENV/etc/ansible/ec2.ini
$ mkdir -p plugins/inventory
$ cd plugins/inventory
$ wget https://raw.githubusercontent.com/ansible/ansible/devel/plugins/inventory/ec2.py
# The script calls itself at some point.
$ chmod +x ./ec2.py
$ ./ec2.py --list
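ec2.py --list prints a JSON dictionary that maps group names to lists of hosts. An abridged, illustrative excerpt (the exact keys depend on your account and your ec2.ini settings) might look like:

{
  "security_group_my-security-group": ["PUBLIC_IP_3"],
  "type_t1_micro": ["PUBLIC_IP_3"],
  "us-west-2": ["PUBLIC_IP_3"]
}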
With a way to pick up hosts dynamically, we can now write a playbook to delete the previously created instances and security group.
$ cat basic-delete.yml
---
# Basic decommissioning example
- name: Delete devel EC2 instances
  hosts: security_group_my-security-group    # <-- MAGIC IS HERE!
  connection: local
  gather_facts: False
  tasks:
    - name: Terminate {{ec2_id}} instance in {{aws_region}}
      local_action:
        module: ec2
        state: 'absent'
        region: '{{aws_region}}'
        instance_ids: '{{ec2_id}}'

- name: Delete 'my-security-group' security group
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Take 'my-security-group' security group down
      local_action:
        module: ec2_group
        name: my-security-group
        description: "A Security group"
        region: "{{aws_region}}"
        state: 'absent'
And execute the playbook as follows:
$ ansible-playbook -i plugins/inventory/ec2.py basic-delete.yml
Conclusion
Et voilà! We can provision and decommission our basic cloud on AWS with two Ansible commands.
$ ansible-playbook -i $VIRTUAL_ENV/etc/ansible/hosts basic-create.yml
$ ansible-playbook -i plugins/inventory/ec2.py basic-delete.yml
As the Ansible documentation describes, the ec2 inventory plugin makes it possible to select AWS hosts by region, security group, etc. It is now straightforward to manage machines with dynamic hostnames.
P.S. As usual, beware of caching. Thankfully, ec2.py provides a --refresh-cache command-line argument.
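For example, to force ec2.py to re-query AWS instead of serving cached results:

$ ./plugins/inventory/ec2.py --refresh-cache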
More to read
If you are looking for more Ansible posts, Organizing Ansible Playbooks is a good read. For more AWS related content, you might like to read PostgreSQL, encrypted EBS volume and Key Management Service next.
More technical posts are also available on the DjaoDjin blog, as well as business lessons we learned running a SaaS application hosting platform.
Side Note
There is no magic here. Ansible runs commands very similar to the ones we would run directly from a Python interpreter. Here is how to boot a Fedora 20 EC2 instance in the AWS us-west-2 region:
$ python
>>> import boto, boto.ec2
>>> conn = boto.ec2.connect_to_region('us-west-2')
>>> reservation = conn.run_instances(
...     image_id='ami-cc8de6fc', key_name='ec2-prod-key',
...     security_groups=['my-security-group'])
# ... wait a few minutes here for the public dns to come up ...
>>> reservations = conn.get_all_instances(
...     instance_ids=[reservation.instances[0].id])
>>> print reservations[0].instances[0].public_dns_name
>>> print reservations[0].instances[0].ip_address
PUBLIC_IP_2
>>> exit()
$ ssh -i ~/.ssh/ec2-prod-key fedora@PUBLIC_IP_2
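For completeness, the decommissioning playbook maps to a couple of boto calls as well (a sketch; deleting the security group only succeeds once its instances have fully terminated):

>>> conn.terminate_instances(
...     instance_ids=[reservation.instances[0].id])
# ... wait for the instance to fully terminate ...
>>> conn.delete_security_group('my-security-group')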
When Ansible returns the following error message,
msg: Either region or ec2_url must be specified
it usually means the region was not specified in the task description. To fix it, just add a region field like so:
  - name: Terminate {{ec2_id}} instance in {{aws_region}}
    local_action:
      module: ec2
      state: 'absent'
+     region: '{{aws_region}}'
      instance_ids: '{{ec2_id}}'