PostgreSQL, encrypted EBS volume and Key Management Service
by Sebastien Mirolo on Wed, 27 May 2015

A little while ago, I wrote about installing postgres on an encrypted volume. The approach is very secure and uses standard Linux tools, but it has two drawbacks.
First, using a non-default directory for the mount point (/mnt/encvol/var) instead of the default postgres directory means a lot of SELinux tweaking.
Second, business-wise, it is easier to pass an audit when an auditor sees "encrypted" in your AWS dashboard than to explain how the data is actually written to disk encrypted at the kernel level.
Thus here we will use the AWS Key Management Service to encrypt the EBS volume, of course deploying with Ansible.
AWS Key Management Service
If you are looking for Key Management Service (KMS) on the AWS web dashboard, it is under IAM > Encryption Keys. From the command line:
$ pip list | grep aws
awscli (1.7.20)
$ aws kms help
       o encrypt
       o decrypt
       o generate-data-key
       o generate-data-key-without-plaintext
Access to AWS through Ansible modules is still pretty new, especially access to KMS (Key Management Service), so we need to install the extra modules directly from GitHub here.
$ pip list | grep ansible
ansible (1.9.0.1)
$ git clone https://github.com/ansible/ansible-modules-extras.git
$ cp -r ansible-modules-extras/cloud/amazon \
    virtualenv/lib/python2.7/site-packages/ansible/modules/extras/cloud
Creating a master key
Adding a kms_region_name to the boto config does not work.
$ cat ~/.boto
[Boto]
+kms_region_name = us-west-2
We need to rely on aws configure here:
$ aws configure
$ ls -la ~/.aws
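If all went well, ~/.aws should now contain a config and a credentials file along these lines; the region and key values here are placeholders:

$ cat ~/.aws/config
[default]
region = us-west-2
output = json
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...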
Of course, we will have to make sure our IAM role is authorized to perform kms:CreateKey operations. Since creating new keys is not something we will do often, and it is not a permission we want the IAM role to keep long term, it is simpler to use the web dashboard in this case - just this one time.
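For reference, the command line equivalent would look something along these lines (the description and alias name are made up for the example, and the credentials in use would need the kms:CreateKey and kms:CreateAlias permissions):

$ aws kms create-key --description "master key for encrypted EBS volumes"
$ aws kms create-alias --alias-name alias/default \
    --target-key-id <key-id-from-the-create-key-output>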
Creating an encrypted EBS Volume
At first, I tried to use the Ansible ec2 module to create the encrypted volume at the same time as the instance (and the unencrypted root volume, since AWS does not support encrypted root volumes yet), but that did not pan out well.
Another mistake I made early on was not fully understanding the different volume types (and the associated pricing). With a non-trivial database to import, here are the numbers I got on my laptop versus running on an m1.small with a default EBS volume setup.
|      | MacBook Pro | m1.small / ebs |
|------|-------------|----------------|
| real | 12m7.967s   | 40m37.707s     |
| user | 11m24.508s  | 35m57.346s     |
| sys  | 0m5.142s    | 0m30.011s      |
Specifying provisioned IOPS without paying attention to the volume size will also get you this kind of error:

msg: InvalidParameterValue: Iops to volume size ratio of 50.000000 is too high; maximum is 30
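To illustrate, a request along these lines (500 provisioned IOPS on a 10GB io1 volume, i.e. a 50:1 ratio) is what triggers that message; the numbers are only for illustration:

$ aws ec2 create-volume --availability-zone us-west-2b --size 10 \
    --volume-type io1 --iops 500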
So we create our encrypted volume with the Ansible ec2_vol module, then an ec2 instance, and attach the volume to it. Simple? Well, apart from the usual stumbling blocks, add one more:
- Both the ec2 volume and instance must be created in the same zone.
- A role cannot be associated with an instance after the instance has been created.
- Encrypted volumes do not work with all instance types. (new)
Though it is possible to create a volume with a specified key on the command line with the aws cli tools,
$ aws ec2 create-volume --availability-zone us-west-2b --size 20 \
    --kms-key-id key_name
the Ansible ec2_vol module only has an encrypted parameter that toggles to yes or no. It does not seem possible to specify the key to use at this time, thus the resulting API call to AWS will end up creating a new master key labeled "aws/ebs" and using it (instead of the "default" master key we previously created).
$ cat aws-create-dbtier.yml
- name: Create SQL DB Tier
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
  - name: create ec2 instances
    local_action:
      module: ec2
      key_name: "{{key_name}}"
      group: "{{tag_prefix}}dbtier"
      instance_profile_name: "{{tag_prefix}}dbtier-profile"
      instance_type: m3.medium
      image: "{{ami_id}}"
      region: "{{aws_region}}"
      zone: "{{aws_zone}}"
      wait: yes
    register: db_server
  - name: create ec2 encrypted volume
    local_action:
      module: ec2_vol
      device_name: /dev/sdf
      encrypted: yes
      instance: "{{db_server.instance_ids}}"
      region: "{{aws_region}}"
      zone: "{{aws_zone}}"
      volume_size: 20
      volume_type: gp2

$ ansible-playbook -vvvv -i $VIRTUAL_ENV/etc/ansible/hosts aws-create-dbtier.yml
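Once the playbook has run, one way to double check the zone and encryption status of the volume it created (and, if your CLI version reports it, the key that was used) is something like this; the instance id is a placeholder:

$ aws ec2 describe-volumes \
    --filters Name=attachment.instance-id,Values=i-0123abcd \
    --query 'Volumes[].{Id:VolumeId,Zone:AvailabilityZone,Encrypted:Encrypted,Key:KmsKeyId}'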
Setting up Postgres
Finally, we have an ec2 instance with an encrypted ebs volume attached to it. The first thing to do is format the disk.
$ ssh -i ~/.ssh/ec2-key fedora@ec2-instance-public-dns
$ ls -ld /dev/* | grep 'df'
brw-rw----. 1 root disk 202, 80 Apr 9 19:24 /dev/xvdf
$ sudo mkfs.ext4 -m 0 /dev/xvdf
$ sudo mkdir -p /mnt/encvol
$ sudo mount /dev/xvdf /mnt/encvol
Because we are not interested in figuring out how to tweak SELinux for hours this time around, we will copy the content of /var to the encrypted volume and then mount the volume under /var. This is straightforward Linux stuff. Just skip the single user mode step (sudo init 1) or it will kill sshd and you will have to reboot the instance.
$ cd /var
$ sudo cp -ax * /mnt/encvol
$ cd /
$ sudo mv var var.old
$ sudo mkdir var
$ sudo umount /dev/xvdf
$ sudo mount /dev/xvdf /var
$ df -h
# mount the encrypted volume as /var at boot.
$ diff -u prev /etc/fstab
+/dev/xvdf  /var  ext4  defaults  0 0
Afterwards, the steps to install PostgreSQL are the usual ones for a RedHat-based distribution.
$ sudo yum update
$ sudo yum install postgresql-server postgresql
$ sudo /usr/bin/postgresql-setup initdb
Accept remote connections
The Web service will run on a separate instance, so we need to configure PostgreSQL to accept remote connections. PostgreSQL Authentication Methods is a worthwhile read to do just that.
$ diff -u prev /etc/hosts
+ec2-private-ip sqldb.internal

$ diff -u prev /var/lib/pgsql/data/pg_ident.conf
# MAPNAME  SYSTEM-USERNAME  PG-USERNAME
+mymap     postgres         postgres
+mymap     /^(.*)$          dbuser

$ diff -u prev /var/lib/pgsql/data/pg_hba.conf
# TYPE  DATABASE  USER    ADDRESS         METHOD
local   all       all                     peer map=mymap
# IPv4 local connections:
-host   all       all     127.0.0.1/32    ident
+host   all       dbuser  172.31.0.0/16   md5
# IPv6 local connections:
-host   all       all     ::1/128         ident
+host   all       dbuser  ::1/128         md5

$ diff -u prev /var/lib/pgsql/data/postgresql.conf
-#listen_addresses = 'localhost'        # what IP address(es) to listen on;
+listen_addresses = 'sqldb.internal'    # listen on Private IP address

$ sudo systemctl enable postgresql
$ sudo systemctl start postgresql
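The dbuser role referenced in pg_hba.conf still has to exist on the server. A minimal sketch, with an illustrative database name (mydb), followed by a quick connection test from the web instance (assuming sqldb.internal resolves there as well):

$ sudo -u postgres createuser --pwprompt dbuser
$ sudo -u postgres createdb -O dbuser mydb

# then, from the web service instance:
$ psql -h sqldb.internal -U dbuser -d mydb -c 'SELECT version();'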
The next step, if you are looking to encrypt everything (as you should), is to read Secure TCP/IP Connections with SSL. Et voila!
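For a rough idea of what that last read boils down to, here is a minimal sketch - assuming server.crt and server.key have already been generated in the data directory, and reusing the pg_hba.conf line we added above:

$ diff -u prev /var/lib/pgsql/data/postgresql.conf
-#ssl = off
+ssl = on
$ diff -u prev /var/lib/pgsql/data/pg_hba.conf
-host    all  dbuser  172.31.0.0/16  md5
+hostssl all  dbuser  172.31.0.0/16  md5
$ sudo systemctl restart postgresql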
More to read
If you are interested in Ansible, you might want to read Deploying on EC2 with Ansible or Organizing Ansible Playbooks next.
More technical posts are also available on the DjaoDjin blog, as well as business lessons we learned running a SaaS application hosting platform.