Oliver Helm

Adding swap space on an EC2 instance

When I first started using AWS EC2 instances to host small sites (where the database and web server were on the same box) I was often surprised by how frequently MySQL fell over – WordPress in particular would show the ‘Error establishing a database connection’ message and the MySQL service would need a kick.  It was particularly odd given that almost identical boxes hosted on Rackspace were coping fine.  After a little rummaging around it became apparent that many EC2 instances don’t come with any swap space by default.

Swap space (or a swap file) is space on the hard drive that is used as an extension of the RAM assigned to the device.  Essentially, when the device runs out of physical RAM it can use the swap space assigned as an extension / overflow.  It’s much slower than physical RAM, so is only advisable as an overflow.

This means that when the memory reaches capacity there’s nowhere to go, and MySQL doesn’t like that.  The solution is simply to add swap space.  On the default AWS AMI (CentOS) you add swap space as follows (as root or via sudo):

/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
/sbin/mkswap /var/swap.1
/sbin/swapon /var/swap.1

Where count=1024 assigns 1GB of swap space (1,024 blocks of 1MB).  Increase that as you please.

To make sure this is maintained on reboot too, add the following line to your fstab (/etc/fstab):
/var/swap.1 swap swap defaults 0 0

You can check this has worked using the ‘free’ command, which should now list swap space below the physical memory:
free -m
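
The output should look something along these lines (the figures here are illustrative, from a hypothetical 1GB instance – yours will differ):

             total       used       free     shared    buffers     cached
Mem:           996        842        154          0         63        421
-/+ buffers/cache:        358        638
Swap:         1023          0       1023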

Many thanks to all the Stack Overflow posters who jogged my memory on the syntax of this!

Compiling PHP with SYBASE support

Over at The Constant Media we’ve been working on a project that relies on using PHP to connect to a customer’s SYBASE database. Essentially they have a proprietary system in place that manages many key aspects of their business, and their website, as a key sales channel, needs to interact with it.

If you are pretty confident with Linux and don’t want to work your way through this, there is a command list gist here.

SYBASE is not something we had come across before, and whilst the actual PHP code itself isn’t all that tricky to implement, getting SYBASE and its dependencies compiled into PHP can be a little fiddly. Below is a step-by-step of how we did this. It’s based around CentOS (our server OS of choice) but should be fairly transferable.

Before I go too far into this I should point out that much of the credit due goes to @andrew_heron, my business partner in The Constant Media, who did much of the hard work and legwork on this project and the initial server builds. I really only stepped in when we came to build the production and staging servers and made things a little more production ready.

The first thing to point out is that to do this you need to compile PHP from the source code and cannot (to my knowledge) do this through yum. We are running this on a CentOS 6 box which was vanilla at the point of install. If you have PHP installed at the moment (through RPMs or yum) scrub it off before wading in.

First, let’s update the box and install a few prerequisites. Because we are compiling from source we are going to need a few dev libraries too (devels). As root, or sudo if you prefer:

$ yum install gcc openssl-devel pcre-devel libxml2-devel httpd httpd-devel mod_ssl libcurl-devel libpng-devel
$ yum groupinstall "Development Tools"

Installing FreeTDS
SYBASE has a dependency on FreeTDS, so we need to start by downloading, configuring and installing this. All source files are going to be stored in /usr/src throughout.

Note: Some of these versions may be out of date by the time you get to this, so you may need to adapt things a little to accommodate that.

Download and unpack FreeTDS:
$ cd /usr/src
$ wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-patched.tar.gz
$ tar -zxvf freetds-*
$ rm -rf freetds-*.tar.gz
$ cd freetds-*

Next we configure FreeTDS, specifying the path where we wish to install it. Once configured we can ‘make’ and install the package. Watch for errors at each stage and rectify any that occur before proceeding.

$ ./configure --prefix=/usr/local/freetds
$ make
$ make install
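
As an optional sanity check, the tsql utility that ships with FreeTDS can print its compile-time settings, confirming the build landed where you expected:

$ /usr/local/freetds/bin/tsql -C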

Installing APR & APR-Util
The next dependencies needed are APR and APR-Util.

APR
$ cd /usr/src
$ wget http://apache.mirror.anlx.net/apr/apr-1.5.2.tar.gz
$ tar -zxvf apr-*
$ rm -rf apr-*.tar.gz
$ cd apr-*

Again configure and install, making sure to fix any errors as you go:
$ ./configure
$ make
$ make install

APR-Util
$ cd /usr/src
$ wget http://apache.mirror.anlx.net/apr/apr-util-1.5.4.tar.gz
$ tar -zxvf apr-util-1.5.4.tar.gz
$ rm -rf apr-util-1.5.4.tar.gz
$ cd apr-util-1.5.4/

Configure and install.
$ ./configure --with-apr=/usr/local/apr
$ make
$ make install

If you have any other dependencies outside the norm then now’s the time to go install them.

Downloading and installing PHP
We’ve chosen to run with PHP 5.6 as we feel it will be secure and stable enough in our environment. If you need an earlier or later version the principles should be the same, just insert it into the wget below.

$ cd /usr/src
$ wget http://uk1.php.net/get/php-5.6.8.tar.gz/from/this/mirror
$ mv mirror php-5.6.8.tar.gz
$ tar -zxvf php-5.6.8.tar.gz
$ rm -rf php-5.6.8.tar.gz
$ cd php-5.6.8

The next command is the most important, and most problematic, stage of the install. We have chosen some pretty standard includes (php-mysql, php-mysqli, php-mbstring, php-pdo, php-gd and php-curl) but if you need more just add them in.

For the SYBASE extension to function you need to specify the path to your FreeTDS install. This is the path you specified above in the --prefix flag. As the Apache install is pretty standard we’ve chosen to install it via yum (to keep things simple). For this to work though you need to make sure that httpd-devel is installed (the httpd dev library installed above) and know the path to apxs, which comes as part of that library. As standard apxs is located at /usr/sbin/apxs, but if by chance it’s not there you should be able to find it with ‘find / -name apxs’. If that doesn’t return the path then you probably haven’t installed httpd-devel correctly.

The last parameter we’ve added is --with-config-file-path. This sets the directory your php.ini file will live in – we’ve specified /etc (so the file sits at /etc/php.ini) as it’s fairly standard and thus makes it easier for others to find.

Once you have adjusted this command to your needs keep a record of it somewhere safe as you will need to re-run it should you ever need to add more modules or upgrade PHP.

$ ./configure --with-sybase_ct=/usr/local/freetds --with-apxs2=/usr/sbin/apxs --with-mysql --with-mysqli --enable-mbstring --with-pdo-mysql --with-openssl --with-curl --with-gd --with-config-file-path=/etc

Assuming no errors were returned by the command above you can now make and install PHP:
$ make
$ make test
$ make install

If you need a template php.ini you can normally find one in /usr/src/php-<yourversion>/php.ini-development. Failing that, just grab one off the web.

PHP should now be installed and working. Running php -v will confirm this for you and you can check the version installed matches your expectations.

Linking PHP & Apache
Lastly, you’ll want to let Apache know about your PHP install. To do this just add the following into /etc/httpd/conf/httpd.conf:

#
#   Enable PHP
#
AddType  application/x-httpd-php         .php

It’s also worth adding index.php to the list of accepted index files.
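
For example, a DirectoryIndex line along these lines in the same file does the job (the order sets the preference):

DirectoryIndex  index.php  index.html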

Restart httpd and you should be good to go. Place a phpinfo() file on your server, scan down the list and you should see sybase listed.

$ service httpd restart
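
If you need a quick phpinfo() file, a one-liner like the below will create one (assuming /var/www/html is your document root – adjust if not, and remember to remove the file afterwards):

$ echo "<?php phpinfo();" > /var/www/html/info.php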

Adding additional Elastic IP’s to a single EC2 instance

When hosting websites on EC2 instances it’s pretty common to need to point multiple Elastic IPs to a single EC2 instance, normally to allow the use of multiple SSL certificates.  This is pretty easy to do, but a little confusing at first if you’re not used to the sysadmin world.

It’s important to understand that each NIC (network interface) can only have a single Elastic IP address bound to it.  Most instances are not launched with spare NICs attached, and as such you will have to create and attach an additional NIC to which you can associate (point) the additional Elastic IP.

Note: The number of NICs you can attach to an EC2 instance is limited by the size of the instance.  For example a micro instance can, at the point of writing, only support two NICs (therefore limiting you to using only two Elastic IPs).  You can get around this by using a load balancer.

Creating & attaching an additional Network Interface

  1. First log into your AWS account and pull up the EC2 Dashboard.
  2. From there select ‘Network Interfaces’ under the Network & Security tab on the left hand menu and click ‘Create Network Interface’ (the big blue button at the top).
  3. A pop up will appear and you can name the new interface something meaningful to you.  Then add it to the subnet that the EC2 server is currently in.  (If you’re not sure which subnet this is you can find it by looking at the instance details on the ‘Instances’ page).
  4. Once you have selected a subnet the security groups available on that subnet will be listed.  Select the groups to allow through the traffic you need (you can always add more / change this later if you need to).
  5. If you want to manually assign the private IP address you can do so at this stage, but I tend to leave it blank which will auto assign an address for you out of the VPC’s range.

Once you have created the NIC, pull up its details from the list and make a note of the Primary Private IP and the Network Interface ID. The primary private IP is where your EC2 instance will see the traffic as originating from.  If you need to set up SSL certificates, for example, it is this private IP that you will listen for / specify in the config file, not the Elastic IP address.

Next you need to attach this new NIC to your EC2 instance. To do this select it from the list and choose the ‘Attach’ button at the top of the page. Select the instance you want to attach the NIC to from the dropdown list and click ‘Attach’. At this point the NIC will be attached to the instance and be ready to receive / send traffic.

Associating an Elastic IP

You can now head over to the Elastic IPs section (on the left nav). If you have a spare IP listed you can use this, or alternatively you can click ‘Allocate New Address’ to create an additional one.  Select the Elastic IP from the list and, using the Network Interface ID you noted earlier (when you created the NIC), find the interface in the network interface field and hit ‘Associate’.

You’re done! The Elastic IP will now pass traffic to your instance, and the instance will identify this traffic as coming from the private IP you noted earlier.
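
If you prefer the command line, the same steps can be scripted with the AWS CLI. A minimal sketch, assuming the CLI is configured, with placeholder subnet, security group, instance and allocation IDs to swap in for your own:

$ aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --groups sg-xxxxxxxx --description "Second NIC"
$ aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1
$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --network-interface-id eni-xxxxxxxx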


Note: Whilst the number of NICs you can use is limited by the instance’s size, Elastic IPs are limited per account (by default to five). This limit can be increased by raising a support ticket, so long as you can justify the need.

Backing up to Amazon S3

There’s a great (and free) command line tool set called s3cmd that makes it simple to push and pull files to AWS S3 buckets, and is easy to script with.  We use it for cheap, reliable, offsite backups of media and database files.

The tool set can be downloaded from the GitHub repo here.  There’s a simple howto guide at the bottom.  One slight bump we did run into is that some of the older versions struggle with larger file sizes, so make sure you get version 1.5.0-alpha3 at the minimum.

To install the tool simply download the repo onto your server / laptop and cd into the directory to run s3cmd --configure. You’ll need to have generated IAM credentials through the AWS control panel first.  Once you’ve got it configured you can push files to a bucket with the following command:
s3cmd put /local/file/path s3://bucket-name
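
Pulling files back down is simply the reverse, e.g.:
s3cmd get s3://bucket-name/file /local/file/path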

Below is a super simple (and fairly crude) bash script we call from cron every night that backs up all DBs on a server and sends the backups to S3:

#!/bin/bash
echo "------- Starting " $(date) " -------"
# Capture the date once so the dump and the upload always reference the same file
BACKUP_DATE=$(date +%m%d%Y)
# Clear out the previous dumps
rm -rf /backups/*.out
cd /backups/
# Dump every database on the server (swap 'password' for your actual root password)
mysqldump --all-databases -uroot -ppassword > $BACKUP_DATE.out
# Push the dump up to the S3 bucket
cd /root/s3cmd-1.5.0-alpha3
./s3cmd put /backups/$BACKUP_DATE.out s3://backups
echo "------- Finished " $(date) " -------"

It’s also good for backing up crucial log files in environments where a dedicated syslog server isn’t really justifiable or is perhaps a little too pricey.

Side note – you can also use this to push data into S3 that is to be served through CloudFront, making it simple to script media into a CDN.

Resetting broken website keys when importing a Magento database

Occasionally, and for some reason I haven’t quite yet fathomed, when importing a Magento MySQL database the store, group and website IDs get a little out of kilter. This normally presents itself as users not being able to log into the Magento admin area. The below SQL snippet returns these to their default values (note the mage_ table prefix – adjust it to match your install’s prefix).

SET FOREIGN_KEY_CHECKS=0;
UPDATE mage_core_store SET store_id = 0 WHERE code='admin';
UPDATE mage_core_store_group SET group_id = 0 WHERE name='Default';
UPDATE mage_core_website SET website_id = 0 WHERE code='admin';
UPDATE mage_customer_group SET customer_group_id = 0 WHERE customer_group_code='NOT LOGGED IN';
SET FOREIGN_KEY_CHECKS=1;

Recovering missing data from a duplicate MySQL table

As a web developer, from time to time you inevitably have to cleanse a database table of a few records – be they test data, corrupt data or whatever. Very occasionally (and rather unnervingly) this has to be done in a live environment on a busy table (sign-up data and the like). Standard practice is to take a backup of the table first. Sometimes it doesn’t quite go to plan, and a few days later you find there’s some data you need back.
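
For reference, that initial backup can be taken with a one-liner along these lines (‘backupoflivetable’ being whatever you want to call the copy – note that CREATE TABLE … SELECT copies the data and columns but not the indexes):

CREATE TABLE backupoflivetable AS SELECT * FROM livetable;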

The query below is a simple one that will copy all data from the backup table that is not in the live table back into the live table. This leaves any additional or modified data in the live table intact. The only pre-requisite is that there is a unique ID column (in the query below ‘ID’) to reference against.

INSERT INTO livetable SELECT * FROM backupoflivetable WHERE ID NOT IN (SELECT ID FROM livetable);

Mass resetting of file permissions

An oldie but a goodie – reset the file permissions on all folders and files throughout a directory / site (755 for folders and 644 for files).  Great for quickly securing a site full of 777 folders.

find /folder -type d -exec chmod 0755 {} \;
find /folder -type f -exec chmod 0644 {} \;

Updating AWS DNS records from the CLI

One of the most useful features of AWS is the ability to do pretty much everything from the provided CLI tools.  Even more usefully they are actually pretty easy to use!

For a number of reasons (including automating deployments, updating records based on dynamic IP addresses and creating internal hostnames for instance deployments) I wanted to be able to push updates to DNS zones hosted on AWS Route53, and I wanted to be able to script the process.  Below is an example of how to achieve these updates from the CLI (in this instance updating an existing host record).

Assumptions:  You have installed and configured the AWS CLI tools, and the credentials you are using have the permissions necessary to make updates.  If you need any pointers with this you can find AWS’s documentation here.

Step 1 – Get the hosted zone ID

When you push a DNS update to Route53 you need to pass in the ID of the hosted zone (a hosted zone normally being the domain name you wish to update).  This command will list all of the zones / domains currently hosted under your account:

aws route53 list-hosted-zones

returning an output along the lines of:

{
    "HostedZones": [
        {
            "ResourceRecordSetCount": 4,
            "CallerReference": "C510CAC3-D5D9-XXXX-B039-1DFA2XXXXXXX",
            "Config": {},
            "Id": "/hostedzone/Z1W9BXXXXXXXLB",
            "Name": "oliverhelm.me."
        }
    ],
    "IsTruncated": false,
    "MaxItems": "100"
}

“Id”: “/hostedzone/Z1W9BXXXXXXXLB” is the bit you’re looking for, with everything after ‘/hostedzone/’ being the ID (in this instance Z1W9BXXXXXXXLB).
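
If you’re scripting this step, the CLI’s --query flag (a JMESPath expression) can pull the ID out directly – for example, substituting your own domain:

aws route53 list-hosted-zones --query "HostedZones[?Name=='oliverhelm.me.'].Id" --output text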

Step 2 – Building the change file

The changes are requested by building out a JSON file which is then sent to AWS. The format of this file varies a little based on the type of record you wish to update (details of this can be found here). In this instance I’m updating the A record homerouter.oliverhelm.me with a new IP address.  Create a file (I’ve called it change-resource-record-sets.json) and insert the below.

{
    "Comment": "Update record to reflect new IP address of home router",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "homerouter.cunniffehelm.co.uk.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "4.4.4.4"
                    }
                ]
            }
        }
    ]
}

The action ‘UPSERT’ will update the record for homerouter.oliverhelm.me if it exists, and if not will create it.

Hint: Using http://jsonlint.com/ to check the format of your JSON file saves a fair bit of faffing around.

Step 3 – Pushing the update to AWS

The update is then pushed to AWS with the following command, where ‘change-resource-record-sets.json’ is the name of the JSON file you saved above and ‘--hosted-zone-id’ is the ID you found in step one:

aws route53 change-resource-record-sets --hosted-zone-id Z1W9BXXXXXXXLB --change-batch file:///root/change-resource-record-sets.json

A JSON response will be returned (making it easy to script the interaction) and should look something like the below.  In theory it might take a short while for the update to take effect, but in my experience it seems to be pretty much instant.  The status should be ‘PENDING’ upon submission and will change to ‘INSYNC’ after the change has been applied.  Don’t forget though that you may not see the change reflected in DNS queries until the TTL has been reached.

{
    "ChangeInfo": {
       "Status": "PENDING",
       "Comment": "Update home IP Address",
       "SubmittedAt": "2015-08-16T11:54:24.907Z",
       "Id": "/change/C2JAIG0XXXXXXX"
    }
}

You can check the status of any submitted updates with the command:

aws route53 get-change --id C2JAIG0XXXXXXX

where ‘--id’ is the ID returned after the submission.  Further details on this here.
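
Putting it all together, here’s a minimal sketch of the sort of script I use for the dynamic IP case.  It assumes the AWS CLI is configured; the zone ID, record name and the IP-lookup service (icanhazip.com here) are placeholders to swap for your own:

#!/bin/bash
# Placeholders - swap in your own zone ID and record name
ZONE_ID="Z1W9BXXXXXXXLB"
RECORD="homerouter.oliverhelm.me."

# Look up the current public IP (icanhazip.com simply echoes it back)
IP=$(curl -s http://icanhazip.com)

# Build the change batch
cat > /tmp/change-resource-record-sets.json <<EOF
{
    "Comment": "Scripted update of ${RECORD}",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "${RECORD}",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [ { "Value": "${IP}" } ]
            }
        }
    ]
}
EOF

# Push the change to Route53
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch file:///tmp/change-resource-record-sets.json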

Copyright © 2016 Oliver Helm
