When hosting websites on EC2 instances it’s pretty common to need to point multiple Elastic IPs at a single EC2 instance, normally to allow the use of multiple SSL certificates. This is pretty easy to do, but a little confusing at first if you’re not used to the sysadmin world.

It’s important to understand that each NIC (network interface) can only have a single Elastic IP address bound to it. Most instances are not launched with spare NICs attached, so you will have to create and attach an additional NIC to which you can associate (point) the additional Elastic IP.

Note: The number of NICs you can attach to an EC2 instance is limited by the size of the instance. For example, a micro instance can, at the time of writing, only support two NICs (therefore limiting you to two Elastic IPs). You can get around this by using a load balancer.

Creating & attaching an additional Network Interface

First log into your AWS account and pull up the EC2 Dashboard. From there select ‘Network Interfaces’ under the Network & Security tab on the left hand menu and click ‘Create Network Interface’ (the big blue button at the top). A pop up will appear and you can name the new interface something meaningful to you. Then add it to the subnet that the EC2 server is currently in. (If you’re not sure which subnet this is, you can find it by looking at the instance details on the ‘Instances’ page.) Once you have selected a subnet, the security groups available on that subnet will be listed. Select the groups to allow through the traffic you need (you can always add more / change this later if you need to). If you want to manually assign the private IP address you can do so at this stage, but I tend to leave it blank, which will auto assign an address for you…
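If you’d rather script this than click through the console, the same steps can be done with the AWS CLI. Below is a minimal sketch, assuming the CLI is already configured; the subnet, security group, interface, instance and Elastic IP allocation IDs are placeholders to replace with your own:

    # Create the new network interface in the instance's subnet
    aws ec2 create-network-interface \
        --subnet-id subnet-0123456789abcdef0 \
        --groups sg-0123456789abcdef0 \
        --description "Second interface for an additional Elastic IP"

    # Attach it to the instance as the second device (device-index 1)
    aws ec2 attach-network-interface \
        --network-interface-id eni-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 \
        --device-index 1

    # Associate the additional Elastic IP with the new interface
    aws ec2 associate-address \
        --allocation-id eipalloc-0123456789abcdef0 \
        --network-interface-id eni-0123456789abcdef0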
There’s a great (and free) command line tool set called s3cmd that makes it simple to push and pull files to and from AWS S3 buckets, and is easy to script with. We use it for cheap, reliable, offsite backups of media and database files. The tool set can be downloaded from the GitHub repo here. There’s a simple howto guide at the bottom. One slight bump we did run into is that some of the older versions struggle with larger file sizes, so make sure you get version 1.5.0-alpha3 at a minimum.

To install the tool simply download the repo onto your server / laptop, cd into the directory and run s3cmd --configure. You’ll need to have generated IAM credentials through the AWS control panel first. Once you’ve got it configured you can push files to a bucket with the following command:

    s3cmd put /local/file/path s3://bucket-name

We call a super simple (and fairly crude) bash script from cron every night that backs up all the DBs on a server and sends the backups to S3 (a sketch of that kind of script is shown at the end of this post). It’s also good for backing up crucial log files in environments where a dedicated syslog server isn’t really justifiable or is perhaps a little too pricey.

Side note – you can also use this to push data into S3 that is to be served through CloudFront, making scripting media into a CDN simple.
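The original backup script isn’t reproduced above, so here is a minimal sketch of that kind of nightly dump-and-push job. It assumes MySQL credentials are readable from ~/.my.cnf, that s3cmd is already configured, and that the bucket name and paths are placeholders to swap for your own:

    #!/bin/bash
    # Nightly backup sketch: dump every MySQL database and push the dumps to S3.

    BACKUP_DIR=/var/backups/mysql
    BUCKET=s3://your-backup-bucket/$(hostname)
    DATE=$(date +%F)

    mkdir -p "$BACKUP_DIR"

    # Dump each database (skipping the internal schemas) into its own gzipped file
    for DB in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema)$'); do
        mysqldump --single-transaction "$DB" | gzip > "$BACKUP_DIR/${DB}-${DATE}.sql.gz"
    done

    # Push tonight's dumps to the bucket, then tidy up local copies older than 7 days
    s3cmd put "$BACKUP_DIR"/*-"${DATE}".sql.gz "$BUCKET/"
    find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete

Called from a nightly cron entry (e.g. 30 2 * * * /path/to/db-backup.sh) it runs unattended; the path and schedule here are just examples.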
One of the most useful features of AWS is the ability to do pretty much everything from the provided CLI tools. Even more usefully, they are actually pretty easy to use! For a number of reasons (including automating deployments, updating records based on dynamic IP addresses and creating internal hostnames for instance deployments) I wanted to be able to push updates to DNS zones hosted on AWS Route53, and I wanted to be able to script the process. Below is an example of how to achieve these updates from the CLI (in this instance updating an existing host record).

Assumptions: You have installed and configured the AWS CLI tools, and the credentials you are using have the permissions necessary to make updates. If you need any pointers with this you can find AWS’s documentation here.

Step 1 – Get the hosted zone ID

When you push a DNS update to Route53 you need to pass in the ID of the hosted zone (a hosted zone normally being the domain name you wish to update). Listing all of the zones / domains currently hosted under your account (the command is included in the sketch at the end of this post) returns an output along the lines of:

    “Id”: “/hostedzone/Z1W9BXXXXXXXLB”

This “Id” line is the bit you’re looking for, with everything after ‘/hostedzone/’ being the ID (in this instance Z1W9BXXXXXXXLB).

Step 2 – Building the change file

The changes are requested by building out a JSON file which is then sent to AWS. The format of this file varies a little based on the type of record you wish to update (details of this can be found here). In this instance I’m updating the A record homerouter.oliverhelm.me with a new IP address. Create a file (I’ve called it: change-resource-record-sets.json) and insert the change batch JSON (a sketch of the file is also at the end of this post). The action ‘UPSERT’ will update the record for homerouter.cunniffehelm.co.uk if it exists and, if not, will create it.

Hint: Using http://jsonlint.com/ to check the format of your JSON file saves a fair bit of faffing around.

Step…
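The commands and change file referenced above aren’t reproduced in the text, so here is a minimal sketch of the whole flow. The zone ID and hostname are taken from the examples in the post, and the IP address 203.0.113.10 is just a placeholder:

    # Step 1 – list the hosted zones in the account to find the zone ID
    aws route53 list-hosted-zones

change-resource-record-sets.json, using UPSERT so the record is updated if it exists and created if it doesn’t:

    {
      "Comment": "Update the homerouter A record",
      "Changes": [
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "homerouter.oliverhelm.me",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [
              { "Value": "203.0.113.10" }
            ]
          }
        }
      ]
    }

The change is then pushed with the zone ID from step 1:

    aws route53 change-resource-record-sets \
        --hosted-zone-id Z1W9BXXXXXXXLB \
        --change-batch file://change-resource-record-sets.json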