There’s a great (and free) command line tool set called s3cmd that makes it simple to push and pull files to and from AWS S3 buckets, and it’s easy to script with. We use it for cheap, reliable, offsite backups of media and database files. The tool set can be downloaded from the GitHub repo here, and there’s a simple how-to guide at the bottom. One slight bump we did run into is that some of the older versions struggle with larger file sizes, so make sure you get version 1.5.0-alpha3 at a minimum.

To install the tool, simply download the repo onto your server or laptop, cd into the directory and run s3cmd --configure. You’ll need to have generated IAM credentials through the AWS console first. Once it’s configured, you can push files to a bucket with the following command: s3cmd put /local/file/path s3://bucket-name

The tool is also good for backing up crucial log files in environments where a dedicated syslog server isn’t really justifiable or is perhaps a little too pricey. Side note: you can also use it to push data into S3 that is to be served through CloudFront, making it simple to script media into a CDN.

Below is a super simple (and fairly crude) bash script we call from cron every night that backs up all databases on the server and sends the dumps to S3.
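A minimal sketch of what such a script can look like, assuming MySQL with credentials available via ~/.my.cnf, a hypothetical bucket name (s3://my-backup-bucket) and placeholder paths and retention, rather than a definitive implementation:

    #!/bin/bash
    # Nightly database backup to S3 - minimal sketch.
    # Assumes s3cmd has already been set up with "s3cmd --configure" and that
    # MySQL credentials are readable from ~/.my.cnf (both assumptions).

    BACKUP_DIR="/var/backups/mysql"          # local staging directory (placeholder)
    BUCKET="s3://my-backup-bucket/mysql"     # hypothetical bucket/prefix
    DATE=$(date +%F)

    mkdir -p "$BACKUP_DIR"

    # Dump every database (skipping the system schemas) into its own gzipped file
    for DB in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
        mysqldump --single-transaction "$DB" | gzip > "$BACKUP_DIR/${DB}-${DATE}.sql.gz"
    done

    # Push tonight's dumps to the bucket
    s3cmd put "$BACKUP_DIR"/*-"${DATE}".sql.gz "$BUCKET/"

    # Clear out local dumps older than seven days
    find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete

Drop a line like this into root’s crontab (adjusting the path) and the dumps land in the bucket every night without any further attention.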

As a web developer, from time to time you inevitably have to cleanse a database table of a few records, be they test data, corrupt data or whatever. Very occasionally (and rather unnervingly) this has to be done in a live environment on a busy table (sign-up data and the like). Standard practice is to take a backup of the table first. Inevitably, sometimes it doesn’t quite go to plan, and a few days later you find there’s some data you need back. The query below is a simple one that copies every record from the backup table that is not in the live table back into the live table, leaving any additional or modified data in the live table intact. The only prerequisite is that there is a unique ID column (in the query below, ‘ID’) to reference against.
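A sketch of the query, assuming hypothetical table names live_table and backup_table with identical structures (swap in your own names; the ID column is the unique key mentioned above):

    -- Copy rows that exist in the backup table but are missing from the live table
    INSERT INTO live_table
    SELECT b.*
    FROM backup_table AS b
    LEFT JOIN live_table AS l ON l.ID = b.ID
    WHERE l.ID IS NULL;

The LEFT JOIN with the IS NULL check picks out only the rows whose ID is present in the backup but absent from the live table, so rows that still exist (including any that have since been modified) are left untouched.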