Oliver Helm

LINX96 – Wireless ISPs – are they viable? stable? scalable?

I spoke today at the LINX96 meeting about wireless ISPs and whether they are really viable, stable and scalable (I think they are).  In particular, I talked about some of the issues that WISPs face, as well as the solutions they provide and how the mantra ‘the future is full fibre and 5G’ stacks up against this.

There are some key issues that WISPs face, not least users’ perceptions of the bandwidth they require compared with the actual bandwidth requirements seen on a network, and the viability of continuing to develop the UK’s XTTP network at low consumer pricing – an issue which is just as relevant to all ISPs building their own infrastructure.

Here’s a copy of my deck from today and another copy that’s perhaps a little more interesting with the notes included.

LINX96 – Oliver Helm, Sugarnet (with notes)

LINX96 – Oliver Helm, Sugarnet

Spectrum Licensing – The barrier to entry

Established by the Office of Communications Act 2002, Ofcom is, among other things, responsible for issuing, policing and, importantly, pricing spectrum space in the UK.  It is often accused of being inefficient and overly costly, with accusations of a “top heavy salary bill” and “extravagant offices” levied at it continually.  More importantly though, it can be argued that its inefficient and high-cost spectrum licensing is stifling competition, innovation and growth in the UK’s telecommunications market – something that, with the impending cloud of Brexit and an increasingly skills-based economy, is even more concerning.

Ofcom reports that, for the year 2014/15, spectrum management costs totalled just under £51.5 million, whilst fees totalled just over £267.9 million (Source: Ofcom: Spectrum management costs and fees 2014-15).  Removing the MoD’s contribution (as this is essentially just public money changing hands), Ofcom brought in a spectrum licensing surplus of a little over £61.5m.  Two points arise from this:

Firstly, it is hard to see how the costs of managing such a fundamentally intangible asset can be so high.  Spectrum is, after all, not a physical asset that has to be maintained.  Applications for frequency use require a comparatively binary decision as to whether or not the requested assignment is (or should be) available within an area.  Applying for a licence, however, is not at present a straightforward process.  Fixed wireless link licence details still largely have to be submitted in paper form (albeit through PDFs) and are manually processed by Ofcom’s spectrum licensing team.  This paper-pushing exercise is slow, ineffective and one-directional, often incurring a high opportunity cost to businesses.  A self-administered, truly digital solution with instant licence approval and payment would be an obvious fix.

Secondly, these surplus-generating licence fees are simply a tax on business, equivalent to the rates / “fibre tax” paid on traditional infrastructure.  As with any tax, this has two effects: in the first instance it pushes up the cost to the licensees’ end users, and in the second it reduces the capital companies have available to spend on infrastructure improvements and R&D.  A fraction of the £199.6 million annual licence fees (source: Ofcom: Annual licence fees mobile spectrum) set for the mobile spectrum in 2015 would go a long way to improving coverage and stability in a now vital communications channel – and that’s only the annual fee.  It is a regularly voiced concern of the incumbent mobile networks that they are forced to focus on recouping their investment in this extortionately priced band when they could be looking at pushing forward the next generation of service delivery – namely 5G.  For smaller companies, and in particular wireless ISPs, who are innovating in lower-cost delivery solutions, this tax adds substantial extra costs which, in a B2C environment, can be hard to justify.

It is therefore easy to argue that the high costs Ofcom places on spectrum licensing, combined with the bureaucratic costs associated with processing a licence, act to discourage effective competition and innovation in the market.  Access to the mobile operators’ spectrum space is cost-prohibitive to anyone other than big business, and for WISPs looking to stabilise their networks outside the self-coordinated licence bands the costs significantly hamper growth.  With the self-coordinated bands looking likely to become increasingly saturated (and, in the case of the 5GHz spectrum, telcos still being the secondary user) the problem looks likely to get worse.  If the UK really wishes to invest in the digital economy and to truly connect 100% of the country to “Superfast Britain”, removing or reducing these barriers would be a good place to start.

It can also be argued that Ofcom’s focus is too heavily split.  The numerous amendments, clarifications and additions to Ofcom’s responsibilities (the Communications Act 2003, the Wireless Telegraphy Act 2006, the Digital Economy Act 2010 and the Postal Services Act 2011) only serve to muddle the situation further.  Ofcom’s role now appears to be focused more on reporting and mediation than on strategic forethought and spectrum administration.  Is it any wonder that the costs of running the agency are so high?

Perhaps Ofcom is simply too large and dealing with too many areas.  A remit ranging from infrastructure reporting to arbitrating and monitoring digital broadcast media, and from handling domain name squabbles to oversight of the postal system, is bound to spawn a large and bureaucratic machine.  Would an entirely separate, smaller, lighter and above all more ‘digital’ agency not be more effective at both managing the current UK spectrum space and planning for its future?  David Cameron certainly thought so in July 2009 when he pledged to remove Ofcom’s policy-making functions and return them to the Department for Culture, Media and Sport.  Perhaps it’s time to reverse the 2001 decision to form this “super-regulator”.

The Superfast rollout – are supersized grants really the best option?

Fast broadband access has long been regarded as the fourth utility and one that is increasingly hard to live without.  For both professional and personal interactions we are increasingly dependent upon it, and with the government’s musings around creating a legal right to fast broadband it is only going to become a more ‘hot button’ and emotive topic.  But the approach pursued at present – offering supersized BDUK grants to fund the enhancement and expansion of largely existing infrastructure networks (and largely granted to that of the incumbent, BT) – is not the solution.

Underlying this debate is the fact that large-scale infrastructure upgrades are, and always will be, expensive.  BT themselves said it would not be commercially viable to improve their network in some areas without the intervention of government support.  However, by subsidising this through grants the true cost is masked and the free market, which should encourage and stimulate smaller, innovative firms to compete, is damaged.  Consumer broadband prices largely do not reflect the cost of creating this infrastructure, and the ‘bundle sale’ – whereby the cost of broadband is made negligible by grouping it in with other services such as calls, TV and line rental – acts only to muddle the position further.

Encouraging large-scale, nationwide infrastructure is not always cheaper or more efficient either.  Under the present scheme millions of pounds have been allocated to BT / OpenReach to roll out a high-speed network, yet the results have been variable and have by no means provided universal coverage in targeted areas.  Furthermore, access to this infrastructure for competing enterprises is woefully limited and bureaucratic, stifling competition – an issue key to the debate on BT / OpenReach’s joint or separated future.  Whilst it can be argued that these supersized infrastructure projects and matching grants are keeping prices down for consumers, it can equally be argued that competition would have a similar effect.  Whilst grants may be necessary to make connecting some areas commercially viable, reducing the size of these and allowing firms to bid for smaller parts of them is likely to encourage the emergence of competitive, innovative and lower-cost solutions.

Smaller networks are also often more efficient, more customer-focused and more in tune with the genuine needs and requirements of the communities they serve.  Customer service for these smaller and micro providers is key and, as the BT rollout continues to storm on, a primary area on which they compete.  Where there is an alternative, customers can, and do, demand a better all-round service.  Smaller providers are often far better and more agile at delivering this.

The drive to focus on ultrafast (100Mbps to 300Mbps) is also questionable.  Much is made of this offering, yet recent statistics show take-up of these services to be comparatively low (the official statistics can be found here http://media.ofcom.org.uk/facts/).  Ultrafast speeds may make infrastructure improvements look and sound impressive but, at present, are largely surplus to the requirements of the average consumer.  High reliability and low contention would be a better focus in the medium term.  Agile providers will often focus on this model, but in a market where consumer education is heavily focused on the ‘faster equals better’ message, competing with these high-budget marketing campaigns is a hard sell.

It is also easy to forget that a large number of people still lack even the basic usable broadband service (10 or 24Mbps) that the grants system was established to address.  The much-discussed ‘remaining five percent’ is a big number (over a million households).  With the trend towards more remote working and the apparent overcrowding of many of our cities, this problem is only going to become more apparent.  This five percent is also not only in the rural countryside – pockets of London and other major cities suffer from poor broadband speeds.  Supersized grants have so far failed to make a significant enough impact on these numbers.

Furthermore, the scale and administration of these grants makes it harder still for competition in the market to become more prominent.  The size and clout of the incumbent makes it the obvious choice for government and local councils to turn to in meeting their obligations to provide superfast connectivity, and deters them from assessing and trialling alternative methods of delivery.  Equally, these smaller firms may lack the resources to build a pitch of the size and quality of the large providers.  For farmers, the very definition of the rural 5%, the NFU has long supported satellite as the solution as it is both fast and low cost to install.  Perhaps, therefore, the answer lies not only in removing or further dividing the grants, but in putting more of the money directly into the hands of the consumer.  By doing so all providers would be forced to compete in a local market, helping to level the field.  Further rollout of the connection voucher scheme would quickly and efficiently encourage this.

Through the current grants provision the taxpayer is primarily funding the long-term network improvement of a privately owned company and, in doing so, is supporting the monopolisation of an increasingly key piece of national infrastructure.  Providing high-speed connectivity to all is an admirable aim, but it is not, through the current method, being delivered.

Adding swap space on an EC2 instance

When I first started using AWS EC2 instances to host small sites (where the database and web server were on the same box) I was often surprised by how often MySQL kept falling over – WordPress in particular would show the ‘Error establishing a database connection’ message frequently and the MySQL service would need a kick.  It was particularly odd given that almost identical boxes hosted on Rackspace were coping fine.  After a little rummaging around it became apparent that many EC2 instances don’t come with any swap space by default.

Swap space (or a swap file) is space on the hard drive that is used as an extension of the RAM assigned to the device.  Essentially, when the device runs out of physical RAM it can use the swap space assigned as an extension / overflow.  It’s much slower than physical RAM, so is only advisable as an overflow.

This means that when the memory reaches capacity there’s nowhere to go, and MySQL doesn’t like that.  The solution is simply to add swap space.  On the default AWS AMI (CentOS) you add swap space as follows (as root or sudo):

/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
/sbin/mkswap /var/swap.1
/sbin/swapon /var/swap.1

Where 1024 assigns 1GB of swap space.  Increase that as you please.

To make sure this is maintained on reboot, add the following line to your fstab (/etc/fstab):
/var/swap.1 swap swap defaults 0 0

You can check this has worked using the ‘free’ command, which should now list swap space below the physical memory:
free -m
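
The output should look something along these lines (the figures below are purely illustrative and will vary with your instance size and the swap you allocated):

             total       used       free     shared    buffers     cached
Mem:           996        812        184          0         24        310
-/+ buffers/cache:        478        518
Swap:         1023          0       1023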

Many thanks to all the Stack Overflow posters who jogged my memory on the syntax of this!

Compiling PHP with SYBASE support

Over at The Constant Media we’ve been working on a project that relies on using PHP to connect to a customer’s SYBASE database. Essentially they have a proprietary system in place that manages many key aspects of their business, and their website, as a key sales channel, needs to interact with it.

If you are pretty confident with Linux and don't want to work your way through this, there is a command list gist here.

SYBASE is not something we had come across before, and whilst the actual PHP code itself isn’t all that tricky to implement, getting SYBASE and its dependencies compiled into PHP can be a little fiddly. Below is a step-by-step of how we did this. It’s based around CentOS (our server OS of choice) but should be fairly transferable.

Before I go too far into this I should point out that much of the credit goes to @andrew_heron, my business partner at The Constant Media, who did much of the hard work and legwork on this project and the initial server builds. I really only got involved when we came to build the production and staging servers and made them a little more production-ready.

The first thing to point out is that to do this you need to compile PHP from the source code and cannot (to my knowledge) do this through yum. We are running this on a CentOS 6 box which was vanilla at the point of install. If you have PHP installed at the moment (through RPMs or yum) scrub it off before wading in.
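
If you’re not sure whether a packaged PHP is already present, something along these lines (run as root) will show and then remove it – exact package names may differ on your box:

$ rpm -qa | grep -i php
$ yum remove php php-*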

First, let’s update the box and install a few prerequisites. Because we are compiling from source we are going to need a few dev libraries too (devels). As root, or sudo if you prefer:

$ yum install gcc openssl-devel pcre-devel libxml2-devel httpd httpd-devel mod_ssl libcurl-devel libpng-devel
$ yum groupinstall "Development Tools"

Installing FreeTDS
SYBASE support has a dependency on FreeTDS, so we need to start by downloading, configuring and installing this. All source files are going to be stored in /usr/src throughout.

Note: Some of these versions may be out of date by the time you get to this, so you may need to adapt things a little to accommodate that.

Download and unpack FreeTDS:
$ cd /usr/src
$ wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-patched.tar.gz
$ tar -zxvf freetds-*
$ rm -rf freetds-*.tar.gz
$ cd freetds-*

Next we configure FreeTDS, specifying the path where we wish to install it. Once configured we can ‘make’ and install the package. Watch for errors at each stage and rectify any that occur before proceeding.

$ ./configure --prefix=/usr/local/freetds
$ make
$ make install

Installing APR & APR-Util
The next dependencies needed are APR and APR-Util.

APR
$ cd /usr/src
$ wget http://apache.mirror.anlx.net/apr/apr-1.5.2.tar.gz
$ tar -zxvf apr-*
$ rm -rf apr-*.tar.gz
$ cd apr-*

Again configure and install, making sure to fix any errors as you go:
$ ./configure
$ make
$ make install

APR-Util
$ cd /usr/src
$ wget http://apache.mirror.anlx.net//apr/apr-util-1.5.4.tar.gz
$ tar -zxvf apr-util-1.5.4.tar.gz
$ rm -rf apr-util-1.5.4.tar.gz
$ cd apr-util-1.5.4/

Configure and install.
$ ./configure --with-apr=/usr/local/apr
$ make
$ make install

If you have any other dependencies outside the norm then now’s the time to go install them.

Downloading and installing PHP
We’ve chosen to run with PHP 5.6 as we feel it will be secure and stable enough in our environment. If you need an earlier or later version the principles should be the same – just substitute it into the wget below.

$ cd /usr/src
$ wget http://uk1.php.net/get/php-5.6.8.tar.gz
$ mv mirror php-5.6.8.tar.gz   # only needed if wget saved the download under a mirror filename
$ tar -zxvf php-5.6.8.tar.gz
$ rm -rf php-5.6.8.tar.gz
$ cd php-5.6.8

The next command is the most important, and most problematic, stage of the install. We have chosen some pretty standard includes (php-mysql, php-mysqli, php-mbstring, php-pdo, php-gd and php-curl) but if you need more just add them in.

For the SYBASE extension to function you need to specify the path to your FreeTDS install. This is the path you specified above in the --prefix flag. As the Apache install is pretty standard we’ve chosen to install this via yum (to keep things simple). For this to work though you need to make sure that httpd-devel is installed (the httpd dev library installed above) and know the path to apxs, which comes as part of that library. By default apxs is located at /usr/sbin/apxs, but if by chance it’s not there you should be able to find it with ‘find / -name apxs’. If that doesn’t return the path then you probably haven’t installed httpd-devel correctly.

The last parameter we’ve added is --with-config-file-path. This sets the directory your php.ini file will live in – we’ve specified this as /etc (so PHP will look for /etc/php.ini) as it’s fairly standard and thus makes it easier for others to find.

Once you have adjusted this command to your needs keep a record of it somewhere safe as you will need to re-run it should you ever need to add more modules or upgrade PHP.

$ ./configure --with-sybase_ct=/usr/local/freetds --with-apxs2=/usr/sbin/apxs --with-mysql --with-mysqli --enable-mbstring --with-pdo-mysql --with-openssl --with-curl --with-gd --with-config-file-path=/etc

Assuming no errors were returned by the command above, you can now make and install PHP:
$ make
$ make test
$ make install

If you need a template php.ini you can normally find one in /usr/src/php-yourversion/php.ini-development. Failing that, just grab one off the web.
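
For example, to copy the development template into place (adjust the version directory to match the source you unpacked):

$ cp /usr/src/php-5.6.8/php.ini-development /etc/php.ini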

PHP should now be installed and working. Running php -v will confirm this for you and you can check the version installed matches your expectations.

Linking PHP & Apache
Lastly, you’ll want to let Apache know about your PHP install. To do this just add the following to /etc/httpd/conf/httpd.conf:

#
#   Enable PHP
#
AddType  application/x-httpd-php         .php

It’s also worth adding index.php to the list of accepted index files.
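
On a stock httpd.conf that means adjusting the DirectoryIndex directive to something like:

DirectoryIndex index.php index.html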

Restart httpd and you should be good to go. Place a phpinfo() file on your server, scan down the list and you should see sybase listed.

$ service httpd restart
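
If you want a quick test page, something like the below will do – this assumes Apache’s default document root of /var/www/html, so adjust if yours differs:

$ echo '<?php phpinfo(); ?>' > /var/www/html/info.php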

Adding additional Elastic IP’s to a single EC2 instance

When hosting websites on EC2 instances it’s pretty common to need to point multiple Elastic IPs at a single EC2 instance, normally to allow the use of multiple SSL certificates.  This is pretty easy to do, but a little confusing at first if you’re not used to the sysadmin world.

It’s important to understand that each NIC (network interface) can only have a single Elastic IP address bound to it.  Most instances are not launched with spare NICs attached, and as such you will have to create and attach an additional NIC to which you can associate (point) the additional Elastic IP.

Note: The number of NICs you can attach to an EC2 instance is limited by the size of the instance.  For example, a micro instance can, at the point of writing, only support two NICs (therefore limiting you to two Elastic IPs).  You can get around this by using a load balancer.

Creating & attaching an additional Network Interface

  1. First log into your AWS account and pull up the EC2 Dashboard.
  2. From there select ‘Network Interfaces’ under the Network & Security tab on the left hand menu and click ‘Create Network Interface’ (the big blue button at the top).
  3. A pop up will appear and you can name the new interface something meaningful to you.  Then add it to the subnet that the EC2 server is currently in.  (If you’re not sure which subnet this is you can find it by looking at the instance details on the ‘Instances’ page).
  4. Once you have selected a subnet the security groups available on that subnet will be listed.  Select the groups to allow through the traffic you need (you can always add more / change this later if you need to).
  5. If you want to manually assign the private IP address you can do so at this stage, but I tend to leave it blank, which will auto-assign an address for you out of the VPC’s range.

Once you have created the NIC, pull up its details from the list and make a note of the Primary Private IP and the Network Interface ID. The primary private IP is the address your EC2 instance will see the traffic arriving on.  If you need to set up SSL certificates, for example, it is this private IP that you will listen on / specify in the config file, not the Elastic IP address.

Next you need to attach this new NIC to your EC2 instance. To do this, select it from the list and choose the ‘Attach’ button at the top of the page. Select the instance you want to attach the NIC to from the dropdown list and click ‘Attach’. At this point the NIC will be attached to the instance and be ready to receive / send traffic.

Associating an Elastic IP

You can now head over to the Elastic IPs section (on the left nav). If you have a spare IP listed you can use this, or alternatively you can click ‘Allocate New Address’ to create an additional one.  Select the Elastic IP from the list and, using the Network Interface ID you noted earlier (when you created the NIC), find the interface in the network interface field and hit ‘Associate’.

You’re done! The Elastic IP will now pass traffic to your instance, and the instance will identify this traffic as arriving on the private IP you noted earlier.

 

Note: Whilst the number of NICs you can use is limited by the instance's size, Elastic IPs are limited per account (by default to five). This limit can be increased by raising a support ticket, so long as you can justify the need.
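
As a side note, the same steps can be scripted with the AWS CLI.  The below is only a rough sketch – the subnet, security group, interface and instance IDs are placeholders to swap for your own:

aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --groups sg-xxxxxxxx --description "Second IP for web server"
aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --network-interface-id eni-xxxxxxxx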

Backing up to Amazon S3

There’s a great (and free) command line toolset called s3cmd that makes it simple to push and pull files to AWS S3 buckets, and is easy to script with.  We use it for cheap, reliable, offsite backups of media and database files.

The toolset can be downloaded from the GitHub repo here.  There’s a simple how-to guide at the bottom.  One slight bump we did run into is that some of the older versions struggle with larger file sizes, so make sure you get version 1.5.0-alpha3 at a minimum.

To install the tool simply download the repo onto your server / laptop and cd into the directory to run s3cmd --configure. You’ll need to have generated IAM credentials through the AWS control panel first.  Once you’ve got it configured you can push files to a bucket with the following command:
s3cmd put /local/file/path s3://bucket-name

Below is a super simple (and fairly crude) bash script we call from cron every night that backs up all databases on a server and sends the backups to S3:

#!/bin/bash
echo "------- Starting " $(date) " -------"
# Clear out the previous dump(s) before creating today's
rm -rf /backups/*.out
cd /backups/
# Dump every database (replace 'password' with your MySQL root password)
mysqldump --all-databases -uroot -ppassword > $(date +%m%d%Y).out
# Push today's dump up to the S3 bucket
cd /root/s3cmd-1.5.0-alpha3
./s3cmd put /backups/$(date +%m%d%Y).out s3://backups
echo "------- Finished " $(date) " -------"

It’s also good for backing up crucial log files in environments where a dedicated syslog server isn’t really justifiable or is perhaps a little too pricey.

Side note – you can also use this to push data into S3 that is to be served through CloudFront, making scripting media into a CDN simple.
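
For example, s3cmd’s sync command will push only new or changed files from a local media directory up to a bucket (the paths and bucket name here are illustrative):

s3cmd sync /var/www/html/media/ s3://bucket-name/media/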

Recovering missing data from a duplicate MySQL table

As a web developer, from time to time you inevitably have to clean a database table of a few records – be they test data, corrupt data or whatever. Very occasionally (and rather unnervingly) this has to be done in a live environment on a busy table (sign-up data etc.). Standard practice is to take a backup of the table first. Inevitably it sometimes doesn’t quite go to plan, and a few days later you find there’s some data you need back.
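
For reference, a quick table-level copy (using the same example table names as the recovery query below) can be taken with something like:

CREATE TABLE backupoflivetable LIKE livetable;
INSERT INTO backupoflivetable SELECT * FROM livetable;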

The query below is a simple one that will copy all data from the backup table that is not in the live table back into the live table. This leaves any additional or modified data in the live table intact. The only prerequisite is that there is a unique ID column (in the query below, ‘ID’) to reference against.

INSERT INTO livetable SELECT * FROM backupoflivetable WHERE ID NOT IN(SELECT ID FROM livetable)

Mass resetting of file permissions

An oldie but a goodie – reset the file permissions on all folders and files throughout a directory / site (755 for folders and 644 for files).  Great for quickly securing a site full of 777 folders.

find /folder -type d -exec chmod 0755 {} \;
find /folder -type f -exec chmod 0644 {} \;

Updating AWS DNS records from the CLI

One of the most useful features of AWS is the ability to do pretty much everything from the provided CLI tools.  Even more usefully they are actually pretty easy to use!

For a number of reasons (including automating deployments, updating records based on dynamic IP addresses and creating internal hostnames for instance deployments) I wanted to be able to push updates to DNS zones hosted on AWS Route53, and I wanted to be able to script the process.  Below is an example of how to achieve these updates from the CLI (in this instance updating an existing host record).

Assumptions: You have installed and configured the AWS CLI tools, and the credentials you are using have the permissions necessary to make updates.  If you need any pointers with this you can find AWS’s documentation here.

Step 1 – Get the hosted zone ID

When you push a DNS update to Route53 you need to pass in the ID of the hosted zone (a hosted zone normally being the domain name you wish to update).  This command will list all of the zones / domains currently hosted under your account:

aws route53 list-hosted-zones

returning an output along the lines of:

{
    "HostedZones": [
        {
            "ResourceRecordSetCount": 4,
            "CallerReference": "C510CAC3-D5D9-XXXX-B039-1DFA2XXXXXXX",
            "Config": {},
            "Id": "/hostedzone/Z1W9BXXXXXXXLB",
            "Name": "oliverhelm.me."
        }
    ],
    "IsTruncated": false,
    "MaxItems": "100"
}

“Id”: “/hostedzone/Z1W9BXXXXXXXLB” is the bit you’re looking for, with everything after ‘/hostedzone/’ being the ID (in this instance Z1W9BXXXXXXXLB).

Step 2 – Building the change file

The changes are requested by building out a JSON file which is then sent to AWS. The format of this file varies a little based on the type of record you wish to update (details of this can be found here). In this instance I’m updating the A record homerouter.oliverhelm.me with a new IP address.  Create a file (I’ve called it change-resource-record-sets.json) and insert the below.

{
    "Comment": "Update record to reflect new IP address of home router",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "homerouter.cunniffehelm.co.uk.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "4.4.4.4"
                    }
                ]
            }
        }
    ]
}

The action ‘UPSERT’ will update the record for homerouter.oliverhelm.me if it exists and, if not, will create it.

Hint: Using http://jsonlint.com/ to check the format of your JSON file saves a fair bit of faffing around.

Step 3 – Pushing the update to AWS

The update is then pushed to AWS with the following command, where ‘change-resource-record-sets.json’ is the name of the JSON file you saved above and ‘--hosted-zone-id’ is the ID you found in step one:

aws route53 change-resource-record-sets --hosted-zone-id Z1W9BXXXXXXXLB --change-batch file:///root/change-resource-record-sets.json

A JSON response will be returned (making it easy to script the interaction) and should look something like the below.  In theory it might take a short while for the update to take effect, but in my experience it seems to be pretty much instant.  The status should be ‘PENDING’ upon submission and will change to ‘INSYNC’ after the change has been applied.  Don’t forget though that you may not see the change reflected in DNS queries until the TTL has expired.

{
    "ChangeInfo": {
       "Status": "PENDING",
       "Comment": "Update home IP Address",
       "SubmittedAt": "2015-08-16T11:54:24.907Z",
       "Id": "/change/C2JAIG0XXXXXXX"
    }
}

You can check the status of any submitted updates with the command:

aws route53 get-change --id C2JAIG0XXXXXXX

where ‘--id’ is the ID returned after the submission.  Further details on this here.
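
To tie it all together, here’s a rough sketch of how the whole process can be scripted – in this case refreshing the home router A record with the machine’s current public IP.  The zone ID and record name are the placeholders from above, and the IP lookup uses AWS’s checkip service (swap in whatever you prefer):

#!/bin/bash
# Sketch: upsert a Route53 A record with the current public IP
ZONE_ID="Z1W9BXXXXXXXLB"                      # hosted zone ID from step 1
RECORD="homerouter.oliverhelm.me."            # record to update / create
IP=$(curl -s http://checkip.amazonaws.com)    # current public IP

# Build the change batch (same format as step 2)
cat > /tmp/change-resource-record-sets.json <<EOF
{
    "Comment": "Scripted update of ${RECORD}",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "${RECORD}",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [ { "Value": "${IP}" } ]
            }
        }
    ]
}
EOF

# Push the change (step 3)
aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch file:///tmp/change-resource-record-sets.json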

Copyright © 2018 Oliver Helm
