An actively adopted wholesale market, where providers have clear standards and methodologies to conform to, will deliver 5G networks faster, at a lower cost, and in ways that enable competition.


5G networks are rolling out.  They offer unparalleled opportunities to enhance the connectivity we already rely upon (more so in the wake of COVID-19 than ever before) and to open new avenues in healthcare, distance working, AgriTech and a myriad of as-yet-unconsidered opportunities. Reports from 2017 suggest that just a 10% increase in mobile broadband penetration could increase UK GDP by 0.6% – 2.8%, and a report from O2 suggests that 5G infrastructure will contribute £7 billion a year in direct measures alone.  The upside potential for UK plc is vast, but there are significant barriers to the deployment of 5G networks outside the super-urban areas, and these make the business case harder.

Underlying these problems is the high frequency required for 5G throughput (at least in the mobile, rather than Fixed Wireless Access, deployments most people understand 5G to be).  These high frequencies have short reach and poor penetration through man-made objects like buildings, so the density of cell and micro-cell sites will be far greater than anything we’ve seen before.  They will be dependent on ‘deep’ fibre connectivity.

To deliver on coverage promises and objectives, 5G deployment has five primary hurdles to overcome:

  • Connectivity between the millions of new cell sites needed;
  • Ultra-high-resolution mapping data and sophisticated planning tools;
  • Gaining secured access to sites (wayleaves);
  • Spectrum; and
  • Deployment capital.

All of the above are capital intensive.  Outside of the super-urban areas, where population and demand density (alongside spectrum challenges) skew the economics, the cost of every provider building its own 5G network will be unnecessarily high, and in the more rural areas prohibitively so.  Not only will this delay delivery to the areas that could benefit from the technology the most, but the extreme capital demands will eliminate the opportunity for further competition in the market.  Network-sharing agreements among the current MNOs will help, but they will lack the agility needed to deliver this new type of network.

Technology, however, is not a barrier.  5G sites are being actively deployed and tested with vigour across the world and solutions are readily available to the market now.

With the exception of the mapping and planning solutions (where a myriad of impressive options are entering the marketplace), fibre network providers are well placed to address these problems.  With significant capital being committed in the mainstream (Openreach and Liberty Global) and Altnet fibre space, and a real need to increase shorter-term demand for fibre products to keep driving build, fibre network builders have both the appetite and the ability to fuel faster 5G deployment.

Breaking the integrated convention

The truly integrated MNO convention is already dropping away as capacity demand outstrips the networks’ own supply capabilities.  The underlying connectivity mobile networks rely on is already largely provided by the wholesale market, as the recent Virgin Media / Three UK announcement highlights.

Already we are seeing neutral-host-capable networks emerging in trials with the support of DCMS’s 5G Testbeds and Trials Programme.  But the build quality of these needs to improve for the MNOs to be able to rely on them.  Coverage and service levels are key in this market.  The reliance on FWA must drop and the ability to scale these networks to a higher coverage level must grow.  But it is a good base from which to develop, and one that is within the reach of the more established fibre network builders.

Enabling competition

Competition in the broadband market is high (at least at the service provider level) because the barriers to entry are low.  New ‘local’ networks with geographical boundaries pop up every month.  There are a plethora of points to interconnect and even more middle-men to help do so.  But in the mobile market, this isn’t the case.

There are a few ‘virtual’ networks (giffgaff, Tesco Mobile and Virgin Media) but these require significant volume and large capital commitments to get off the ground.  By building carrier-neutral networks, and allowing smaller operators to run on them, the marketplace will open up.  These smaller providers will be agile, hungry to steal market share and innovative in their models.  They will be better placed to enable the edge computing that ultra-low-latency connectivity allows.

This does not need to be a single shared network (Australia’s poorly performing state broadband network is testament to why that option might not work).  An actively adopted wholesale market, where providers have clear standards and methodologies to conform to, will deliver 5G networks faster, at a lower cost and in a way that enables competition.

By facilitating these smaller neutral-host networks, and reducing the reliance on network sharing by the established players, we can prevent the slow and underfunded deployments we’ve seen in 4G and even 2G networks to date.  There will still be a need for such deployments, likely with state funding, in the very rural areas, but the need will be smaller and the competition will cause the MNOs to sharpen their game.

Where is the regulator?

To deliver on any of this we need a strong regulator.  Ofcom needs to become the driving force behind these concepts.  It needs to stop focusing on a small selection of high-level metrics and engage with the larger market to drive long-term change (to the benefit of the consumer).  It needs to help the market agree and define standards, SLAs and access agreements that the current MNOs can get behind.

A market of four is hardly competitive when compared to the fixed broadband space.  The four need to be compelled to deliver faster or adopt these methods.  Spectrum licensing needs to be revised and must move away from the draconian, revenue-raising approach currently adopted.  The upside impact for UK plc of doing so would be enormous.

Fear and apathy are the key System 1 responses that we need to overcome. We have to de-risk the process for consumers and, in particular, overcome the preconceptions that System 1 decisions are built on.


Whether a conscious decision or not, marketing tends to focus heavily on System 1 – trying to make people subconsciously elect to engage with and buy/consume a product either (almost) instantaneously or at a friction point.  As such, marketing teams focus on continued exposure to the brand and, too often in this market, on price above all else.  But broadband truly is a transformative product.  It has a potentially enormous, positive effect on people’s lives.  Both at work and at play it can make communication, collaboration and interaction with our surroundings effortless, seamless and instantaneous.  The kicker for full fibre broadband is that most people don’t need it right now.

System 1

Consumers don’t tend to think about broadband very often, and when they do it tends to be because they have encountered a friction point and are thus not in a positive mindset.  They think through fast heuristic connections that don’t involve checks for accuracy and are often closer to intuition.  It is a mental-set trap – a state of readiness to see certain things rather than consider others.

Typically, when it comes to considering making a change (and, linked to that, the option of changing provider – the goal we are trying to achieve), the thought process follows the same question: “Is it really worth the effort of changing anything?”.  The answer tends to follow one of three paths:

  • “No, I’ve tried changing in the past and it makes no difference”;
  • “My provider says they are ‘Superfast’ and shows a bigger number for download speeds”; and, the biggest and most common barrier,
  • “Changing is too risky and complicated”.

And inevitably that leads to a preference for staying with the status quo.

Points one and two above overlap, and it is only by engaging System 2 that we will have the opportunity to educate the consumer and facilitate customer onboarding, driving past the 30% take-up sticking point.  The challenge in doing so lies in the fact that System 2 thinking requires a lot more cognitive effort and energy.  It is less impulsive, more cautious and involves ‘reasoning the solution out’.

What affects me now?

As humans, our shortest and most embedded neural path is always the question “What affects me now?”.  This runs everything from the base level of providing for our survival needs to the most complex decisions about our future, and our brain assigns a significant weighting to this question. Therein lies the problem – for most people at the moment, superfast (DSL) is frankly enough – for right now.  The status quo of < 80Mbps might not be sufficient for long, but the intrinsically human preference to think about the now, not the future, leads consumers to resist (de-prioritise) change in the short term in favour of energy (and time) conservation.  Realistically, as fibre builders (excluding the rural specialists and a few small urban pockets) we are largely trying to sell for future needs, but the fear, knowledge gap and perceived risks of changing broadband provider now almost always outweigh the longer-term pros in System 1 thinking.

Fear, Apathy and Problem-Solving Traps

Fear and apathy are the key System 1 responses that we need to overcome.  We have to de-risk the process for consumers and, in particular, overcome the preconceptions that System 1’s decisions are built on – those historically built (“Last time I changed something it took months to get it working again”; “They’ll have to dig up my drive and will make a mess”; “computers / IT are always complicated and never work”; “I’ll have to take time off work and they’ll probably miss the appointment like last time” – the list goes on) and those built through social influence.

There is also a tendency for consumers to fall into problem-solving ‘traps’ when meeting a friction point.  Consumers have a mental set about what the problem is likely to be, or how to solve it, and think less laterally about the problem: “I need to turn it off and on again”, or “My provider must have a problem, I’ll call them”, rather than “Let’s look for a completely new / better solution”.  These responses too are often based on historical preconceptions, and consumers solve such problems automatically with System 1 thinking.  This is normal – it is automatic and involves no (or very little) cognitive effort on their part.

There is a lot of “never have to think about your broadband again” style marketing out there, but this misses the point.  Consumers are not really ‘thinking’ about broadband at the moment (at least not in a cognitive sense), and it is our job to prompt them to do so.

Leveraging Social Influence

The role of social influence on preconceptions provides both the greatest opportunity and the greatest risk to our marketing teams.  Humans are hugely susceptible to social influence but are also risk-averse by default.  We are more prone to sharing negative experiences with brands than positive ones, and how others perceive us is paramount to our social self-placement.  But because of this, we will also treat recommendations from those within (or similar to) our social networks as highly credible and less in need of validation.

It is for this reason that reviews on Trustpilot that are perceived as unbiased are so valuable.  Word of mouth is the holy grail of all marketing strategies but does not naturally lend itself to scaling at speed, precisely because it requires consumers to engage System 2.

The risk here is that infrastructure is hard – guaranteed timelines are not always available and it is easy to end up in a lose-lose position with the consumer.  If you push delivery out, it is perceived as too much hassle to consider.  If you bring timelines in and fail to deliver (even when due to reasonable events such as blockages), the consumer automatically feels they have wasted time and energy.

The Solution

The solution to all three of the System 1 responses above, and to driving engagement without waiting for friction points, is to interrupt System 1 and engage System 2.  By doing so we open up the opportunity to educate consumers about the benefits full fibre broadband offers and to offset the effort (and time) the consumer will have to allocate against the advantages and future time savings.

To be allowed this opportunity, and to persuade the consumer to spend their energy on our offering, we have to provide both a hook and a tangible, short-term incentive to do so.

The temptation here is to use price as the lever, allowing another System 1 choice (“They are cheaper so I’ll go with them”), but in so doing we fuel a longer-term race-to-the-bottom that is tricky (and costly) to reverse.  Instead, it is better to move away from heavy reliance on traditional marketing methods – catchy lines or images that, if considered at all, make a poor attempt to engage an emotion to elicit an instant response or build up a generic brand image.  Brand image is vital, as is the persistence of association/identity, but it is only half of the equation.

The marketing focus for network and service providers, therefore, needs to shift to engaging System 2 and educating consumers.  We must identify the key concerns and preconceptions of demographic segments and explain both how and why fibre can improve their lives.

It is only through disengaging System 1 and engaging System 2 that we can overcome the preconceptions and mental set that the consumer has, and relay the truly transformative advantages that fibre optic broadband can offer.  It will shift consumers from a heuristic ‘what affects me now’ mindset to a reasoned decision about the best long-term option.  It is the key to breaking down the fear barrier, moving past the 15% of early adopters and the 15% of ‘switchers’ to gain higher penetration in the shorter term, fuelling further growth, unlocking and de-risking capital and ultimately, as stakeholders, retaining greater control (ownership) of our business.

Whilst it is a commonly accepted economic principle that in a competitive market prices will drift towards marginal cost, the propensity of the UK consumer not to switch has driven this cost down – and that will change.

The UK is not a ‘switching’ market. On average only around 15% of consumers will switch broadband provider year on year, and it is this security that allows both the service providers (ISPs) and the infrastructure builders to drive prices down. Service providers rely on the fact that a customer will likely be with them for a period significantly longer than the initial contract term, and the risk of losing money on a customer within their contract term is mitigated by this low switch rate. So whilst it is a commonly accepted economic principle that in a competitive market prices will drift towards marginal cost, the propensity of the UK consumer not to switch has driven this cost down – and that will change.

The price the consumer pays is made up of a basket of items: the cost of acquiring the customer, the tech needed to deliver the service (and increasingly more of that is needed at the customer’s premises as ubiquitous WiFi coverage is expected), estimated support and operating costs for the customer’s lifetime, and both the wholesale buy price and the technology to integrate with it. All of these areas look likely to be hit. In particular, the more competitive the marketplace, the higher the cost of acquisition will be, as providers have to compete harder to stand out and differentiate themselves on something other than just price.

On the infrastructure layer, competition may have an even greater inflationary effect, which will be reflected in wholesale prices. Whilst interest rates are low it is possible for companies, and the investors bankrolling them, to accept a slower return and a lower IRR based on lower weighted costs of capital, but this only holds true when relatively safe and high levels of take-up can be expected over the medium term. If the government elects not to extend the fibre rates relief (or, preferably, remove the fibre tax altogether) this problem could be exacerbated, particularly in rural areas where small cluster network segments are likely to become economically unviable (granted, there may be little to no infrastructure competition in these areas). The market is already seeing scarcity of resource (the labour force) inflating build costs, and whilst this effect can be partly mitigated through the innovative deployment techniques competition-driven R&D is creating (micro-trenching, micro-ducts etc.), there is only so far this can go – particularly if changes to street work rules are not forthcoming.

It seems inevitable that, even with a monopoly across much of the country, switching levels will grow. The European Electronic Communications Code will significantly advance this, and as each generation becomes more tech-native, the fear of technological change that currently keeps switching levels low will die away. The increasing proliferation of mobile-first services will de-risk switching further in the consumer’s mind.

In ultra-dense urban areas (Ofcom’s Area A), high levels of competition at the infrastructure layer might be sustainable at current price points, but as you move into Area B, and away from perpetual growth muddying the waters, viability remains to be seen. Whilst the FTIR has many admirable aims, perhaps the competition focus is not such a good one for the consumer.

Inefficient and high-cost spectrum licensing is stifling competition, innovation and growth in the UK's telecommunications market


Established by the Office of Communications Act 2002, Ofcom is, among other things, responsible for issuing, policing and, importantly, pricing spectrum space in the UK.  It is often accused of being inefficient and overly costly, with accusations of a “top heavy salary bill” and “extravagant offices” being levied at it continuously.  More importantly though, it can be argued that its inefficient and high-cost spectrum licensing is stifling competition, innovation and growth in the UK’s telecommunications market – something that, with the impending cloud of Brexit and an increasingly skills-based economy, is an even more concerning situation.

Ofcom reports that, for the year 2014/15, total spectrum management costs came to just under £51.5 million, whilst fees totalled just over £267.9 million (Source: Ofcom: Spectrum management costs and fees 2014-15).  Removing the MoD’s contribution from this (as this is essentially just public money moving hands), Ofcom brought in a spectrum licensing surplus of a little over £61.5m.  Two issues arise from this:

Firstly, it is hard to see how the costs of managing such a fundamentally intangible asset can be so high.  Spectrum is, after all, not a physical asset that has to be maintained.  Applications for frequency use require a comparatively binary decision as to whether or not the requested assignment is (or should be) available within an area.  Applying for a licence at the moment, however, is not a straightforward process.  Fixed wireless link licence details still largely have to be submitted in paper form (albeit through PDFs) and are manually processed by Ofcom’s spectrum licensing team.  This paper-pushing exercise is slow, ineffective and one-directional, often incurring a high opportunity-cost loss to businesses.  A self-administered, truly digital solution with instant licence approval and payment would be an obvious fix for this problem.

Secondly, these surplus-generating licence fees are simply a tax on business, equivalent to the rates / “fibre tax” paid on traditional infrastructure.  As with any tax, this has two effects: in the first instance it pushes up the cost to licensees’ end users, and in the second it reduces the capital that companies have available to spend on infrastructure improvements and R&D.  A fraction of the £199.6 million annual licence fees (source: Ofcom: Annual licence fees mobile spectrum) set for the mobile spectrum in 2015 would go a long way to improving coverage and stability in a now vital communications channel.  And that’s only the start: it is a regularly voiced concern of the embedded mobile networks that they are forced to focus on returning their investment in this extortionately priced band when they could be looking at pushing forward the next generation of service delivery – namely 5G.  For the smaller companies, and in particular wireless ISPs who are innovating in lower-cost delivery solutions, this tax adds substantial extra costs, which in a B2C environment can be hard to justify.

It is therefore easy to argue that the high costs Ofcom places on spectrum licensing, combined with the bureaucratic costs associated with processing a licence, act to discourage effective competition and innovation in the market.  Access to the mobile operators’ spectrum space is cost-prohibitive to anyone other than big business, and for WISPs looking to stabilise outside the self-coordinated licence bands, the costs significantly hamper growth.  With the self-coordinated bands looking likely to become increasingly saturated (and, in the case of the 5GHz spectrum, telcos still being the secondary user), the problem looks likely to increase further.  If the UK really wishes to invest in the digital economy and to truly connect 100% of the country to “Superfast Britain”, removing or reducing these barriers would be a good way to go.

It can also be argued that Ofcom’s focus is too heavily split.  The numerous amendments, clarifications and additions to Ofcom’s responsibilities (the Communications Act 2003, the Wireless Telegraphy Act 2006, the Digital Economy Act 2010 and the Postal Services Act 2011) only serve to muddle the situation further.  Ofcom’s role now appears to be more focused on reporting and mediation than on strategic forethought and spectrum administration.  Is it any wonder that the costs of running the agency are so high?

Perhaps Ofcom is simply too large and dealing with too many areas. A remit ranging from infrastructure reporting to arbitrating and monitoring digital broadcast media, and from handling domain name squabbles to oversight of the postal system, is bound to spawn a large and bureaucratic machine.  Would an entirely separate, smaller, lighter and above all more ‘digital’ agency not be more effective at both managing the current UK spectrum space and planning for its future?  David Cameron certainly thought so in July 2009 when he pledged to remove Ofcom’s policy-making functions and return them to the Department for Culture, Media and Sport. Perhaps it’s time to reverse the 2001 decision to form this “super-regulator”.

Encouraging large-scale, nationwide infrastructure is not always cheaper or more efficient either.  Under the present scheme millions of pounds have been allocated to BT / Openreach to roll out a high-speed network, yet the results have been variable and have by no means provided universal coverage in targeted areas.


Fast broadband access has long been regarded as the fourth utility, and one that is increasingly hard to live without.  For both professional and personal interactions we are increasingly dependent upon it, and with the government’s musings around creating a legal right to fast broadband it is only going to become more of a ‘hot button’ and emotive topic.  But the solution pursued at present – offering supersized BDUK grants to fund the enhancement and expansion of largely existing infrastructure networks (and largely granted to those of the incumbent, BT) – is not the answer.

Underlying this debate is the fact that large-scale infrastructure upgrades are, and always will be, expensive.  BT themselves said it would not be commercially viable to improve their network in some areas without the intervention of government support.  However, by subsidising this through grants, the true cost is masked and the free market, which should encourage and stimulate smaller, innovative firms to compete, is damaged.  Consumer broadband prices largely do not reflect the cost of creating this infrastructure, and the ‘bundle sale’, whereby the cost of broadband is made negligible by grouping it in with other services such as calls, TV and line rental, acts only to muddle the position further.

Encouraging large-scale, nationwide infrastructure is not always cheaper or more efficient either.  Under the present scheme millions of pounds have been allocated to BT / Openreach to roll out a high-speed network, yet the results have been variable and have by no means provided universal coverage in targeted areas.  Furthermore, access to this infrastructure for competing enterprises is woefully limited and bureaucratic, stifling competition – a key issue in the debate on BT / Openreach’s joint or separated future.  Whilst it can be argued that these supersized infrastructure projects and matching grants are keeping prices down for consumers, it can equally be argued that competition would have a similar effect.  Whilst grants may be necessary to make connecting some areas commercially viable, reducing their size and allowing firms to bid for smaller parts of them is likely to encourage the emergence of competitive, innovative and lower-cost solutions.

Smaller networks are also often more efficient, more customer-focused and more in tune with the genuine needs and requirements of the communities they serve.  Customer service for these smaller and micro providers is key and, as the BT rollout continues to storm on, a primary area on which they compete.  Where there is an alternative, customers can, and do, demand a better all-round service.  Smaller providers are often far better and more agile at delivering this.

The drive to focus on ultrafast (100Mbps to 300Mbps) is also questionable.  Much is made of this offering, yet recent statistics show take-up of these services to be comparatively low (the official statistics can be found at http://media.ofcom.org.uk/facts/).  Ultrafast speeds may make infrastructure improvements look and sound impressive but, at present, are largely surplus to the requirements of the average consumer.  High reliability and low contention would be a better focus in the medium term.  Agile providers will often focus on this model, but in a market where consumer education is heavily focused on ‘faster equals better’, competing with these high-budget marketing campaigns is a hard sell.

It is also easy to forget that a large number of people still lack even the basic usable broadband service (10 or 24Mbps) that the grants system was established to deliver.  The much-discussed ‘remaining five percent’ is a big number (over a million households).  With the trend towards more remote working and the apparent overcrowding of many of our cities, this problem is only going to become more apparent.  This five percent is also not confined to the rural countryside – pockets of London and other major cities suffer from poor broadband speeds.  Supersized grants have so far failed to make a significant enough impact on these numbers.

Furthermore, the scale and administration of these grants makes it harder still for competition in the market to become more prominent.  The size and clout of the incumbent makes it the obvious choice for government and local councils to turn to in order to meet their obligations to provide superfast connectivity, and deters them from assessing and trialling alternative delivery methods.  Equally, smaller firms may lack the resources to build a pitch of the size and quality of the large providers.  For farmers, the very definition of the rural 5%, the NFU has long supported satellite as the solution, as it is both fast and low-cost to install.  Perhaps, therefore, the answer lies not only in removing or further dividing the grants, but in putting more of the money directly into the hands of the consumer.  By doing so, all providers would be forced to compete in a local market, helping to level the field.  Further rollout of the connection voucher scheme would quickly and efficiently encourage this.

Through the current grants provision, the taxpayer is primarily funding the long-term network improvement of a privately owned company, and in doing so is supporting the monopolisation of an increasingly key piece of national infrastructure.  Providing high-speed connectivity to all is an admirable idea but is not, through the current method, being delivered.

When I first started using AWS EC2 instances to host small sites (where the database and web server were on the same box) I was often surprised by how often MySQL kept falling over – WordPress in particular would frequently show the ‘Error establishing a database connection’ message and the MySQL service would need a kick.  It was particularly odd given that almost identical boxes hosted on Rackspace were coping fine.  After a little rummaging around it became apparent that many EC2 instances don’t come with any swap space by default.

Swap space (or a swap file) is space on the hard drive that is used as an extension of the RAM assigned to the device.  Essentially, when the device runs out of physical RAM it can use the swap space assigned as an extension / overflow.  It’s much slower than physical RAM, so is only advisable as an overflow.

This means that when memory reaches capacity there’s nowhere to go, and MySQL doesn’t like that.  The solution is simply to add swap space.  On the default AWS AMI (CentOS) you add swap space as follows (as root, or via sudo):

# create a 1GB file (1024 x 1MB blocks of zeros) to use as swap
/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
# format the file as swap, then enable it
/sbin/mkswap /var/swap.1
/sbin/swapon /var/swap.1

Where 1024 assigns 1GB of swap space.  Increase that as you please.

To make sure the swap space is maintained on reboot too, add the following line to your fstab (/etc/fstab):

/var/swap.1 swap swap defaults 0 0

You can check this has worked using the ‘free’ command, which should now list swap space below the physical memory:

 free -m 
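
With the swap file enabled, the output will look something like this (the numbers below are purely illustrative, for a box with ~1GB of RAM and the 1GB swap file):

             total       used       free     shared    buffers     cached
Mem:           994        601        393          0         40        212
-/+ buffers/cache:        349        645
Swap:         1023          0       1023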

Many thanks to all the Stack Overflow posters who jogged my memory on the syntax of this!

Over at The Constant Media we’ve been working on a project that relies on using PHP to connect to a customer’s SYBASE database. Essentially they have a proprietary system in place that manages many key aspects of their business, and their website, as a key sales channel, needs to interact with it.
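
For context, once everything below is in place, talking to SYBASE from PHP is just a case of using the sybase_ct functions – a minimal sketch, in which the host, credentials, database and table names are all placeholders:

<?php
// connect through the sybase_ct extension (backed by FreeTDS)
$link = sybase_connect('SYBASE_HOST', 'username', 'password')
    or die('Could not connect to SYBASE');
sybase_select_db('customer_db', $link);

// run a query and walk the result set
$result = sybase_query('SELECT id, name FROM products', $link);
while ($row = sybase_fetch_assoc($result)) {
    echo $row['id'] . ': ' . $row['name'] . "\n";
}
sybase_close($link);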

If you are pretty confident with Linux and don't want to work your way through this there is a command list gist here.

SYBASE is not something we had come across before, and whilst the actual PHP code itself isn’t all that tricky to implement, getting SYBASE and its dependencies compiled into PHP can be a little fiddly. Below is a step-by-step of how we did this. It’s based around CentOS (our server OS of choice) but should be fairly transferable.

Before I go too far into this I should point out that much of the credit due goes to @andrew_heron, my business partner at The Constant Media, who did much of the hard work and legwork on this project and the initial server builds. I really only got involved when we came to build the production and staging servers and made the setup a little more production-ready.

The first thing to point out is that to do this you need to compile PHP from source and cannot (to my knowledge) do it through yum. We are running this on a CentOS 6 box which was vanilla at the point of install. If you have PHP installed at the moment (through RPMs or yum), scrub it off before wading in.

First, let's update the box and install a few prerequisites. Because we are compiling from source we are going to need a few dev libraries too (devels). As root (or sudo if you prefer):

 $ yum install gcc openssl-devel pcre-devel libxml2-devel httpd httpd-devel mod_ssl libcurl-devel libpng-devel
$ yum groupinstall "Development Tools"

Installing FreeTDS
SYBASE has a dependency on FreeTDS, so we need to start by downloading, configuring and installing this. All source files are going to be stored in /usr/src throughout.

Note: Some of these versions may be out of date by the time you get to this, so you may need to adapt things a little to accommodate that.

Download and unpack FreeTDS:

 $ cd /usr/src
$ wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-patched.tar.gz
$ tar -zxvf freetds-*
$ rm -rf freetds-*.tar.gz
$ cd freetds-*

Next we configure FreeTDS, specifying the path where we wish to install it. Once configured we can ‘make’ and install the package. Watch for errors at each stage and rectify any that occur before proceeding.

 $ ./configure --prefix=/usr/local/freetds
$ make
$ make install
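
Optionally, you can sanity-check the FreeTDS install at this point with the tsql utility it ships with, which prints its compile-time settings:

 $ /usr/local/freetds/bin/tsql -C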

Installing APR & APR-Util
The next dependencies needed are APR and APR-Util.

APR

 $ cd /usr/src
$ wget http://apache.mirror.anlx.net/apr/apr-1.5.2.tar.gz
$ tar -zxvf apr-*
$ rm -rf apr-*.tar.gz
$ cd apr-*

Again configure and install, making sure to fix any errors as you go:

 $ ./configure
$ make
$ make install

APR-Util

 $ cd /usr/src
$ wget http://apache.mirror.anlx.net//apr/apr-util-1.5.4.tar.gz
$ tar -zxvf apr-util-1.5.4.tar.gz
$ rm -rf apr-util-1.5.4.tar.gz
$ cd apr-util-1.5.4/

Configure and install.

 $ ./configure --with-apr=/usr/local/apr
$ make
$ make install

If you have any other dependencies outside the norm then now’s the time to go install them.

Downloading and installing PHP
We’ve chosen to run with PHP 5.6 as we feel it will be secure and stable enough in our environment. If you need an earlier or later version the principles should be the same, just insert it into the wget below.

 $ cd /usr/src
$ wget http://uk1.php.net/get/php-5.6.8.tar.gz
# if the mirror redirect saved the download as a file named 'mirror', rename it
$ mv mirror php-5.6.8.tar.gz
$ tar -zxvf php-5.6.8.tar.gz
$ rm -rf php-5.6.8.tar.gz
$ cd php-5.6.8

The next command is the most important, and most problematic, stage of the install. We have chosen some pretty standard includes (php-mysql, php-mysqli, php-mbstring, php-pdo, php-gd and php-curl) but if you need more just add them in.

For the SYBASE extension to function you need to specify the path to your FreeTDS install. This is the path you specified above in the --prefix flag. As the Apache install is pretty standard we've chosen to install it via yum (to keep things simple). For this to work, though, you need to make sure that httpd-devel is installed (the httpd dev library installed above) and know the path to apxs, which comes as part of that library. By default apxs is located at /usr/sbin/apxs, but if by chance it's not there you should be able to find it with 'find / -name apxs'. If that doesn't return the path then you probably haven't installed httpd-devel correctly.

The last parameter we've added is --with-config-file-path. This is where your php.ini file will live – we've specified this as /etc/php.ini as it's fairly standard and thus makes it easier for others to find.

Once you have adjusted this command to your needs keep a record of it somewhere safe as you will need to re-run it should you ever need to add more modules or upgrade PHP.

 $ ./configure --with-sybase_ct=/usr/local/freetds --with-apxs2=/usr/sbin/apxs --with-mysql --with-mysqli --enable-mbstring --with-pdo-mysql --with-openssl --with-curl --with-gd --with-config-file-path=/etc/php.ini

Assuming no errors were returned by the command above you can now make and install PHP:

 $ make
$ make test
$ make install

If you need a template php.ini you can normally find one in /usr/src/php-yourversion/php.ini-development. Failing that, just grab one off the web.

PHP should now be installed and working. Running php -v will confirm this for you and you can check the version installed matches your expectations.
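
You can also double-check that the SYBASE extension was compiled in by grepping the module list:

 $ php -m | grep -i sybase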

Linking PHP & Apache
Lastly, you’ll want to let Apache know about your PHP install. To do this just add the following to /etc/httpd/conf/httpd.conf:

 #
 #   Enable PHP
 #
 AddType application/x-httpd-php .php

It’s also worth adding index.php to the list of accepted index files.
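
That means appending index.php to the DirectoryIndex directive in the same file, something like:

 DirectoryIndex index.php index.html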

Restart httpd and you should be good to go. Place a phpinfo() file on your server, scan down the list and you should see sybase listed.

 $ service httpd restart
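
A phpinfo() file is just a one-liner – for example, save the following as info.php (the filename is only an example) in your document root:

 <?php phpinfo(); ?>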

When hosting websites on EC2 instances it’s pretty common to need to point multiple Elastic IPs to a single EC2 instance, normally to allow the use of multiple SSL certificates.  This is pretty easy to do, but a little confusing at first if you’re not used to the sysadmin world.

It’s important to understand that each NIC (network interface) can only have a single Elastic IP address bound to it.  Most instances are not launched with spare NICs attached, so you will have to create and attach an additional NIC to which you can associate (point) the additional Elastic IP.

Note: The number of NICs you can attach to an EC2 instance is limited by the size of the instance.  For example, a micro instance can, at the point of writing, only support two NICs (therefore limiting you to two Elastic IPs).  You can get around this by using a load balancer.

Creating & attaching an additional Network Interface

  1. First log into your AWS account and pull up the EC2 Dashboard.
  2. From there select ‘Network Interfaces’ under Network & Security tab on the left hand menu and click ‘Create Network Interface’ (the big blue button at the top).
  3. A pop-up will appear and you can name the new interface something meaningful to you.  Then add it to the subnet that the EC2 server is currently in.  (If you’re not sure which subnet this is you can find it by looking at the instance details on the ‘Instances’ page.)
  4. Once you have selected a subnet, the security groups available on that subnet will be listed.  Select the groups to allow through the traffic you need (you can always add more / change this later if you need to).
  5. If you want to manually assign the private IP address you can do so at this stage, but I tend to leave it blank, which will auto-assign an address for you out of the VPC’s range.

Once you have created the NIC, pull up its details from the list and make a note of the Primary Private IP and the Network Interface ID. The primary private IP is where your EC2 instance will see the traffic as originating from.  If you need to set up SSL certificates, for example, it is this private IP that you will listen for / specify in the config file, not the Elastic IP address.
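
For example, in an Apache SSL virtual host you would bind to that private IP – a rough sketch, with the IP, hostname and certificate paths all placeholders:

 <VirtualHost 10.0.0.25:443>
     ServerName example.com
     SSLEngine on
     SSLCertificateFile /etc/pki/tls/certs/example.com.crt
     SSLCertificateKeyFile /etc/pki/tls/private/example.com.key
 </VirtualHost>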

Next you need to attach this new NIC to your EC2 instance. To do this, select it from the list and choose the ‘Attach’ button at the top of the page. Select the instance you want to attach the NIC to from the dropdown list and click ‘Attach’. At this point the NIC will be attached to the instance and be ready to send / receive traffic.

Associating an Elastic IP

You can now head over to the Elastic IPs section (on the left nav). If you have a spare IP listed you can use this, or alternatively you can click ‘Allocate New Address’ to create an additional one.  Select the Elastic IP from the list and, using the Network Interface ID you noted earlier (when you created the NIC), find the interface in the network interface field and hit ‘Associate’.

You’re done! The Elastic IP will now pass traffic to your instance, and the instance will identify this traffic as coming from the private IP you noted earlier.
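
If you prefer to script this, the same create / attach / associate steps can be done with the AWS CLI – all of the IDs below are placeholders for your own:

 $ aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --groups sg-xxxxxxxx --description "second NIC"
 $ aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1
 $ aws ec2 allocate-address --domain vpc
 $ aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --network-interface-id eni-xxxxxxxx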


Note: Whilst the number of NICs you can use is limited by the instance's size, Elastic IPs are limited per account (by default to five). This limit can be increased by raising a support ticket, so long as you can justify the need.

There’s a great (and free) command-line tool set called s3cmd that makes it simple to push and pull files to AWS S3 buckets, and it is easy to script with.  We use it for cheap, reliable, offsite backups of media and database files.

The tool set can be downloaded from the GitHub repo here.  There’s a simple how-to guide at the bottom.  One slight bump we did run into is that some of the older versions struggle with larger file sizes, so make sure you get version 1.5.0-alpha3 at a minimum.

To install the tool simply download the repo onto your server / laptop and cd into the directory to run s3cmd --configure. You’ll need to have generated IAM credentials through the AWS control panel first.  Once you’ve got it configured you can push files to a bucket with the following command:
s3cmd put /local/file/path s3://bucket-name
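
Pulling a file back down again is the mirror image:
s3cmd get s3://bucket-name/remote-file /local/file/path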

Below is a super simple (and fairly crude) bash script we call from cron every night; it backs up all the databases on a server and sends the backups to S3:

#!/bin/bash
echo "------- Starting " $(date) " -------"
# clear out the previous local dump(s)
rm -rf /backups/*.out
cd /backups/
# dump every database to a date-stamped file (swap in your own root password)
mysqldump --all-databases -uroot -ppassword > $(date +%m%d%Y).out
# push the dump to the S3 bucket
cd /root/s3cmd-1.5.0-alpha3
./s3cmd put /backups/$(date +%m%d%Y).out s3://backups
echo "------- Finished " $(date) " -------"

It’s also good for backing up crucial log files in environments where a dedicated syslog server isn’t really justifiable, or is perhaps a little too pricey.

Side note – you can also use this to push data into S3 that is to be served through CloudFront, making scripting media into a CDN simple.

As a web developer, from time to time you inevitably have to cleanse a database table of a few records – be they test data, corrupt data or whatever. Very occasionally (and rather unnervingly) this has to be done in a live environment on a busy table (sign-up data etc.). Standard practice is to take a backup of the table first. Inevitably it sometimes doesn’t quite go to plan, and a few days later you find there’s some data you need back.
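
For MySQL, taking that backup copy is itself a two-liner (the table names here simply mirror the example below):

CREATE TABLE backupoflivetable LIKE livetable;
INSERT INTO backupoflivetable SELECT * FROM livetable;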

The query below is a simple one that will copy all data from the backup table that is not in the live table back into the live table. This leaves any additional or modified data in the live table intact. The only prerequisite is that there is a unique ID column (in the query below, ‘ID’) to reference against.

INSERT INTO livetable SELECT * FROM backupoflivetable WHERE ID NOT IN (SELECT ID FROM livetable);