A brief history of our digital platform

Over the last 5 years, Trinity Mirror has moved from an in-house CMS to a third-party solution, and from a traditional hosting provider to cloud hosting. This introduction gives an overview of the technology challenges we faced, how our web platform technology has evolved and the different set of challenges we face now.

In-house with traditional hosting

Prior to 2011, nearly all our news websites were powered by a bespoke, in-house CMS solution written in ColdFusion. The CMS had thousands of lines of legacy code — we’d previously used it for classified advertising as well as content solutions — and it was becoming more and more difficult to maintain and develop.

We also had two distinct systems: one which delivered our nationals — such as the Mirror and the Daily Record — and one for our regionals — including the Birmingham Mail and the Liverpool Echo. We had two small, separate development teams, one working on each. The CMS needed significant investment in the user interface, and we had to decide whether to allocate our limited development resource to that or to change CMS altogether.

The CMS and websites were all hosted in the traditional manner — through a hosting provider using dedicated servers, network switches, routers and firewalls which we purchased. To add servers to the estate required raising capex paperwork which would take some time to be signed off. An order would be raised and there would be lead times on delivery of the kit.

Typically, the above process would take 3 months from start to finish. If we needed to purchase a cabinet to host the hardware, it took even longer. It just wasn’t agile and it was extremely difficult to react to performance issues that required hardware solutions. In addition, we hosted in a single data centre and we’d suffered from a number of outages which were impacting on our reputation and, more importantly, revenues.

Changing our CMS and hosting solution

In late 2010, we developed a business case to replace the existing CMS and to change our hosting strategy to move to cloud hosting. Building the business case to replace the CMS was not easy. There were no obvious cost savings or revenue benefits, but there was real appetite from our editorial teams to move away from the existing CMS and that helped enormously. It was presented as a strategic proposal and we received the backing to go ahead.

We had already identified the CMS solution that we wanted — Escenic. We had launched a single site — Mirror Football — on Escenic in 2009. This was effectively a pilot for the CMS and for a separate football site and helped to inform our future strategy for both.

We spent a great deal of time and effort assessing open source, open source/commercial and commercial CMSs in the marketplace. In the end, we chose an off-the-shelf commercial product. Escenic was a proven product in the publisher market, and we were the first customer to launch a site on the latest version (version 5).

The project to roll out the new CMS started in 2011, following sign-off of the business case. The business case also included a redesign of the sites.

Amazon Web Services (AWS) was selected as our preferred hosting provider. We considered a few alternatives, but nothing could compare to the self-serve functionality and range of services AWS provided at that time. Another big attraction was being able to use MySQL RDS across multiple zones so we also had data centre resilience. As well as moving our hosting to AWS, we moved our DNS management to Route 53. Previously this was managed by our hosting provider, but now it is all managed in-house using the Route 53 tools.

We were already using Akamai as our Content Delivery Network provider and we’ve retained its services to date. We did consider CloudFront, but at the time it didn’t provide some of the services we required, such as mobile handset detection.

To de-risk some of the technology challenges with a new CMS and a new hosting environment, we migrated a smaller website with celebrity content, 3am.co.uk, to Escenic first. This launched successfully towards the end of 2011 and allowed us to identify some performance issues with search (Solr), which we resolved by increasing the spec and number of servers dealing with the Solr requests. AWS also suffered a data centre outage in Dublin during this period. It was the only time we have experienced this with AWS, but our site remained up as it seamlessly failed over to the other Dublin data centre. It was a good (unplanned) test of our setup.

We launched the new Mirror website on Escenic and at AWS in February 2012. This included folding the Mirror Football site back into the main Mirror site. As with many launches, it didn’t go quite as smoothly as we hoped. Due to a performance issue with our implementation of Solr, we experienced broken pages and opted to back out. However — and this was a testament to the skills in our team and to the AWS solutions we now had in our armoury — we were able to successfully launch later that day by adding extra caching servers in front of Solr and also extra presentation servers, something we would not have been able to do with our previous hosting arrangement.
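The idea of putting caching in front of Solr can be sketched very simply. This is an illustrative Node.js snippet, not our actual implementation — in practice we added dedicated caching servers, but the principle of absorbing repeated queries before they reach Solr is the same:

```javascript
// Minimal in-memory cache in front of a search backend (e.g. Solr).
// Repeated queries within the TTL are answered from the cache and
// never reach the backend — the effect the extra caching tier had.
const cache = new Map();
const TTL_MS = 60 * 1000; // keep entries for one minute (illustrative value)

// `backendSearch` stands in for a real Solr query function.
async function cachedSearch(query, backendSearch) {
  const hit = cache.get(query);
  if (hit && Date.now() - hit.time < TTL_MS) {
    return hit.results; // served from cache, no Solr request made
  }
  const results = await backendSearch(query);
  cache.set(query, { results, time: Date.now() });
  return results;
}
```

A burst of identical queries during a traffic spike then costs one Solr request rather than thousands, which is why extra caching capacity recovered the launch.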

Rollout, development and growth

The Mirror launch validated our hosting choice and we spent much of 2012 and 2013 rolling out Escenic for around 35 sites. This included consolidating some of our smaller sites into larger sites that covered a wider geographical area and therefore benefitted from a larger audience. Prior to the rollout we had more than 70 sites.

We didn’t just focus on rollouts. During this period, we also built a mobile version of the site, developed and delivered commercial products and started work on a Match Centre for better football coverage.

Match Centre exposed a particular weakness of Escenic: it is not easy to ingest third-party content, such as live football updates, and match it against our own reporters’ live updates. We were also feeding our apps through RSS feeds, which was less than ideal — we were often breaking our mobile apps through work undertaken on our websites.

In late 2013 we identified the need for a solution that consumes APIs from various sources and provides APIs to our clients. We employed a solutions architect to help us with the build and chose to develop this using Node.js. We named this layer Pulveriser, because it takes data from various sources and ‘pulverises’ it together to provide meaningful data for our web and mobile products.

During 2014 and 2015, we in-sourced our mobile app development and at the end of last year we completed the launch of our new mobile apps on iOS and Android.

Where are we today?

We’re a very different team to the one that started this journey in 2009. Back then, we were part of the Trinity Mirror Group IT team and numbered around 20, including developers, testers, app support and sys admins. Now, we’re an engineering team of just over 70 staff, including a near-shore engagement with Endava.

The Engineering team is no longer part of the Group IT team and has the same reporting line as the Digital Product team. We are much more closely aligned to the Product team and are effectively working together as one team to deliver our web and mobile products. We’ve also moved from delivering products using a project-based approach to dedicated cross-functional product delivery teams.

From a technology perspective, we continue to use Escenic, AWS and Akamai and to develop our API layer using Node.js. We have a shared single code base across all our sites, although we continue to have a separate Nationals and Regionals system. This is mainly due to some limitations on the number of sites within a single instance of Escenic.

We have a vision to move towards a microservices approach for all our products. We’re exploring ways of splitting our Escenic CMS up into individual services and have had some success with this, with the recent launch of WowBrum, a new food and entertainment site for Birmingham. We believe we’re the first Escenic customer to deliver Escenic as a service and also the first customer to autoscale Escenic.

To further our services ambitions, we’re also exploring event sourcing using Kinesis, AWS’s streaming service. If we are successful, we will effectively be ‘CMS agnostic’, able to completely separate our front end from the CMS back end.
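To illustrate the shape of this idea — the stream name and payload fields below are hypothetical, and this is exploration, not production code — the CMS would publish content-change events to a Kinesis stream, and the front end would consume the stream instead of talking to the CMS directly:

```javascript
// Hypothetical sketch: build the parameters for a Kinesis PutRecord call
// describing a content change. Consumers of the stream rebuild their own
// view of the content, decoupling the front end from the CMS.
function buildContentEvent(streamName, article) {
  return {
    StreamName: streamName,
    PartitionKey: String(article.id), // keeps events for one article in order
    Data: JSON.stringify({
      type: 'article-updated',
      id: article.id,
      headline: article.headline,
      updatedAt: article.updatedAt,
    }),
  };
}
// With the AWS SDK, this object would be passed to kinesis.putRecord(...).
```

Because consumers only see events, swapping the CMS behind the stream would not require any front-end changes — which is what ‘CMS agnostic’ means here.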

More updates soon…