What were you doing 20 years ago? I know exactly what I was doing on the 9th of October 1999. I was turning up to my first job; having graduated from university earlier in the year, that job was here at the Telegraph. I’d had a 10-minute phone interview the day before and had agreed to come in for a day to write some HTML for the Telegraph website. I was excited to work for a household name in my first real job. It hardly seems possible that I am still here 20 years later.
So yes, I’ve worked at the same company for 20 years. It sounds incredible in this rapidly changing digital world that somebody could stay at the same company for that long. The truth is that the Telegraph has been good to me; I’ve had the opportunity to grow and progress into different roles during my time here, so I’ve always felt challenged and interested. I’ve also worked with many of the different departments that make up the business and have enjoyed the diversity of the work. The media industry itself is also fascinating; there really isn’t ever a dull moment.
Spending this length of time at a large publishing company has meant that I have seen some massive changes in the industry, particularly in the technology used to produce websites, but I’ve also witnessed how the focus of a media business has changed. What follows is a kind of potted history charting the changes in technology, in my role, and in the business priorities of the Telegraph website.
1994–1999: Hand coded
Before my time
The “Electronic Telegraph”, as it used to be known, was born at the end of 1994 and was the first UK newspaper website. From what I gleaned from colleagues just after I started, it was produced by a few diligent people hand-coding HTML. It was essentially a single page with a few links off to the top stories of the day, and there was very little use of images. This obviously meant the skillset for publishing the pages was entirely technical and manual, and therefore prone to error.
I don’t believe the website was taken particularly seriously by the company at the time. After all, the newspaper itself had been in print for about 150 years and was where all the business focus was. A few technically-minded people saw that the web was something new that should be explored, and so they made and updated the first pages. I imagine even they would have been surprised at how far we’ve come in the past 25 years.
When I joined
I joined the Telegraph shortly before the fifth anniversary of the website. This was back when Apple made quirky see-through desktop computers for designers, people used “Ask Jeeves” to search the internet instead of Google, you could only buy books from Amazon, and there was no such thing as a social network, let alone Facebook, Twitter or Snapchat.
One of my first tasks was to get to grips with the software used to generate the website. It went by the unlikely name of Zonk and had been built by one of the production editors, a clever chap called Tim Brown. It largely consisted of Word macros, some scripts and his own templating format. Editors would write their content into a Word doc and press some macro buttons in the toolbar to mark up the different fields of the article: the headline, the byline, the body text and so on. Once complete, one final macro button (with a smiley face icon) would merge the contents of the Word doc with an HTML template, inserting the article fields in the correct places. The template itself was selected based on the filename of the Word doc (a football story would have a filename starting with sf — sport, football, for example). There were additional scripts to pull together the section pages, with lists of the articles published that day for each section of the website.
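The mechanics of that merge step can be sketched in a few lines. This is purely illustrative: the real Zonk was built from Word macros and scripts, and the template markup and field names here are invented.

```python
# Illustrative sketch of Zonk-style publishing: pick an HTML template from
# the filename prefix, then merge the article fields into it.
# Templates, prefixes and field names are invented for illustration.

TEMPLATES = {
    # "sf" = sport, football, as in the original filename convention.
    "sf": "<html><body><h1>{headline}</h1>"
          "<p class=\"byline\">{byline}</p>{body}</body></html>",
}

def render_article(filename: str, fields: dict) -> str:
    """Select a template by filename prefix and fill in the article fields."""
    template = TEMPLATES[filename[:2]]
    return template.format(**fields)

html = render_article("sf-match-report.doc", {
    "headline": "Late winner settles derby",
    "byline": "By Our Football Correspondent",
    "body": "<p>Match report text...</p>",
})
```

The key property, fragile as it was, is the same one every later CMS formalised: content fields kept separate from presentation until the final merge.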
Once the HTML files were generated, editors would often make a few tweaks by hand (with varying degrees of success), to get them looking exactly as they wanted. When the whole edition was ready the production journalists would FTP the files up to the server and that was that.
Publishing was still very much focused on getting most of the articles from the newspaper online overnight. When big news stories occurred during the day there would be an editor to write them up and get them on the homepage. The bulk of the content came directly from the paper, taken from the old green-and-black Atex display terminals. Not much was written just for the website, and I’m pretty sure nothing written for the web would ever make it into the newspaper.
Although the templating was limited and quite brittle it was a step up from hand coding HTML. This system did at least allow the journalists a way to build their articles and see what they were going to look like on the web page without relying on someone else.
This was still not really what you would call Content Management. The content was essentially stored only in Word and HTML format, in a file system with little organisation and a very basic structure.
2001–2007: Teamsite & Dynamo
My first big project
Shortly after I joined it became clear that we needed a more structured way to store our content and a better way to deliver a dynamic website. So during 2001, with the help of PricewaterhouseCoopers, we started implementing a real Content Management System (CMS). This took the form of Interwoven TeamSite, a browser-based file management tool in which we managed our content as XML files. We didn’t expect, or want, editors to have to edit XML by hand, so we plugged in a piece of software called XMetaL that provided a not-quite-WYSIWYG editing interface. Editors could click preview to see how the web page would look while editing.
So now we had the content and an editor; we needed a way to deliver a dynamic website. For this we used ATG Dynamo, a Java-based application server, which used a combination of JSP and XSLT to turn the XML content into HTML web pages and deliver them. I had many frustrating days battling with some very complex XSLTs back then, but one day specifically sticks in my mind. On that horrific day of September 11th 2001, I remember being phoned (on my decidedly un-smart Nokia) in the evening by the website editor and asked to come into work, as I was the only one capable of coding up a new layout for the homepage to display the shocking photos of the collapsing World Trade Center towers. The web was starting to be the first place to look for breaking news, so trusted newspaper brands, along with the BBC, were the sites to go to for the first news of those terrible events. Being first for breaking news was, and still is, a key part of online journalism.
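The real transformation step was XSLT running inside Dynamo; the same idea can be sketched with Python’s standard-library XML parser. The `<article>` element names below are invented for illustration, not the actual Telegraph schema.

```python
import xml.etree.ElementTree as ET

# Hedged sketch of the XML-to-HTML step: structured article fields in XML,
# transformed into an HTML fragment for the page. The real pipeline used
# XSLT stylesheets; the element names here are made up.

article_xml = """
<article>
  <headline>Towers collapse in New York</headline>
  <byline>Telegraph staff</byline>
  <body>Breaking news copy...</body>
</article>
"""

def transform(xml_text: str) -> str:
    """Turn an article XML document into an HTML fragment."""
    root = ET.fromstring(xml_text)
    return "<h1>{}</h1>\n<p class=\"byline\">{}</p>\n<p>{}</p>".format(
        root.findtext("headline"),
        root.findtext("byline"),
        root.findtext("body"),
    )

html = transform(article_xml)
```

The appeal of the XML-plus-transform approach was that a layout change meant editing stylesheets, not touching the stored content.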
There were a number of redesigns of the website during this time as you can see from the screenshots here.
Lead Developer/Development Manager
By 2007 the ecosystem of add-ons that had been built around TeamSite and ATG Dynamo had reached the point where it was difficult to maintain and clumsy to use. The Telegraph had also recently moved offices from Canary Wharf to a new, more open-plan office in Victoria. At the same time we had embraced the ethos of a single newsroom, with journalists needing to be able to publish both digitally and in print. It became clear we needed a more up-to-date solution to deliver the website.
A Norwegian company pitched their newspaper-specific CMS, Escenic, to us; it was used widely throughout Scandinavia (and still is). Around the same time Escenic was starting to be picked up by other UK publishers, so it was chosen as the new Telegraph web platform. The product was an all-in-one CMS and site-delivery platform, again Java/JSP-based, with content stored in a relational database. The editing interface was a Java thick client, started from a browser window, that allowed editors to create new articles and populate their fields by entering text, ticking option boxes or applying classifications.
I was given the lead developer role for the implementation of this new platform, so I set about learning its capabilities. I quickly discovered that, like most customisable systems, its success depends largely on the implementation. It relied heavily on the implementer creating a good data model for content types (e.g. articles, galleries) and a good set of templates for displaying those types. We spent a long time analysing design mockups and determining how to translate those requirements into concepts within the Escenic system. We delivered the new site in 2008, and I still count this as one of the best achievements of my career. For the first time it included a mobile-compatible version; it was clear that mobile was going to be important, but I don’t think anyone realised how massively it was going to change the technology world.
Anyone who has moved from one CMS to another will know the pain of data migration. For this project we decided to migrate all our content from the year 2000 onwards onto the new system; the older content was only available in raw HTML format, which was difficult to process programmatically due to the hand editing that had been part of the daily process. The content migration was a massive undertaking and caused many headaches in the team, but ultimately we moved a large amount of content from XML files into the Escenic database structure.
Another big change here was the focus on video content and embeds from social platforms like Twitter.
One of the biggest architectural changes during the life of the Escenic website was the introduction of Akamai as a CDN. This offloaded more than 95% of the traffic to edge servers, greatly reducing the volume of requests our servers had to deal with. It also gave us a greater level of protection against denial-of-service attacks, as no direct traffic to our origin servers was allowed except from Akamai’s edge servers.
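The offload effect is easy to model. This toy edge cache is nothing like Akamai’s real implementation, but it shows why repeated requests for the same popular page barely touch the origin:

```python
import time

# Toy model of CDN offload: the edge caches each URL for a TTL, so repeated
# requests within that window are served without touching the origin.

class EdgeCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}          # url -> (body, expiry time)
        self.origin_hits = 0

    def fetch_from_origin(self, url: str) -> str:
        self.origin_hits += 1
        return f"<html>page for {url}</html>"

    def get(self, url: str) -> str:
        cached = self.store.get(url)
        now = time.monotonic()
        if cached and cached[1] > now:
            return cached[0]     # served from the edge
        body = self.fetch_from_origin(url)
        self.store[url] = (body, now + self.ttl)
        return body

edge = EdgeCache(ttl_seconds=60)
for _ in range(100):
    edge.get("/news/front-page")
# One origin hit for 100 requests: a 99% offload for this URL.
```

For a news homepage that most readers hit anonymously, even a short TTL keeps the vast majority of traffic at the edge.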
It was also during this time that we introduced the concept of a paywall. Other UK publishers decided on a hard paywall, meaning all content was invisible to readers without a subscription. The Telegraph opted for the more open, and SEO-friendly, metered paywall approach, where users could read a certain number of articles for free each month.
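The metering logic itself is simple to sketch. The free-article limit and the data structures below are illustrative, not the Telegraph’s actual rules or implementation:

```python
from collections import defaultdict

# Sketch of a metered paywall: subscribers always get through; anonymous
# readers get a monthly allowance of free articles before being gated.

FREE_ARTICLES_PER_MONTH = 10          # illustrative limit

views = defaultdict(int)              # (user_id, "YYYY-MM") -> articles read

def can_read(user_id: str, month: str, subscriber: bool) -> bool:
    """Return True if this article view is allowed, counting it if metered."""
    if subscriber:
        return True
    if views[(user_id, month)] < FREE_ARTICLES_PER_MONTH:
        views[(user_id, month)] += 1
        return True
    return False                      # over the meter: show a subscribe prompt
```

The SEO friendliness comes from the same property: search crawlers and first-time visitors see real content rather than a login wall.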
A third big change during this period was the move to cloud technologies. We swapped a large amount of on-premises hardware for AWS-hosted servers. This was a massive undertaking but gave some great benefits, including auto-scaling during peak times and better management of development environments.
This was also when we started to need to provide our content to other platforms. We built our first simple content API on this platform and used it to provide syndication feeds to partners as well as to our own mobile app.
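In spirit, that first content API was little more than a structured feed over the article store. A minimal sketch, with invented field names and an in-memory stand-in for the CMS:

```python
import json

# Hedged sketch of a simple content API endpoint for syndication.
# The article store and field names are stand-ins, not the real API.

articles = [
    {"id": "a1", "headline": "First story", "url": "/news/first-story"},
    {"id": "a2", "headline": "Second story", "url": "/news/second-story"},
]

def syndication_feed(section: str) -> str:
    """Return a JSON feed of the latest articles for a section."""
    feed = {
        "section": section,
        "items": [
            {"headline": a["headline"], "url": a["url"]} for a in articles
        ],
    }
    return json.dumps(feed)
```

The same feed shape can serve a partner’s ingest pipeline or a mobile app, which is exactly why an API layer outlives any one front end.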
I moved to the role of Web Development Manager during this time, managing the development team but also remaining technical. I think it helped me immensely to have been instrumental in designing the system that the developers were working on, as this enabled me to give advice and help where needed.
The Escenic product evolved quite a bit during those eight years, with a few major version releases that improved the out-of-the-box functionality. It was, however, not a simple upgrade path to move to the latest version of the product, and we had a few false starts, leaving us with multiple versions running different parts of the website. I think this is a challenge for a lot of software companies: how can they update their product and keep their customers on the latest version, especially when CMS platforms are usually heavily customised?
In 2011, I was given a choice to remain a manager or to maintain and increase my technical knowledge and move to the newly formed Architecture team. Had I opted for the management route I would possibly have left the Telegraph shortly after as it wasn’t long before the Telegraph outsourced all development to another company. I opted for the more technical route and worked with a number of development companies and eventually our own team again when we rebuilt our in-house dev team during the next CMS implementation.
2016–present: Adobe Experience Manager
Eight years of growing and evolving the Escenic system had led to a lot of customisations and CMS options, and had made the whole system slow and unwieldy to use. The search started for a new tool to increase the capabilities of the site and bring it once more up to date. One of the biggest drivers was the increase in mobile traffic and the desire to move towards a fully responsive website.
Adobe Experience Manager (previously known as CQ) was the chosen software this time due to its consistently high placing in industry reports and Adobe’s general pedigree in media technology. AEM is again Java based, but this time instead of a traditional database it relies on the Java Content Repository (JCR) to store everything from its content to the template code.
AEM is an extremely powerful tool, but with that power comes some complexity in the user interface. Pages could be created and components moved around in almost infinite combinations, which meant the simple act of writing a news story could take a large number of steps. For this reason we opted to use the APIs provided by AEM and build our own simplified authoring experience on top of them. This greatly increased the editors’ speed of publishing and gave them an interface they really felt they had control over, having been involved in the design and implementation from the very beginning.
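The value of that thin authoring layer is easy to illustrate: constrain the interface to the handful of fields a journalist actually needs, validate them, and only then hand a well-formed payload to the underlying CMS API. The field names and payload shape below are hypothetical, not AEM’s actual API:

```python
# Sketch of a simplified authoring layer in front of a CMS API: validate a
# small set of required fields, then build the payload the platform expects.
# Field names and the payload structure are hypothetical.

REQUIRED = ("headline", "byline", "body")

def build_publish_payload(article: dict) -> dict:
    """Validate the journalist-facing fields and build a CMS-bound payload."""
    missing = [f for f in REQUIRED if not article.get(f)]
    if missing:
        raise ValueError("missing fields: " + ", ".join(missing))
    return {
        "contentType": "news-article",
        "fields": {f: article[f] for f in REQUIRED},
    }
```

By keeping the infinite flexibility of the underlying platform out of the day-to-day writing flow, the common case stays fast while the power remains available to template developers.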
When the discussions moved again to data migration, we sought advice from our SEO team. The advice was actually to leave the old content where it was, migrate a selected set of articles for each channel, and then push on with creating new content. I think our status with Google as a trusted news source helped us quite a lot here, and this strategy seemed to work fine, apart from our still having an old platform running to this day (and still having discussions a couple of times a year about how to move that old data).
As I mentioned, mobile traffic had become key by this time, so everything we designed was built to display responsively on any sized device screen.
We originally implemented AEM ourselves, hosting it in the AWS cloud, but have recently made the move to have Adobe manage the platform for us in their own cloud. In this way we effectively have a CMS as a service that we don’t need to worry about managing. This takes some of the headaches away from us around day-to-day running, scaling and monitoring of the platform.
AEM is now the source of content for the website, the mobile app, Apple News, Google AMP and Amazon Alexa, as well as for numerous syndication partners. Our API built on top of the platform should be able to serve any new platforms that surface in the next few years.
Our business focus has changed in the last couple of years, too. Back in the Escenic days we were targeting volume of page views and unique users. These were the important numbers, as most revenue came from display advertising (banner ads and the like). Since the introduction of the metered paywall I mentioned above, we have balanced the volume of page views against the number of paying users. Last year we made a concerted effort to get more registered users, and now we are focusing on encouraging users to engage and buy subscriptions. The current company goal is to have 10 million engaged registered users and 1 million engaged subscribers by 2023.
What I’ve learned
Expect an off-the-shelf CMS to last about five years. By the end of that time, technology will have moved on, or you will have customised the system so much that it is starting to show its age, slowing down or becoming increasingly difficult to manage.
Try to keep up with upgrades if at all possible to maximise the life of your CMS. As mentioned above, some software companies are better than others at helping their customers stay on the newest product.
If the new CMS version does change so drastically that doing an upgrade becomes as difficult as implementing a whole new product then revisit the marketplace and see what other products are out there that may be worth the upheaval of a replacement project.
It’s almost certain that any product you buy off the shelf will not fulfil all of your requirements. Most products will be customisable to give you what you want, but be aware that customisation leads to complexity, and ever-increasing complexity is never sustainable.
Data migration is hard. No matter how you prepare for it, or how well your data is structured, never underestimate the difficulty of getting your content from one system to another. If you do choose to keep an archive on a legacy platform as a tactical move, have a strategy to migrate it eventually.
Make sure you can serve the platforms that are important to you now, but try to work in a way that could support future platforms and requirements. I know it’s difficult to predict the future, but these are the industries we work in. Both media and technology change rapidly, so try to have a CMS that will support new platforms as much as possible (good APIs are a good start).
Treat your internal users of the CMS (in our case, our journalists) as your customers. Yes, our readers are important, but without a happy group of journalists to produce content, our business is lost. Many users will gripe about a system between themselves without raising official bugs or feature requests. They’ll also find interesting new ways to use a product that would never have been predicted. Talk to them about how they use the system and what bothers them, and try to keep them happy.
Expect to do redesigns frequently. As you can see from the screenshots here, what used to look good dates very quickly in the web world. Be careful to understand the impact of a redesign on the editorial process. A small design change can mean a lot more work for the people producing the website; sometimes a compromise on design is necessary to make the solution workable.
I would estimate that about every 3 years we have done some form of redesign on the website and the only certainty is that some readers won’t like it — at least the ones that give you feedback. Make sure your CMS implementation is able to cater for design changes easily.
Take screenshots of everything in case you need to write a blog twenty years in the future. Sadly I didn’t think to screenshot some of our older CMSs while we were using them, but it would have been nice to see the evolution of the CMS interface over the years.
Some final thoughts
Certainly the editorial experience for writing and publishing online has changed dramatically for the better at the Telegraph over the last 20 years. But the changing landscape of digital media has also meant that content can now be published and promoted from a single CMS to many different devices and platforms. The COPE (Create Once, Publish Everywhere) paradigm is being pushed to its limits and is ever harder to attain. The Telegraph now publishes content on platforms as diverse as Snapchat and Amazon Alexa, both of which are a world away from a traditional newspaper article in their presentation. Catering for these types of platforms has been tricky, and we are not yet at the point where the same piece of content can be repurposed for every platform, but we are getting there. I believe this challenge will only grow in the years to come, and it will certainly be part of my focus now as an Architect at the Telegraph to ensure we move in the right direction with respect to content management.
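The COPE idea reduces to one neutral content model with a renderer per platform. A minimal sketch, with simplified stand-ins for the real platform formats:

```python
# Minimal COPE (Create Once, Publish Everywhere) sketch: one platform-neutral
# story model, several renderers. The output formats here are simplified
# stand-ins, not the real web, Snapchat or Alexa schemas.

story = {
    "headline": "Budget announced",
    "summary": "Chancellor sets out plans.",
}

def render_web(s: dict) -> str:
    """HTML fragment for the website."""
    return f"<h1>{s['headline']}</h1><p>{s['summary']}</p>"

def render_alexa(s: dict) -> str:
    """Plain spoken text for a voice assistant briefing."""
    return f"{s['headline']}. {s['summary']}"

outputs = {"web": render_web(story), "alexa": render_alexa(story)}
```

The hard part in practice is not the fan-out but designing a content model rich enough that each renderer has what it needs, which is exactly where the paradigm gets pushed to its limits.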
A few people have asked me how I have managed to spend twenty years at the same company. There is a famous saying that sums up my approach to work and life better than I could say it myself.
“Give me the serenity to accept the things that I cannot change, the courage to change the things that I can, and the wisdom to know the difference” — Reinhold Niebuhr.