Element: The digital unification of Time Inc.

How difficult is it to roll out a new technology platform for an organization spread across 22 brands with almost a hundred years of history?

Very. It’s very difficult.

It takes a commitment at every level of the organization to make this happen. It takes sacrificing the product road map and replacing other technical platforms. Most of all it takes collaboration across the entire company and the support of the highest levels of leadership.

Element is now the front-end platform for all of the former Time Inc. digital brands, and it is in the process of being rolled out to Meredith Corp’s brands.

So where did it all start…?

Upsetting the Apple Cart

At the outset, the project’s code name was Apple Cart.

I remember a morning just after the annual holiday party (in late February 2017) when the SVP for Digital Product & Engineering and my new boss, Nicholas Butterworth, came to me with a question…

“What would it take to unify all our sites on a single platform … by June?”

Jokingly, I responded, “You are kidding, right?” He wasn’t. Jen Wong, our COO at the time, considered this the most important thing we needed as a digital business. We needed to get this done. Whatever it took.

“It would take everything we have. We’d have to throw out all the product road maps, stop work on any other unification projects, and I’d need a team.”

I closed with, “This is really going to upset the apple cart.” Apple Cart became the code name we used for the project while we planned how we were going to do it.

Why Element?

The need to unify all the brands onto one front-end platform was driven by a number of factors.

225 Liberty, the Time Inc. main entrance, before the Meredith acquisition.

Time Inc. needed to stay nimble and agile in an ever-changing and demanding industry landscape. That meant providing a better, faster experience for our users while creating a consistent cross-brand platform for our sellers.

Competing with niche start-up publishers such as BuzzFeed, Business Insider and Bleacher Report, or with titans like the Washington Post backed by the bottomless money of Amazon, is a challenge in itself.

We had to develop enterprise publishing products at enormous scale, and a product culture centered on collaboration and mutual benefit.

The Challenge

The Time Inc. sites were on a variety of platforms, ranging from old Drupal themes to a React front end. The challenge was to migrate all of these onto a common front end by the end of June 2017. It was March!

Where a lot of projects of this magnitude fail is in trying to unify everything at once: migrating CMSs, normalizing data and unifying the front end. That is a tough enough task for one site, let alone 22.

To mitigate this, we made the deliberate decision to exclude content migration and CMS unification, opting instead to normalize the data via an abstracted API layer. We also targeted only the high-traffic templates: Articles, Recipes, and Vertical & Horizontal Galleries. This helped contain scope and provided focus for the teams.
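
To make the abstracted API layer idea concrete, here is a minimal, purely illustrative sketch of what a normalized article payload might look like once CMS-specific fields are mapped into a common shape. The field names are assumptions for illustration, not the actual Element schema.

```javascript
// Hypothetical sketch of a normalized content payload that a shared template
// could consume, whether the source CMS was Drupal or WordPress.
// Field names are illustrative only, not the real Element schema.
const article = {
  id: '12345',
  brand: 'example-brand',
  type: 'article', // article | recipe | verticalGallery | horizontalGallery
  headline: 'Example headline',
  dek: 'Short summary shown under the headline',
  authors: [{ name: 'Jane Doe', url: '/author/jane-doe' }],
  publishedDate: '2017-06-30T12:00:00Z',
  heroImage: { src: 'https://cdn.example.com/hero.jpg', alt: 'Hero image' },
  body: [
    { type: 'paragraph', html: '<p>Body copy…</p>' },
    { type: 'image', src: 'https://cdn.example.com/inline.jpg', caption: '' }
  ],
  seo: { canonicalUrl: 'https://www.example.com/article/12345' }
};
```

With a shape like this, the brand templates only ever deal with one contract, and the CMS-specific mapping lives behind the API layer.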

Choosing the Platform

Choosing the platform was obviously an incredibly important piece of the puzzle. One wrong decision could have cost months of delays in both developer onboarding and productivity. The primary requirements were:

Fast Roll Out

  • 2 full site redesigns + 2 sites on core templates in production by 6/30
  • 18 sites in QA by 6/30 (core templates)
  • Support for rapid technical deployment

High Developer Productivity

  • Support rapid contribution by all brand engineering teams
  • Easy to onboard new developers

Forward Compatibility

  • Provide a stable base for ongoing iteration and integration of new technologies

Given the tight time frame, there was no option to build this from scratch. The Brand Product & Engineering teams had already been on a path to unify brands within their portfolios, so we looked to these platforms as the base. The choices were:

WordPress
Traditional PHP stack, hosted by WP VIP, parent/child architecture, shared plugins architecture, non-DCMS compatible, strong ops support from vendor.

Node/Handlebars
A headless Node JS stack, Handlebars components, server-side rendering, large component library, supports all templates for 3 x full sites in production (article, video, search, home, section, tag, gallery, etc), API layer architecture, Drupal integration, layout tools, CoffeeScript.

React
A headless Node JS stack, React architecture, server-side rendering, large component library, supports major templates for 3 x brands (article, video, section, tag, gallery, etc), WP API integration, fast page performance.

Drupal/PatternLabs
PatternLabs running in server-rendered Drupal PHP stack, master/brand branch mechanism, supports 1 template x 3 sites in production, internal JSON API architecture, no shared full site front end, unclear path to NEW, SPO integration.

The ultimate decision was to go with Node/Handlebars, the first version of which was running SI.com, GOLF.com and SIKids.com. With this solution we could tap into the depth of JavaScript talent in the team, while keeping the platform simple and performant and making it easy to onboard new engineers. WP was a close runner-up.
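
For readers unfamiliar with the stack, below is a minimal sketch of what headless, server-rendered Handlebars looks like in a Node/Express app. This is not Element’s actual code: the route, template name, API endpoint and library choices are assumptions for illustration.

```javascript
// Minimal sketch of server-side Handlebars rendering in Express.
// Not Element's actual code; route, template and endpoint names are hypothetical.
const express = require('express');
const { engine } = require('express-handlebars');
const fetch = require('node-fetch');

const app = express();
app.engine('hbs', engine({ extname: '.hbs' }));
app.set('view engine', 'hbs');

app.get('/article/:id', async (req, res, next) => {
  try {
    // Fetch normalized content from the abstracted API layer (hypothetical endpoint).
    const response = await fetch(`https://content-api.example.com/articles/${req.params.id}`);
    const article = await response.json();
    // Render the shared article template with brand-specific data in the context.
    res.render('article', { article, brand: req.hostname });
  } catch (err) {
    next(err);
  }
});

app.listen(3000);
```

The appeal of this model is that the same template and component library can render any brand; only the data and brand configuration change per request.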

Getting Buy-in

The hardest part of any project is getting started. To start, we needed buy-in from all the brand product and engineering teams. Some of these teams had already spent most of 2016 migrating to a unified platform of their own, only to be told three months into 2017 that they would be doing it all over again. This was demoralizing for the teams that had poured so much effort into those platforms.

At the end of the day, this was about giving the business the best tools to compete in a very complicated and competitive market, and giving Time Inc. a fighting chance. That had to come first.

Getting buy-in took almost a month of continuous discussions and negotiation with the brand teams. When we reached an impasse, having buy-in at the highest levels was critical. In our case we had Jen Wong, Nicholas Butterworth and Patty Hirsch (SVP, Digital) in our corner. There were a few occasions when we needed to bring in the “heavy hitters” to help move us forward on the business side.

Building a Team

No project can succeed without the right team to support it. We needed to pull key people from all over the organization to make this happen. There were four primary areas to tackle:

Front End
This team would be responsible for tackling the templates and components, the front end application and the API definition. This was a small four-person team led by the mastermind behind the platform, Harry Hope. This wasn’t his first rodeo in this space; he was a lead in building the first version of the platform on SI.com. Read more in “Fixing the most terrible website on earth”.

Infrastructure
This team was responsible for setting up the environments and providing release tools (RT has since been open sourced) to make our release process seamless, simple and scalable. This was a small team of DevOps veterans led by Eric Saam. Eric’s ability to stay calm in the midst of this pressure cooker of a project would prove to be a huge asset.

Data
This was the Enterprise CMS team responsible for setting up the APIs, mapping and normalizing the CMS data, and collaborating with Front End to define the schema. Led by Matt Miritello, this team of three was responsible for providing both a Drupal module and a WP plugin for the API, exposing the endpoints via AWS API Gateway.

Testing & Tooling
The primary objective for this project was unification; protecting revenue and performance was a close second. The Testing and Tooling team, led by Jason Nichols, provided tools to monitor, track and review site performance, tapping into WebPageTest, New Relic and Google PageSpeed.
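
As a flavor of what that kind of tooling can look like, here is a hedged sketch of a simple synthetic performance check. It is not the team’s actual tooling: it uses today’s public PageSpeed Insights v5 API, and the URLs and threshold are illustrative.

```javascript
// Illustrative sketch of a synthetic performance check, not the team's actual tooling.
// Uses the public PageSpeed Insights v5 API; URLs and threshold are made up.
const fetch = require('node-fetch');

const PAGES = [
  'https://www.example.com/article/sample',
  'https://www.example.com/gallery/sample'
];

async function checkPage(url) {
  const api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const res = await fetch(`${api}?url=${encodeURIComponent(url)}&strategy=mobile`);
  const data = await res.json();
  const score = Math.round(data.lighthouseResult.categories.performance.score * 100);
  console.log(`${url}: performance score ${score}`);
  if (score < 70) {
    console.warn('  below threshold, flag for review');
  }
}

(async () => {
  for (const url of PAGES) {
    await checkPage(url);
  }
})();
```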

Scaling a Finite Team

To be successful, we needed to be inclusive of all the brand teams. Every brand has its own personality, quirks and business arrangements. Learnings from decades of combined digital experience needed to be taken into consideration and applied so that all could benefit.

While the core team built out the framework and base platform, tailoring for each brand’s differences would require manpower and subject-matter expertise.

Given this was a platform for everyone, having contributors beyond the core team from the outset was vital. We adopted an “inner-source” development model, which meant anyone in our organization could contribute, with a group of maintainers responsible for code review, quality control and merging. Harnessing the power of the engineering team at large proved to be key when taking each brand over the line.

More on this at another time, another story.

Ruthless Triage

Having such tight timelines created a need to be extremely ruthless when it came to what we considered MVP (Minimum Viable Product). In the run-up to each launch, the team would meet daily to triage blocker tickets. Blockers were defined as anything that impacted revenue or functionality, and even then the risk needed to be substantial. Often the question was asked,

“Are you willing to stop the launch of the site on the new platform for this ticket?”

This simple question helped crystallize the importance of each ticket and ended up removing hundreds of potential blockers from the critical path.

The Roll Out

To help reduce risk as we rolled out the platform, we phased each site release using traffic shaping to direct part of the total site traffic to the new platform. For the most part the process was as follows:

1% traffic to Element — we used this to confirm infrastructure stability and configuration. Make sure the house didn’t burn down. Typically we held here for about a day.

10% traffic to Element — we used this increment to confirm that analytics looked directionally good and that ad targeting was working correctly. We held here for about 7 days. Going shorter wouldn’t give us enough data; going longer would start to open us up to SEO risk.

25% traffic increments — depending on the scale of the site, we used a number of stepped increments after 10% to pre-warm caches and glean more like-for-like data. For People.com we had many increments.

100% traffic — at this point all traffic was routed through Element.

Pages that did not have an equivalent template in Element were routed to the legacy system and rendered there until they were replaced.
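
The specific routing technology behind the traffic shaping isn’t described here, but as a purely illustrative sketch, this is how a deterministic percentage split plus legacy fallback could be expressed in a Node routing layer. The origins, percentage and template list are assumptions.

```javascript
// Illustrative sketch only: deterministic traffic shaping between Element and a
// legacy origin. The actual mechanism (CDN, load balancer, etc.) isn't described
// in this post; hostnames, percentage and template names are hypothetical.
const crypto = require('crypto');

const ELEMENT_ORIGIN = 'https://element.example.internal';
const LEGACY_ORIGIN = 'https://legacy.example.internal';
const ELEMENT_TRAFFIC_PERCENT = 10; // stepped 1 -> 10 -> 25 -> ... -> 100
const ELEMENT_TEMPLATES = ['article', 'recipe', 'gallery'];

// Hash the visitor id so the same visitor consistently lands on the same stack.
function bucketFor(visitorId) {
  const hash = crypto.createHash('md5').update(visitorId).digest();
  return hash.readUInt32BE(0) % 100; // 0..99
}

function chooseOrigin(visitorId, templateType) {
  // Pages without an Element equivalent always fall back to the legacy system.
  if (!ELEMENT_TEMPLATES.includes(templateType)) return LEGACY_ORIGIN;
  return bucketFor(visitorId) < ELEMENT_TRAFFIC_PERCENT ? ELEMENT_ORIGIN : LEGACY_ORIGIN;
}

// Example: route an article request for visitor "abc123" at the 10% stage.
console.log(chooseOrigin('abc123', 'article'));
```

Keeping the split deterministic per visitor matters for the analytics and ad comparisons described above; a random split per request would mix the two stacks within a single session.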

The Star of the Show

This project changed the digital business of Time Inc. and impacted hundreds of people, from our edit staff and the Product & Engineering org to sales and ad ops. A project of this scale could easily have faltered and failed had it not been for the incredible collaboration across all the teams.

Everyone contributed.

Whether it was through coding or drawing up product requirements, working together to make this happen created amazing bonds of shared purpose, camaraderie and mission.

The Results

At the end of any project, it’s the results that define success or failure. In the end we achieved the following:

  • Unification — 22 brands on a single front end platform.
  • Scalability — changes that used to take 3–5 weeks to roll out, took hours.
  • Performance — 25% reduction in avg. page load time.
  • Revenue — 18% increase in ad viewability.

At the beginning of 2018 over 300 million sessions were being served through Element every month.


The Team

Executive Sponsors: Jen Wong (COO), Nicholas Butterworth (SVP, Digital Product & Engineering)

Business Owner: Patty Hirsch (SVP, Digital)

Product Owner: Ben Ronne

Engineering Owner: Alex Charalambides

Project Manager: Juan Espada

Product Leads:
Krys Krycinski, Chris Hardtman, Subhabrat Padhi

Engineering Leads: 
Harry Hope, Eric Saam, Matt Miritello, Jason Nichols, Chris Murphy

UX/Design Leads:
Erik Frick, Mike Cox

Brand Product Leads:
Maura Charles, Christina Vermillion, Mike Phillips, Eric Soll, Aleks Mielczarek, Deven Persaud

Brand Engineering Leads:
Aadi Deshpande, Hans Gutknecht, Shameel Arafin, Nick Trefz, Justin Ferrara


This was a story I started writing at Time Inc., and I would be remiss if I didn’t share it. I have since moved on and now head up the Engineering team at Insider Inc., where I get to apply so much of what I learned in this incredible process.