Cannibalizing the Monolith: A Micro-Application, Component-Driven Approach to Web Development

Part 1: Component-Driven Development

Zachary Maybury
DraftKings Engineering
4 min read · Sep 29, 2020


DraftKings Engineering’s approach to web development has changed dramatically since the launch of our first product, Daily Fantasy Sports, in 2012. Starting in 2015, the scale of our business grew exponentially. We needed to build new experiences faster, develop applications that were more reliable, and deliver value to the business more incrementally. While transitioning from a monolithic, MVC-powered site to a highly scalable, service-oriented architecture may sound like a “no-brainer,” making that transition with a large legacy codebase and zero downtime can feel like trying to replace the wings on an airplane while flying. To keep pace with our evolving backend architecture, we decided to take a similar approach to our web application development. We’ll walk through the low-risk, high-impact strategies we used to move from our monolithic web server to our now highly scalable solution.

Starting Small: Shipping a Single Component

After a few proofs of concept with React (a component framework) and Redux (a state management library), the team was ready to take on our first production integration. Having settled on client-side React, Redux, and XHR requests through our API gateway for data hydration, our web development team quickly went to work on making our vision to cannibalize our front-end monolith a reality. The team had to address two challenges:

  1. How to distribute the code to our customers.
  2. How to integrate page-specific client components.

Distributing the Code

Our cloud-native architecture for our new client-side React integration. We chose to leverage existing web application servers and internal services, rather than introduce new internal services, to keep our delivery agile and efficient. Note: our applications and internal services are also split across Production, Staging, Development, and On-Demand environments; this has been omitted from the diagram above.

We needed a way to rapidly distribute our client code from our build system to our customers through our existing monolith. Our engineering team supports multiple environments for development, release staging, and production, so our solution had to fit into this existing structure. Fortunately, the team had previously built a universal settings service for managing environment-specific configurations, and we chose to leverage it here. Empowered with this service, our path to integration became straightforward: we drew up our desired cloud-native architecture, utilizing existing MVC web application servers and internal services. In the end, the only new infrastructure the team had to spin up was a set of environment-specific S3 buckets to store our JS build artifacts.

Site load flow: our MVC server requests the environment-specific setting value for our JS code from our internal settings service, injects a script tag referencing that code into the page HTML, and returns it to the client. The browser then loads the JS code from S3 and executes it, rendering any page-specific components or content.
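To make this flow concrete, here is a minimal sketch of the injection step. Our actual server is an MVC application, so the Node/Express framing below is purely illustrative, and the `fetchSetting` helper, the settings endpoint, and the setting key are assumed names:

```javascript
const express = require('express');

const app = express();

// Hypothetical settings-service client: resolves the current JS bundle URL
// for this environment. Assumes Node 18+ for the global fetch.
async function fetchSetting(key) {
  const res = await fetch(`https://settings.internal/api/settings/${key}`);
  const { value } = await res.json();
  return value;
}

app.get('*', async (req, res) => {
  // Look up the environment-specific bundle URL on each page request, so a
  // settings change takes effect without redeploying the web servers.
  const bundleUrl = await fetchSetting('web.client.bundleUrl');

  // Inject the script tag into the common HTML partial and return the page.
  res.send(`<!doctype html>
<html>
  <body>
    <div id="app"><!-- legacy server-rendered markup --></div>
    <script src="${bundleUrl}"></script>
  </body>
</html>`);
});

app.listen(3000);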

Once we had our cloud dependencies in place, the mechanics of shipping a new build of our code to production were fairly straightforward:

  1. New JS code is copied to S3. We content-hash each JS file so we can set a long max-age cache-control header, reducing unnecessary network traffic on the client.
  2. An environment-specific setting is updated to reference the URL of the code we just copied to S3 (steps 1 and 2 are sketched in code after the diagram below).
  3. Subsequent requests to our web servers receive the updated URL from our settings service and include the new JS URL as a script tag in our common HTML partial.
  4. Web clients fetch and execute the new code, pulling any new JS content from S3.
Build and Deploy Flows for new DraftKings JavaScript changes
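As a concrete illustration of steps 1 and 2, here is a minimal Node deploy sketch using the AWS SDK v3 S3 client; the bucket name, object key scheme, and settings-service endpoint are all hypothetical:

```javascript
const { createHash } = require('crypto');
const { readFileSync } = require('fs');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

async function deployBundle(localPath, bucket) {
  const body = readFileSync(localPath);

  // Content-hash the file so its name changes whenever its contents do.
  // That makes an aggressive max-age safe: a cached file can never go stale,
  // because every new build ships under a brand-new URL.
  const hash = createHash('sha256').update(body).digest('hex').slice(0, 16);
  const key = `js/app.${hash}.js`;

  await s3.send(new PutObjectCommand({
    Bucket: bucket,
    Key: key,
    Body: body,
    ContentType: 'application/javascript',
    CacheControl: 'public, max-age=31536000, immutable',
  }));

  // Point the environment-specific setting at the new URL; subsequent page
  // requests pick it up from the settings service (step 3).
  await fetch('https://settings.internal/api/settings/web.client.bundleUrl', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ value: `https://${bucket}.s3.amazonaws.com/${key}` }),
  });
}

deployBundle('./dist/app.js', 'dk-web-js-production').catch(console.error);
```

Because the setting only flips after the upload succeeds, customers are never pointed at a missing file, and rolling back amounts to flipping the setting to the previous URL.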

Integrating a Page-Specific Client Component

To start, we decided to integrate a simple component: the real-world competition scoreboard in our Game Center.

Our Daily Fantasy Sports Game Center for a multi-player NFL contest. The highlighted portion of the page is our real-world competition scoreboard, the first React component integration on our website.

This component was a logical first choice: it displayed only non-critical information, had a simple backing API call that we could leverage, and needed to respond to scoring updates pushed to the client from our backend. Driven by our previous architecture decisions, integrating our new component became very straightforward (a code sketch follows these steps):

  1. Determine what page the client code is executing on.
  2. If the user is on the Game Center page, load the JS code required for creating our Scoreboard Component.
  3. Fetch the data required for rendering the component from our API gateway.
  4. Render our new component over the legacy scoreboard <div>, allowing React to handle the DOM updates.
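A minimal sketch of that bootstrap sequence follows; the mount-point id, URL path, module path, and API route are hypothetical stand-ins for our real values:

```javascript
// Page-specific bootstrap that runs from the common bundle on every page.
import React from 'react';
import ReactDOM from 'react-dom';

async function bootstrapGameCenter() {
  // 1. Determine whether this is the Game Center page, and find the legacy
  //    scoreboard element we intend to take over.
  const mount = document.getElementById('scoreboard');
  if (!mount || !window.location.pathname.startsWith('/gamecenter')) return;

  // 2. Lazily load the Scoreboard code so other pages never download it.
  const { Scoreboard } = await import('./components/Scoreboard');

  // 3. Fetch the initial scoreboard data through the API gateway.
  const contestId = mount.dataset.contestId;
  const response = await fetch(`/api/scores/v1/contests/${contestId}`);
  const scores = await response.json();

  // 4. Render over the legacy <div>; React owns its DOM updates from here.
  //    (ReactDOM.render was the pre-React-18 API in use at the time.)
  ReactDOM.render(<Scoreboard scores={scores} />, mount);
}

bootstrapGameCenter();
```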

To keep the component up to date after page load, we wrote a bit of code to subscribe to our competition updates Pusher channel. This allows React and Redux to do what they do best: handle the updated state and render any required UI modifications. In just a few straightforward steps, we had shipped our first React component to production! We quickly followed this success with many additional component integrations across all of our site pages.
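That subscription code looks roughly like the sketch below. The pusher-js channel and event names, the store module, and the `scoresUpdated` action creator are hypothetical stand-ins:

```javascript
import Pusher from 'pusher-js';
import { store } from './store';            // hypothetical Redux store module
import { scoresUpdated } from './actions';  // hypothetical action creator

// Connect once per page; the key and cluster would come from injected config.
const pusher = new Pusher('app-key', { cluster: 'us2' });

export function subscribeToCompetition(contestId) {
  // Each pushed score event becomes a Redux action; React then re-renders
  // only the parts of the scoreboard the new state requires.
  const channel = pusher.subscribe(`competition-${contestId}`);
  channel.bind('score-update', (payload) => {
    store.dispatch(scoresUpdated(payload));
  });

  // Return a cleanup function for when the user leaves the page.
  return () => pusher.unsubscribe(`competition-${contestId}`);
}
```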

Through the expansion of our new React codebase, we had taken our first steps toward dismantling our monolith. Our web component integrations allowed us to deliver more testable, iterative features to our customers, free from the arduous release process of the shared monolith. However, the delivery of our code was still strongly tied to shared web application servers. A truly compartmentalized solution would allow our applications to continue to function even in the event of a partial or full MVC outage. As our Web team shifted its focus toward a new initiative, we set out to create this separation.

Going Further: Building a Standalone Application

Check out Part 2 of this series, where we will show how DraftKings Engineering has built upon these principles to create Standalone Micro-Applications for our Payments flows.
