Migrating Large Enterprise to NodeJS

NodeJS has taken the Open Source community by storm and is rapidly becoming one of the most popular development platforms. With a proven track record in enterprise environments, more and more companies are adopting it. At @WalmartLabs we were at the forefront of NodeJS adoption, with early efforts such as the development of hapi, and we have always had a flourishing NodeJS community here. Electrode significantly accelerated the use of NodeJS within @WalmartLabs and now powers most of our eCommerce site, http://www.walmart.com. In this blog, I will go over some of the key things that helped drive Electrode’s adoption.

Before you read further, note that some views here are opinionated. This is a high level overview of what we consider important to adopting NodeJS, but there are definitely other approaches to realizing that goal.

Get Executive Support

To develop a general platform that’s going to transform a large portion of a company’s development community, buy-in from the executives up front is essential. Getting that support may not be straightforward: the only way to convince your executives is to show them the successes and advantages of NodeJS. They have to be sold on it first and offer their support willingly. We were lucky that our executives were already generally open to NodeJS, partly due to the successful use of NodeJS in various individual projects.

Offer Full Solutions

A big advantage of NodeJS is that it’s very easy to develop an application, especially with all the Open Source resources available. Some @WalmartLabs teams developed their apps successfully using NodeJS, but they had to handle everything in their own way, including CI/CD setup, deployment, monitoring, and integration with existing infrastructure. A platform that solves these problems would be helpful for any team looking to use NodeJS for their application.

During my first few weeks at @WalmartLabs, before I wrote a single line of JavaScript code, I spent some time learning the in-house OneOps cloud infra and the Jenkins build system. Then I devised a general and reusable approach to make building and deploying NodeJS apps easy and seamless. I wrote about some of the technical details in my other blog here.

Ultimately we ensured that any team that uses the Electrode platform has a solution for every step of their application development cycle. These include:

  1. getting started and training support
  2. continuous development and integration setup
  3. acquiring cloud instances and deploying applications to them
  4. automated functional end to end integration and regression testing
  5. production stability and performance monitoring support

Offering all the pieces is important.

Offer Repeatable Deployment

It’s important that teams can build immutable releases of their application so that they can deploy any version at any time. The prevailing practice at @WalmartLabs was to publish an application along with its node_modules to the internal npm registry. We had to develop a new approach because npm is not meant for such usage.

It’s extremely important that applications are packaged into immutable binary artifacts, with even the version of the NodeJS runtime locked in. So what we ended up doing was to create a zip artifact of the application, with a settings manifest built in, and push it to our internal Nexus repository.
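
The steps above can be sketched roughly as follows. This is a minimal illustration, not our actual pipeline: the app name, manifest format, and the Nexus push are placeholders, and tar stands in for zip.

```shell
# Build an immutable artifact: app files plus a manifest pinning the runtime.
set -e
cd /tmp && rm -rf pkg-demo && mkdir pkg-demo && cd pkg-demo

mkdir myapp
echo '{"name":"myapp","version":"1.0.0"}' > myapp/package.json
# lock the NodeJS runtime version into the artifact via a settings manifest
echo 'node_version=8.9.4' > myapp/manifest.txt

tar czf myapp-1.0.0.tgz myapp
# a real pipeline would now push myapp-1.0.0.tgz to the internal Nexus repo
ls myapp-1.0.0.tgz
```

Because the runtime version rides along inside the artifact, deploying version N-1 later reproduces the exact same app and runtime, not whatever Node happens to be on the box.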

Offer Onboarding Support

We have developers with various levels of experience with JavaScript and NodeJS. To help all teams get on board, we dedicated a large chunk of resources to developing training material and providing live support through Slack channels.

Support Production Monitoring

Each team must have real time monitoring of their application’s health in a production environment in order to react to issues. They also need to be able to collect metrics over time to see trends. The @WalmartLabs OneOps cloud already has a lot of built-in support for application health monitoring and healing. On top of that, we spent a great deal of time implementing instrumentation and APM libraries that integrate with our various backend infrastructure, such as in-house Kafka instances and services from Splunk.
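
The general shape of such instrumentation can be sketched like this. The function names and the in-memory emit sink are hypothetical; a real version would ship metrics to a backend like Kafka or Splunk:

```javascript
// Wrap an async operation so its latency is reported to a metrics sink.
function instrument(name, fn, emit) {
  return async (...args) => {
    const start = Date.now();
    try {
      return await fn(...args);
    } finally {
      // runs whether the call succeeded or threw
      emit({ metric: name, ms: Date.now() - start });
    }
  };
}

// demo sink: collect metrics in memory instead of shipping them out
const metrics = [];
const fetchItem = instrument('fetchItem', async id => id * 2, m => metrics.push(m));

fetchItem(21).then(v => console.log(v, metrics[0].metric)); // → 42 fetchItem
```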

Offer 24/7, real-time, and comprehensive monitoring of app health and performance.

Standardize Integration

To run an eCommerce site like http://www.walmart.com, there are many very complex systems behind the scenes. @WalmartLabs has an internal uService architecture. The services are loosely implemented as REST-like http endpoints. With requirements like instrumentation tracking and service-to-service authorization, there is some setup work involved in invoking these services. Existing NodeJS code reinvented that wheel each time.

Our decision was to standardize all service clients using the Swagger spec. Due to some non-standard things in our service endpoints, we actually had to customize the implementation slightly. The result was a collaborative effort, with contributions from every team, that created shared clients for all the services we use.
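
To give a flavor of the approach, here is a toy sketch of deriving a client from a heavily simplified Swagger-style spec. The spec, host, and service names are hypothetical, and a real client would also wire in auth and instrumentation:

```javascript
// A stripped-down Swagger-style spec for a hypothetical item service.
const spec = {
  host: 'item-service.internal',
  basePath: '/v1',
  paths: {
    '/items/{id}': { get: { operationId: 'getItem' } }
  }
};

// Turn each operationId into a function that builds the request options.
function makeClient(spec) {
  const client = {};
  for (const [path, methods] of Object.entries(spec.paths)) {
    for (const [method, op] of Object.entries(methods)) {
      client[op.operationId] = params => ({
        method: method.toUpperCase(),
        host: spec.host,
        // substitute {id}-style path parameters from the caller's params
        path: spec.basePath + path.replace(/\{(\w+)\}/g, (_, k) => params[k])
        // shared concerns (auth, tracing headers) would be added here
      });
    }
  }
  return client;
}

const items = makeClient(spec);
console.log(items.getItem({ id: '42' }).path); // → /v1/items/42
```

The point is that every team consumes services through the same generated surface, so cross-cutting setup lives in one place instead of being re-implemented per app.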

Standardizing allows all teams to come together to build and share modules.

Have a Plan for Publishing Modules

For any large scale in-house adoption of NodeJS, being able to publish your private modules is essential. You can run your own internal registry or use npm’s private module service. @WalmartLabs already had an in-house npm Enterprise instance. However, we had issues with its DB storage, so another team came in and built a new one on top of the existing Nexus repository. If you want to avoid all that, then npm’s private module service is the way to go.
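
If you do run your own registry, pointing a project at it is usually just an .npmrc entry; the URL below is a placeholder, not a real endpoint:

```
; .npmrc — send installs and publishes to the internal registry
registry=https://npm.registry.internal.example/
```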

Audit the npm Modules You Use

NodeJS’s module ecosystem is phenomenal. There are almost 500K modules in the npm registry now, and you can find a module for almost anything you want to do. However, some modules may have critical bugs or malicious intent. Going with well known and popular modules is generally a good idea. If you want to use a module with few users, then exercise extra caution and go through its code to make sure it does what you think it does.

Lock Dependencies

The standard practice in NodeJS with npm is to use semver to pick up new minor or patch versions of a dependency. However, that also opens your app to picking up changes that you may not expect in your next npm install. npm has a shrinkwrap feature, and npm@5 and yarn have a lock feature. Make use of these to maintain a more consistent module installation for your app and retain better control over when to update your dependencies. Commit your lock file for each release; it will be essential for debugging which dependency broke your app when you did an update.
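
As a quick sketch, with npm@5 the lockfile is produced automatically on install; the temporary directory here is just for illustration:

```shell
set -e
mkdir -p /tmp/lock-demo && cd /tmp/lock-demo
npm init -y > /dev/null 2>&1
# npm@5+ writes package-lock.json; on npm@4 run `npm shrinkwrap` instead
npm install --package-lock-only > /dev/null 2>&1
ls package-lock.json   # commit this file with each release
```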

Support Flexible and Dynamic Configuration

To support dozens of different apps in a complex enterprise environment, flexible and dynamic configuration is important. A NodeJS application development cycle typically goes through the local, dev, and staging environments first before going to production. Each of these has different backing services that need to be configured differently. While hapi has the confidence module, we chose node-config.

However, composing config based only on NODE_ENV is not enough, because when running on our dev and staging clouds, NODE_ENV should always be production. Our apps have to be able to choose config based on the cloud environment, not NODE_ENV. Further, we have an internal dynamic central config management system that needed to be integrated with the apps.

To meet these needs, we implemented a new NodeJS app config management module similar to node-config, called electrode-confippet. We then used confippet’s extensibility support to implement another module that composes configs specific to our cloud setup.
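
The composition idea, independent of any particular library, looks roughly like this. The layer names and keys are hypothetical, and real confippet composition is file-based rather than inline:

```javascript
// Later layers override earlier ones, like node-config's file cascade.
const layers = {
  default: { port: 3000, logLevel: 'info' },
  production: { logLevel: 'warn' },
  // a cloud dimension beyond NODE_ENV, as described above
  'cloud-staging': { serviceHost: 'staging.internal' }
};

function compose(...names) {
  return Object.assign({}, ...names.map(n => layers[n] || {}));
}

// NODE_ENV stays "production" on our clouds; the cloud env is what varies.
const nodeEnv = 'production';
const cloudEnv = 'staging';
const config = compose('default', nodeEnv, 'cloud-' + cloudEnv);

console.log(config.logLevel, config.serviceHost); // → warn staging.internal
```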

Give your users the flexibility to configure their apps.

Consider Philosophical Ideals

One of the blessings of the NodeJS ecosystem is that all the sources are available. Given the nature of JavaScript, that opens the door to the monkey patching phenomenon. Generally, monkey patching is a bad idea, but are you willing to shun it at any cost, including having your project completely blocked?

One issue I ran into was that the Java based uService platform code insisted that our internal http headers be all uppercase, while NodeJS automatically converts all http headers to lowercase. The Java team was in the middle of developing a new release and didn’t want to make a change for this. To move the project forward, NodeJS core was monkey patched to keep our internal http headers in all uppercase. I wrote about that in another blog here.
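
For illustration, the monkey-patch pattern in general looks like this. This is not the actual Node core patch; the object, method, and the `wm_` header prefix are made up:

```javascript
// An object standing in for a library whose behavior we cannot change.
const transport = {
  setHeader(name, value) { return `${name}: ${value}`; }
};

// Keep a reference to the original, then replace it with a wrapper.
const original = transport.setHeader;
transport.setHeader = function (name, value) {
  // hypothetical tweak: restore uppercase for internal wm_* headers
  const fixed = name.startsWith('wm_') ? name.toUpperCase() : name;
  return original.call(this, fixed, value);
};

console.log(transport.setHeader('wm_trace', 'abc')); // → WM_TRACE: abc
```

The technique is powerful precisely because nothing in the library has to change, which is also why it is so easy to abuse.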

Monkey patching is an extreme example, but everyone has their own opinions and ideals. Sometimes, to work around blockers, you might have to put certain ones aside.

Use Promises

Since the early days, Promises have been a contentious topic. There’s a camp that’s firmly against them and uses nothing but callbacks. In fact, Promises have even been called a NodeJS anti-pattern by some.

Don’t release the Zalgo monster. You can avoid that with Promises: you can attach any async action to a Promise chain with a simple and standard interface. In a large application with a ton of business logic, Promises make it very easy to keep the flow and error handling in check. I have had many bad experiences untangling the endless maze of callbacks that handle complex business logic.

Given the traction Promises have been picking up, with tools like Babel and the introduction of async/await in Node 8, any serious effort at migrating to NodeJS should really use Promises as much as possible. It’s transformational to be able to write linear-looking async JavaScript code with async/await.

Like Lego bricks, Promises offer a simple way to connect all your async pieces.

Find a Gradual Onboarding Path

Before we could convince any team to commit to the new platform, we had to prove its viability first. To do that, we picked a low risk business app, assembled a team of senior developers, and started developing a new app. This team also helped with beta testing and gave valuable feedback that let us fix issues. The success of this app gave other teams the confidence they needed to start onboarding their apps.

Follow NodeJS Best Practices

NodeJS offers a different approach to running servers in production, and there are practices that others have found to help run it smoothly. For example, it’s common practice to run nginx as an http proxy in front of your Node server. It’s also important to have robust error handling in your code and your deployment process. Definitely search for NodeJS production best practices and read some of the articles you find.
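
On the error handling side, a common baseline is a pair of process-level safety nets. This is a minimal sketch; whether to exit on an unhandled rejection is a policy choice for your app:

```javascript
// Last-resort handlers: log, and let the process manager restart us
// from a clean state when the process is in unknown territory.
process.on('unhandledRejection', err => {
  console.error('unhandled rejection:', err);
});

process.on('uncaughtException', err => {
  console.error('uncaught exception:', err);
  process.exit(1); // do not keep running in an undefined state
});

// demo: this rejection is reported instead of being silently lost
Promise.reject(new Error('demo'));
```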

Adopt Standards but Embrace Proprietaries

As a closing thought, note that a lot of our effort went into making NodeJS work with existing proprietary technologies. For example, there was talk of a containerized approach when we started with OneOps. In the end, we opted to work with what was already there. However, when it makes sense, adopting established standards is also important. So when we had an opportunity to use Swagger to standardize the consumption of services in NodeJS land, we went for it.


This is a fairly high level overview of what we considered to be the important technical items in migrating an enterprise to NodeJS. Some of the logistical items were covered much better in Alex Grigoryan’s talk at the 2016 Node Summit. There are many other small things and a lot of details to each one. I plan to write more about some of them in the future.