The Deep Dive — Migration From Angular to React

A primer on converting AngularJS applications to ReactJS.

Pascal Maniraho
51 min read · Oct 2, 2018


Intro

This deep dive is designed to help you get up to speed and be successful with large-scale legacy code migration.

Techniques discussed in this primer address some of the challenges related to planning and executing a migration of a large-scale AngularJS application to React.

This document stresses re-using most of the legacy code. It is worth mentioning that the more "framework agnostic" your legacy application turns out to be, the easier the transition.

Deep knowledge of both frameworks is a must. If you are not familiar with React, I suggest you read this deep dive into the React world. It is enough to give you an idea of most of the concepts needed to do the job.

Let’s dive into it — happy coding

Is this writeup for me? 😳

Probably not 🤷! Unless:

  • You have been tasked to migrate a large-scale legacy AngularJS app to React and you frankly don't know how or where to start.
  • The sheer size of the not-well-maintained legacy code intimidates you, or worse, "bullies" you.
  • You want to move fast on a tight budget, in time and resources, and want to make progress regardless.
  • You are at a crossroads and have a big decision to make: keep increasing the tech debt, or start the migration before it is too late.
  • You are not sure which path to take between the big re-write and continuous improvement (using refactoring); spoiler alert: there are limitations!
  • You are ready to move on from AngularJS but are not quite sure whether React fits large-scale applications, especially enterprise applications.

Keep reading if one or more of the above points rings a bell.

Nevertheless, the techniques described below go well beyond the migration use case. So you may find some discussions interesting, especially on testing React components, using inversion of control with React, componentization of your legacy app, and the accompanying suggested reading lists.

This article is under constant editing. More material will be added along the way, so check back regularly to see what is new.

Don't forget to share with your friends on Twitter/LinkedIn or Facebook, and if clapping 👏👏 is your thing please do, I will be forever grateful.

Introspection

First and foremost let’s do a quick introspection on the state of affairs with our legacy application.

You may skip to the strategy instead if you find this irrelevant to your use case.

In the real world, time to pay off technical debt is scarce, most of the time because of the fear of the unknown. Management loves to milk the cow but not to change the litter. The developers, on the other hand, avoid modernizing legacy code, so as to avoid trouble in case anything breaks.

The fear of unknown

  • Big re-writes have a track record of turning into business disasters in a fairly good number of instances. This fear awakens management's "Reptilian Brain" and pushes them to the extremes of survival mode: "if it works, don't touch it", aka "if it ain't broke, don't fix it", especially if they are from Joel's school of thought when it comes to big re-writes. 📝
  • Customers care more about the value our application adds to their lives than about the programming language or framework the application is built with. Visible technical debt such as bugs, missing features and poor performance takes precedence over hidden technical debt such as poor test coverage, lack of modularity or dead code. 🐛
  • The fear of another framework overhaul may be a "nail in the coffin". Battle-hardened AngularJS production applications turned legacy overnight, from day one of the Angular release. Adding new features or fixing bugs on a legacy stack became nothing but a liability in the long run. If that turns out to be the case, why convert to React at all?

These fears depict valid concerns: code migration is also about managing the unknown.

The choice of a good, stable framework is the cornerstone of preemptively solving system-related bugs, delivering new features or sunsetting others, and addressing performance- and security-related issues.

A good code migration strategy should include plans for how user onboarding (account migration) will work once the upgrade is completed.

The term Angular will be used for convenience and has nothing to do with an actual Angular version. In most cases the Angular version being referred to is the legacy one (AngularJS, or Angular 1.x).


Strategy

At this point, we know where we are heading, but we do not really grasp the full breadth of the problem at hand. We need at least a strategy to attack our problem and design a solution that is more practical, manageable and less scary.

I have to state that migration is better done "bottom-up" rather than "top-down". Even though that will be your decision to make, I will explain why later on.

The obvious solution to migrate to a new framework is to stop everything and start over, also known as the big bang. On the other hand, a progressive upgrade (migration) is more user-friendly than a complete re-write (stop everything and start over).

  • The dichotomy between MVC and Flux patterns is central to this effort. The idea is to rethink the data-flow strategy and adopt one-way data flow with the least code change.
  • Thinking of the application as a tree, the root will be at the router (or App Shell), and the leaves will be the smallest elements such as directives, validation code, etc.
  • The bottom-up migration strategy favors starting from the leaves and working all the way up to the root. That is, converting directives/pipes/validations first.
  • The top-down strategy favors starting with the root (app shell) and going all the way down to the leaves. This can be considered a big-bang conversion.
  • To be more realistic though, it is possible to combine top-down and bottom-up by creating an opt-in strategy: the new router decides which application to run.
  • The App Shell of the new dashboard will be up and running on a separate route, say app-url/r/. The old route hosts deprecated code, the new route hosts next-gen code. Separating the new version from the legacy one saves on JavaScript size, as only one version of the application is served for every request.
  • Once all is OK with the new dashboard, the landing page migration can kick in, marking the completion of the conversion/migration.
  • To move fast, entities involving listing will be done first. Forms and editing content will follow suit.
  • Use the classic refactoring techniques. That includes testing before and after changing any code.
  • Most framework-specific functions will be replaced; a sketch follows this list. Quick examples: $http will be replaced by axios, $q by the native JavaScript Promise API, and $timeout/$interval by setTimeout/setInterval since the $scope is not needed; the same goes for $window and $document. $filter will be replaced with native JavaScript functions. angular.[element|repeat|extend|merge] luckily have native functions that can replace them.
  • The most tedious task though is moving ng-form and $formatters/$validators/$parsers to their new equivalents. The plan is to move all listing-related features first and concentrate on forms later.
  • The landing page and the app share the same codebase. Breaking this down into two smaller, more manageable, faster-to-migrate pieces may help.
  • Image uploading and drag-and-drop, among other features, need special attention.
  • How can you start, keeping in mind that some legacy libraries (Google Analytics, authentication, etc.) may not even have an equivalent in the new (canary or next) codebase, or may not be compatible at all?
  • Let's say we figured it all out and have a ready-to-use brand-new app. At this point, we still have to figure out how to onboard existing customers to the new platform. Does that have to be big-bang onboarding or progressive onboarding?
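
To make the framework-function point above concrete, here is a minimal sketch of what replacing $http and $timeout with framework-agnostic code could look like; the endpoint and function names are illustrative, not from a real codebase.

//before: function UserService($http, $q, $timeout) { ... }
//after: plain ES6, framework-agnostic (sketch)
import axios from 'axios';

export function fetchUser(id) {
  //$http.get(...) becomes axios.get(...), which already returns a Promise
  return axios.get(`/api/users/${id}`).then((res) => res.data);
}

export function delayed(fn, ms) {
  //$timeout becomes the native setTimeout; no $scope.$apply is needed
  return new Promise((resolve) => setTimeout(() => resolve(fn()), ms));
}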

Road-map

The strategy exposed in the previous paragraph can be translated into the following steps:

  • Workflow ~ Revise the workflow in a way that lets legacy code live side by side with the new code. The workflow includes test -> build -> code, rinse and repeat.
  • TDD ~ Practically speaking, migration is comparable to another refactoring exercise, to some extent. The problem with React and Angular is that each framework has its own preferred test runner. It is imperative to choose which test runner to stick with, ideally re-using existing tests to save time.
  • Libraries ~ Both the old (AngularJS in this case) and the new (React) have to be embedded in the new application.
  • DI/IoC ~ How to deal with DI/IoC in React.
  • Router ~ Since the new application (React) and the legacy one (Angular) will run in tandem, it is better to understand and implement routing for both early on.
  • Directives ~ Angular has two types of directives from a provider point of view: custom directives (element, attribute and class) and built-in directives (all starting with "ng-").
  • Attributes ~ Since the notion of an "attribute directive" doesn't exist in React, the functionality of such modules is transferred to other layers. CSS classes are used to replace directives such as ngIf ngClass ngHide ngShow ngSwitch.
  • Components ~ Components are the building blocks of modern applications. In the earlier days of AngularJS, directives played this role, in addition to being a third-party integration entry point.
  • Events ~ Events bridge communication between components (elements) in JavaScript. React handles most events the plain JavaScript way, whereas Angular 1.x introduced new types of events along with the $scope.
  • State ~ The model expresses the state across the application. React introduces a different approach with state (the model) and props (an HTML-attribute-like model).
  • Forms ~ Changing state is most of the time done from inside forms. The model allows passing state around. To notify other sub-systems about a change, events are used.

Questions

Some of the questions answered in this primer include, but are not limited to:

  • How to connect React to WebSocket, using Socket.io ~ combining React with SocketIO for real-time goodness.
  • How does event handling work, and how to integrate with third-party libraries?
  • How to secure React beyond sharing a session with legacy code, by generating its own authentication token.
  • How to work with RESTful endpoints, and how, if possible, to add a service layer to the frontend.
  • JAMstack vs isomorphic: which approach to take? There are tools to help with pre-rendering, including caching, when the option taken is JAMstack.
  • What is the difference between these two approaches? How to deal with SEO, and can bots render a JS-only website? Also look into renderToStaticMarkup() alternatives.

Workflow

In case you are wondering what a workflow is, this refresher can help you out. You can always skip to the next chapter if you have this figured out already; at the end, there are some notes on migrating Gulp from version 3 to version 4.

The workflow dictates automated actions your task runner will take as you edit your source code.

In addition to the existing code workflow, there are going to be additional actions to take when React code is modified. Moreover, not all legacy code is ES6 compliant, let alone JSX.

In most cases, AngularJS apps used the Gulp task runner. We will keep the same, but adjust it to reflect changes in our working environment.

The existing workflow, when code changes, goes as follows:

  • When CSS/JS/HTML code is modified, gulp (or grunt) lints the code, rebuilds the changed portion, and hot-reloads or hot-replaces modules (HMR) to reflect the modification in testing browser instances.
  • The build may be accompanied by executing Unit Tests

These steps are going to be similar once React is added to the mix.

React the gulp-way

One of many good reasons to work with JavaScript these days is the ability to automate most of the workflow with just npm.

Adding the following lines to your package.json makes it possible to run these commands: 1) npm run build, to run one-time builds, good when you are in a CI environment; 2) npm run watch, to execute the build every time there is a change in your JavaScript files; 3) npm run test, to work in TDD mode.

Show me the code: add the following lines to the scripts section of package.json.

"scripts": {"build": "browserify -d -t [ babelify --presets [ @babel/preset-env  @babel/preset-react ] ] index.js | exorcist  dist/bundle.map.js > dist/bundle.min.js","watch": "watchify index.js  --debug --verbose -o 'exorcist dist/bundle.map.js > dist/bundle.min.js'","test": "gulp tdd"
}

There is something you have to do before you try those commands in your terminal though: installing the dependencies from the npm registry.

npm install --save-dev @babel/core @babel/preset-env @babel/preset-react \
  babelify browserify watchify exorcist

There is a set of issues to expect while using gulp or npm, not only with React but also with ES6 or ES7.

  • DevTools provides a way to read un-minified source code, but only if you provide the associated source maps.
  • The source-map consumer, DevTools in our case, has to be able to read all mappings produced before the concatenation and minification process.
  • Browserify doesn't let the source-map consumer map anything other than the index.js file, or whatever bundle name you provide.
  • The major setback becomes how to enable source maps when using browserify, or rollup. For those using Webpack, this set of problems is already solved out of the box.

I always like to start small and move to the complex parts later. In that spirit, this small dummy program can help as a test bed; the syntax is ES6-compatible.

//counter.js
export default function counter(count){
  //dummy implementation: returns a random number up to count
  return Math.random() * count;
}

//index.js
import counter from "./counter.js";

class Main {
  constructor(){
    this.count = counter(10);
    this.show = this.show.bind(this);
  }
  show(){
    console.log(`${this.constructor.name} - counter ${this.count}`);
  }
}

new Main().show();

This is enough to pass the smell test on the build tasks provided in 1) and 2). If all is good, the builds will be available at /path/to/project/dist. I will assume counter.js and index.js are located at the top level, meaning /path/to/project.

Build tasks

The build task is key to building front-end apps. To boost productivity and avoid manual reloads, the task runner executes in watch mode: any file change triggers an incremental re-build.

// required modules; this list follows the standard watchify + gulp recipe
var gulp = require('gulp'),
    assign = require('lodash.assign'),
    browserify = require('browserify'),
    watchify = require('watchify'),
    babelify = require('babelify'),
    source = require('vinyl-source-stream'),
    buffer = require('vinyl-buffer'),
    debug = require('gulp-debug'),
    sourcemaps = require('gulp-sourcemaps'),
    log = require('gulplog');

// add custom browserify options here
var customOpts = {
  entries: ['./path/to/index.js'],
  fast: true,
  cache: {},
  debug: true,
  basedir: __dirname,
  detectGlobals: true,
  packageCache: {}
};
var opts = assign({}, watchify.args, customOpts);
var bundler = watchify(browserify(opts));
// add transformations here
bundler.transform(babelify.configure({
  presets: ["@babel/preset-env", "@babel/preset-react"]
}));
bundler.on('update', bundle);
bundler.on('log', log.info);

function bundle() {
  return bundler.bundle()
    .on('error', console.log.bind(console))
    .pipe(source('bundle.min.js'))
    .pipe(buffer())
    .pipe(debug({title: 'watch:jsx:min processed'}))
    .pipe(sourcemaps.init({loadMaps: true}))
    .pipe(sourcemaps.write('./'))
    .pipe(gulp.dest('./dist/r/'));
}

The following section makes sure that the bundler re-runs the task every time it detects updates in watched files. The second line obviously displays log information on the console, to keep the developer in the loop.

bundler.on('update', bundle); 
bundler.on('log', log.info);

The following line is optional; you may remove it if you don't need to buffer file content, especially if you have enough RAM. But it is essential for those running on relatively smaller machines.

.pipe(buffer())

Likewise, in case you are making production-ready builds and are not willing to reveal your source code to the world, these lines may be irrelevant to your setup. The first line initializes source maps from the browserify stream. The second line writes the source maps to the destination, later determined to be ./dist/r. The end result will be written into a .map file.

.pipe(sourcemaps.init({loadMaps: true})) 
.pipe(sourcemaps.write('./'))

There is a catch though: most of the time the editor saves incomplete code blocks. Regardless, the build kicks in, and when the syntax is not correct, the compiler stops execution. The question is how to make sure that a syntax error doesn't force us to restart the task every time the build breaks.

The key is to lint and catch errors and exceptions before they get fed into the bundler process. That is where one of the following lines plays a crucial role:

bundler.bundle().on('error', console.log.bind(console))
//a regular function is required so that `this` refers to the stream
bundler.bundle().on('error', function (err) { log(err); this.emit('end'); })

There is a whole Q&A that explains that process in more detail at this link. To glue the whole task to existing gulp tasks, the bundle() function can be added, which makes it possible to watch:

gulp.task('react:watch', bundle);

The previous gulp task is equivalent to this one-liner, which you can plug into package.json's scripts section:

watchify index.js --debug --verbose -o 'exorcist dist/bundle.map.js > dist/bundle.min.js'

A quick tip before I close this chapter: it is possible to run that gulp task via an npm CLI command. For that, and in order to run a command similar to npm run react:watch or npm run build:watch via the CLI, the scripts section of the package.json at the top of your project will have an entry that looks as follows:

"scripts":{
"react:watch" : "gulp react:watch",
"build:watch" : "gulp react:watch"
}

Giving credit where it is due, the following links helped prepare this section on gulp with React:

the Browserify Handbook section on handling source maps, the exorcist npm page, the babelify npm page, the watchify npm page, the browserify builds-with-watchify documentation page, and this gist from Michal Ochman on adding source maps to a React project in a gulp task.

To keep things simple and less overwhelming, the migration from Gulp 3 to Gulp 4 will not be discussed in depth at this point. Enhanced live-reload code will be added later to help save time while adding more code.

Why consider a migration from Gulp 3 to Gulp 4?

  • Gulp version 4 is becoming more and more adopted by the community. It is an under-estimation to think webpack is replacing gulp; as a matter of fact you can have both. If you like the Node stream API, you will love gulp.
  • Some of the latest npm modules have already started to break, since they depend on gulp v4.
  • Gulp 4 makes it possible to run tasks in parallel or in series; a sketch follows this list. The legacy version supports serial task execution only.
  • A need to update the reload without relying on a plugin. This way, fewer setup steps are required on any developer's computer, without having to install more software. It also makes it possible to test with browsers that do not have livereload plugins yet.
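
As a quick taste of the composition API mentioned above, here is a minimal Gulp 4 sketch; the task names are illustrative.

var gulp = require('gulp');

//run lint then test sequentially, then style and bundle at the same time
gulp.task('build', gulp.series(
  'lint',
  'test',
  gulp.parallel('style', 'bundle')
));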

As a refresher, the workflow is: whenever code changes, a chain of actions follows. In case the change was made in a JavaScript file, the following events may occur in order: linting/hinting the code for style conformity > restyling > running tests > rebuild + minification > hot reloading.

These events can take time to complete, especially on slower computers or larger projects. Parallel execution promises better performance, which is not a bad thing to have.


Problem

  • I do not fully understand how livereload works, knowledge I need to enhance the reload tasks without hitting the refresh button.
  • I am unsure which tasks are going to break, or how long it will take to transform old tasks into newer ones.

Difference

What is the difference between LiveReload and BrowserSync?

  • BrowserSync allows multiple devices on the same Wi-Fi to access the local development server. When there is a change in frontend code, using a stream or manual strategy, BrowserSync pushes changes to all browser instances connected to the frontend server, making it possible to test multiple screens at the same time. It is easy to think of this as mirroring. It is possible to reload the whole browser as well; more on this here.
  • LiveReload makes it possible to update parts of a SPA upon a CSS/HTML change. To make sure the state is preserved, one step further is required: Hot Module Replacement, also known as HMR.


Additional Dependencies

npm install --save-dev express connect-livereload serve-static

Current status: for simplicity, the following describes the state of the application without livereload enabled.

var gulp = require('gulp'),
    http = require('http'),
    st = require('st');

//livereload configuration: port is the livereload port,
//portt is the port the static file server listens on
var lrconf = {
  port: 8282,
  portt: 82,
  start: true,
  quiet: false,
  basePath: __dirname
};

gulp.task('serve', function(done) {
  http.createServer(st({path: __dirname, index: 'index.html'})).listen(lrconf.portt, done);
});

PS: Adding livereload: path.join(__dirname, '/node_modules/livereload-js/dist/livereload.js') to the lrconf configuration makes it possible to override the location of livereload.js.

Modifications to make

The first step is all about adding Express and enabling st as middleware. The reload will rely neither on a browser plugin nor on a manual install; the middleware will handle hot-reloading code injection for us.

var express = require('express'),
    app = express(),
    st = require('st');

gulp.task('serve', function(done) {
  http.createServer(app).listen(lrconf.portt, done);
});

After that, attaching the middleware to the Express application should be easy. The order in which the livereload middleware is added is important.

app.use(require('connect-livereload')({port: lrconf.port}));
app.use(st({path: __dirname, index: 'index.html', cache: false}));
app.use(express.static(__dirname));
app.use(express.static('assets'));

The default task initiates livereload; it even becomes possible to force a reload or notify about a particular file change.

gulp.task('default', ['serve'], function() {
  livereload.listen(lrconf);
  //other gulp.watch calls go here ...
  gulp.watch('./build/**', function (file) {
    //notify livereload about the file that changed
    livereload.changed(file.path);
  });
});

The watch inside the default task can be delegated to individual tasks. For example, after compiling CSS, pushing those changes into the application session would look as follows:

//assumes concatcss = require('gulp-concat-css'), minifycss = require('gulp-minify-css'),
//es = require('event-stream') and series = require('stream-series')
gulp.task('css', function () {
  var minified = gulp.src(['./css/*.min.css']);
  var unminified = gulp.src(['./css/bootstrap.css', './css/main.css'])
    .pipe(concatcss('main.css'));
  return es.merge(series(minified, unminified))
    .pipe(concatcss('hooggy.min.css'))
    .pipe(minifycss({keepBreaks: false, advanced: false}))
    .pipe(gulp.dest('./dist/css'))
    .pipe(livereload({port: lrconf.port}));
});


Testing React Components

The testing in this section focuses more on unit testing than e2e testing. Well-tested applications tend to have a larger test codebase, and scrapping all the test code would be painful, if not a waste of time.

Luckily, some existing tools can still test parts of a React application. Enzyme provides utilities to test React components.

Enzyme makes it easy to load components in test cases. The problems, however, are:

  • How do you simulate interactions such as clicks, taps or uploading files?
  • How do spies look and work in this new framework?
const wrapper = shallow(<MyComponent/>, { context });
const instance = wrapper.dive().instance();

//OR if using mount()
const wrapper = mount(<MyComponent/>, { context });
const instance = wrapper.instance();

//calling a method directly on the instance
expect(instance.somePrivateMethod()).toEqual(true);

//simulate a click, then expect on the state
expect(wrapper.state('counter')).toBe(0);
wrapper.find('button').simulate('click');
expect(wrapper.state('counter')).toBe(1);

//do the spying here
jest.spyOn(instance, 'incrementCounter');

//finding contained components, after using mount()
expect(wrapper.find(Avatar)).to.have.length(1);
expect(wrapper.find(Avatar).length).to.equal(1);

Reading list

  • To understand more about testing react apps, I suggest you read: [1], [2], [3], [4], [5]
  • On testing components [1]

Additional materials for testing React components can be found in all these documentations: How to unit test a method of a React component? Where did the spy-ing thing come from? Testing React Components. How to test React components using Jest and Enzyme? Unit Testing Behavior of React Components with Test Driven Development. Testing React Components with Jest and Enzyme. React Testing Tutorial: Test Frameworks & Component Tests — just to name a few.

Migrating the test cases

Jasmine matchers have been used in all of the previous tests, but that is not a problem. It is possible to keep using Jasmine matchers in our project via the jasmine-expect npm library, then include import expect from 'jasmine-expect' in all tests.

To have the expectations loaded, consider npm install --save-dev jasmine jasmine-expect. Obviously the beforeEach/afterEach constructs do not change between Jasmine and Mocha. However, the following Jasmine constructs can well be replaced with their Mocha counterparts:

//Jasmine
beforeAll(() => {})
afterAll(() => {})

//Mocha
before(() => {})
after(() => {})

Sinon's fake server can be used to replace $httpBackend:

let payload = '[{ "id": 1, "name": "Gwen" },  { "id": 2, "name": "John" }]';
let server = sinon.fakeServer.create();
server.respondWith("GET", "/users", [200,{"Content-Type": "application/json" },payload]);

...
server.respond();
server.restore();
...

Code sample from David Tang’s The JS Guy Blog and the SinonJS documentation.

The quick question is how this can be packaged into libraries to allow migration of the following $httpBackend utilities.

Notes on testing asynchronous code, or flushing micro-tasks, can be found in these two links:

export function MockContactApi($httpBackend) {
  var capi = 'http://dev/api/contact';
  var cpapi = 'http://dev/api/contact/:id';
  var contact = {email: 'email@email.com'};
  $httpBackend
    .whenRoute('POST', capi)
    .respond((method, url, data, headers, params) => [200, [contact], headers]);
  $httpBackend
    .whenRoute('PUT', cpapi)
    .respond((method, url, data, headers, params) => [200, [contact], headers]);
  $httpBackend
    .whenRoute('DELETE', cpapi)
    .respond((method, url, data, headers, params) => [200, [contact], headers]);
  return $httpBackend;
}

There are a couple of things to notice in the above library. The function attaches functionality to an instance of $httpBackend initialized outside the test itself. Initializing and killing/cleaning the $httpBackend instance is managed by the caller.

//Configuration or top level initialization
let header = {contentType: { "Content-Type": "application/json" }};
let response = [{ "id": 12, "comment": "Hey there" }];

//In mocking library; the fake server expects a string body
export function MockContactApi(server){
  let hc = header.contentType;
  server.respondWith("GET", "/api/contact", [200, hc, JSON.stringify(response)]);
  server.respondWith("POST", "/api/contact", [200, hc, JSON.stringify(response)]);
  server.respondWith("DELETE", "/api/contact", [200, hc, JSON.stringify(response)]);
  return server;
}

//The way to use this somewhere else:
import { MockContactApi } from "fixtures"
import sinon from "sinon"

//declared outside the hooks so that after() can see it
let server;
before(() => {
  //respondImmediately triggers a synchronous response
  //=> no need to call server.respond()
  let config = {
    respondImmediately: true,
    autoRespond: true,
    autoRespondAfter: 2
  };
  server = sinon.createFakeServer(config);
  MockContactApi(server);
});
after(() => { server.restore() });
//Do your tests as usual
  • Additional changes come in while dealing with assertion/expectation libraries.
  • Since most AngularJS tests rely on Jasmine's expect library, and since it is not easy to re-use it with Mocha, the following tips may help with the transition:
  • expect().toBeDefined() becomes expect().to.exist or should.exist(obj)
  • The custom function checker expect().toBeFunction() becomes expect().to.be.a('function') or assert.isFunction()
  • expect().toBe() becomes expect().to.equal()
  • expect().toThrow() becomes expect().to.throw()
  • expect(spy).toHaveBeenCalledOnce() becomes sinon.assert.calledOnce(spy)
  • expect(spy).toHaveBeenCalled() becomes expect(spy).called
  • In the two previous instances, expect is loaded from the Chai library as follows:
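
Based on the import used later in this guide, the Chai setup looks as follows. Note that spy assertions in the expect(spy).called style additionally assume the sinon-chai plugin, which the original does not mention explicitly.

import chai from 'chai'
const expect = chai.expect

//assumption: needed for expect(spy).called style assertions
import sinonChai from 'sinon-chai'
chai.use(sinonChai)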

The spying and stubbing rely on utilities provided by the Sinon library. There will be a couple of tweaks to make the whole thing work as expected.

Let's start by defining a spy, a stub and a mock; this will help us navigate the various implementations around the corner. A spy records how a function is called without changing its behavior; a stub also replaces the function's behavior; a mock bundles pre-programmed expectations that can be verified afterwards.
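
Here is a minimal sketch of the three, using the standard Sinon API on an illustrative service object:

const service = { fetch: () => 'real' };

//a spy records calls but keeps the real behavior
const spy = sinon.spy(service, 'fetch');
service.fetch();
console.log(spy.calledOnce); // true
spy.restore();

//a stub records calls and replaces the behavior
const stub = sinon.stub(service, 'fetch').returns('fake');
console.log(service.fetch()); // 'fake'
stub.restore();

//a mock bundles expectations that are verified afterwards
const mock = sinon.mock(service);
mock.expects('fetch').once();
service.fetch();
mock.verify(); // throws if expectations were not met
mock.restore();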

  • spyOn(obj, 'fn').and.callThrough() becomes sinon.spy(obj, 'fn')
  • spyOn(obj, 'fn').and.callFake(() => { return false }) becomes sinon.stub(obj, 'fn', () => {}); the newer API looks like sinon.stub(obj, 'fn').callsFake(() => {})
  • spyOn(UserService, 'getUserId').and.returnValue(1) becomes sinon.stub(UserService, 'getUserId').returns(1)
  • jasmine.createSpy('next').and.callFake(() => { return false }) becomes var next = sinon.fake(fn), or sinon.fake.yields(() => {}) when the fake should invoke a callback; jasmine.createSpy('on').and.callFake(() => {}) may also become sinon.stub(obj, 'fn').callsFake(() => {})
  • Or simply var next = sinon.fake();

There is a change in Sinon API that may result in the following error:

TypeError: stub(obj, 'meth', fn) has been removed, see documentation

To correct this error, every sinon.stub(obj, 'fn', () => {}) becomes sinon.stub(obj, 'fn').callsFake(() => {}). For a more complex function call such as:

spyOn(SomeService, 'fetchItem').and.callFake((params) => {
  this.deferred = this.$q.defer();
  this.deferred.resolve({ data: FakeData });
  return this.deferred.promise;
});

//Can be replaced with
sinon.replace(SomeService, 'fetchItem', (params) => {
  this.deferred = this.$q.defer();
  this.deferred.resolve({ data: FakeData });
  return this.deferred.promise;
});

//In case the param is indeed a function; note the regular function,
//so that `arguments` is available inside the replacement
var callback = sinon.fake();
sinon.replace(SomeService, 'fetchItem', function (callback) {
  return callback.apply(this, arguments);
});

The special case where there is a clear need to have a callback instead of response looks as follows:

this.server.respondWith('GET', 
"/route/to/server?start_date=2012-1-1",
(xhr) => {
xhr.respond(200, { "Content-Type": "application/json" },'{}');
});

The problem with the previous code is that Sinon's fake server gets confused by the ?. The following approach remedies the issue:

let url = "/route/to/server?start_date=2012-1-1"
this.server.respondWith('GET',
url.replace('?', '\\?'),
xhr => xhr.respond( 200, { "Content-Type": "application/json",
'{}');
});

Both the code and the fix are from this bug reported on Github.

The previous callback function gives the flexibility to work with $httpBackend that looked as follows:

$httpBackend.whenRoute('GET', 'http://dev/api/item/:owner')
  .respond(function (method, url, data, headers, params) {
    return [200, ApiItem, headers];
  });

//In mocking library; note the fake server has no route-parameter support,
//so a regular expression stands in for :owner, and the body must be a string
export function MockItemApi(server){
  server.respondWith("GET",
    /api\/item\/(\w+)/,
    xhr => xhr.respond(200, header.contentType, JSON.stringify(ApiItem)));
  return server;
}

There are problems with fakeServer: it sometimes fails to intercept requests made by third-party libraries such as axios. The cure to this problem is two-fold: 1) it is possible to mock axios methods, which will make it hard to mock server responses; 2) mock any HTTP request using libraries such as nock, which may be useful since you can re-use most of the server-side mock data.

More on this can be read on the following links 1, 2, 3

A good alternative seems to be nock, the reason being its ability to intercept HTTP-based requests instead of mocking arguments one by one.

Therefore, the following approach:

export function MockMeApi(server) {
  let hc = header.contentType;
  server.respondWith("GET", 'api/me', [200, hc, `${UserData}`]);
  server.respondWith("PUT", 'api/me', [200, hc, `${UserData}`]);
  return server;
}

Becomes — the following approach:

//The server instance
let server = nock('http://localhost');
let payload = {status: 'OK', exact: UserData};
//nock paths start with a leading slash
export function MockMeApi(server) {
  server.get('/api/me').reply(200, payload);
  server.put('/api/me').reply(200, payload);
  return server;
}

Always remember to add this library at the top of your test files. Chai provides a set of assertions, but any other assertion library can work really well.

import chai from 'chai'
const expect = chai.expect;

Issues relating to “Not implemented: window.open”

  • The main issue in this discussion is that some APIs are not available while testing in a JSDOM environment.
  • When there is no replacement for such an API, an error similar to "Not implemented: window.open" is thrown.
  • The main reason this happens is that JSDOM has not implemented the browser behaviors corresponding to those APIs.

Which functions are affected? Among others: window.scrollTo, window.alert, window.location and window.resizeTo. So how do we fix those issues? We have to add those items manually. There are a few ways to do it: the first is using a third-party solution such as webextensions-jsdom, the second is extending the environment setup as explained in this article, and the last is to mock missing functions on a case-by-case basis.

//@link https://www.codementor.io/pkodmad/dom-testing-react-application-jest-k4ll4f8sd
//in jsdom environment declaration we can have something like:
global.window.resizeTo = (width, height) => {
  global.window.innerWidth = width || global.window.innerWidth;
  global.window.innerHeight = height || global.window.innerHeight;
  global.window.dispatchEvent(new Event('resize'));
};

//@link https://github.com/facebook/jest/issues/2460#issuecomment-318853180
//Other example of changing other objects
Object.defineProperty(window.location, "protocol", {writable: true, value: "http:"});
Object.defineProperty(window.location, "host", {writable: true, value: "localhost:14187"});
Object.defineProperty(window.location, "port", {writable: true, value: "14187"});

App Shell

The App Shell will be defined as a building block of the application. That includes the index template with all required libraries loaded, styles and service workers.

The App Shell will be responsible for showing the right application, depending on the route requested from the server. Login should keep working under the legacy application and be shared with the latest version.
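
A minimal sketch of that route-based decision, assuming the /r/ prefix convention adopted earlier; the module names and the mount point are illustrative.

//index.js: the shell boots one application per request (sketch)
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'; // illustrative React root component

if (window.location.pathname.indexOf('/r/') === 0) {
  //new route: boot the React application
  ReactDOM.render(<App/>, document.getElementById('root'));
} else {
  //old route: fall back to the legacy AngularJS bootstrap
  window.angular.bootstrap(document.documentElement, ['app']);
}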

The Router

The legacy router will not change, as long as the future application is not done yet.

The ui-router from AngularJS will be referred to as the "legacy router", and the new React router will be referred to as the "router".

The future router will be a part of the upgrade. The frontend side of the new application will have an extra parameter to make sure the server renders properly.

What does the new router look like?

  • The legacy router remains on / route.
  • The future router will be located at /r/. The “r” stands for React.

What is the difference with the legacy router?

  • The legacy router uses $state from the ui-router library, as well as $location from the ngRoute library
  • The new library has to support parameters
  • Route libraries worth looking into are Reach Router and React Router.

Reading List

  • The React router is simpler, but harder to work with when coming from the Angular world, at least when it comes to relative routing. To get familiar with some concepts these reads will help you: [1], [2], [3]

The Template

The template section is under heavy editing; you can skip it for now.

Steps to migrate an Angular form template

The rule of thumb is to start with the easy parts first then move the harder parts next.

Every component and directive comes with its own template in Angular. The linking phase attaches the template to the actual component/directive.

Components, as well as directives, have an initialization phase. A special binding construct used in templates indicates when the initialization function kicks in, in other words where it hooks into the digest cycle. That binding is the ng-init construct.

Since the initialization code will be part of React's componentDidMount() lifecycle hook, it is safe to remove all occurrences of ng-init="vm.init()" from template files.

The corollary of this approach is that the $onDestroy code, registered via $scope.$on('$destroy', () => {}), will be moved to componentWillUnmount(). There is no special binding for a destroyed scope in the templates.

The above two points can be summarized in the following code snippets:

componentDidMount(){
  /* ng-init, $onInit or ngInit() code goes here */
}
componentWillUnmount(){
  /* $onDestroy() code goes here */
}
componentDidCatch(error, info){ /**/ }

The form tags in a standalone template can be copied over to the render() method block. The method will be in a new React form component, and this component can have exactly the same name as the original Angular component. More details on form migration are provided in the Forms section below.

//Angular template
<form name="vm.name"/>
//React render method
render() { return <form name="name"/>; }

Mass replacement tactics such as search and replace can be useful. We have to provide an environment where the collateral damage is minimal.

The form, like any other template, will need additional tune-ups such as CSS class name, validation and string interpolation, just to name a few.

  • class="some-class" is renamed to className="some-class"
  • novalidate becomes noValidate to match the JSX equivalent
  • Template-related {{ and }} are replaced with single { and }
  • Changing the binding to an actual React model:
  • ng-bind="vm.model.something" becomes {this.state.something}

The state management of MVC frameworks is really different from Flux. This divide reverberates in the way Angular and React manage state.

Since our task is to migrate AngularJS to React, we will also have to figure out how AngularJS models translate into Flux state management. This is where Flux implementations such as Redux or MobX come in handy.

State transfer from parent to children is done either via props or via shared state management tools such as Redux or MobX.

  • vm.model becomes this.state
  • Replace ng-show="vm.model.show" by adding conditional CSS styling: className={this.state.show ? "full-class-name" : "hidden"}
  • <b ng-show="vm.model.show"/> may also become <b className={}/>
  • Alternatively, ng-hide gets the same modifications

Conditional rendering is tightly coupled to state management — state management is not limited to React since the model is also the state. By conditional rendering, we understand CSS class change based on the state of a fragment, dynamic backgrounds, and images or any DOM mutation based on data provided by a server.

Style, especially a background image:

style="background-image: url({{this.model.item.photo}});"

The dynamic image provided by a photo on a given model can be restyled using JSX's style construct; note the url() wrapper is still required:

let backgroundImage = `url(${modelItem.photo})`;
//in jsx
render(){ return <div style={{backgroundImage: backgroundImage}}/>; }

Conditional rendering driven by a CSS class change is mostly located in ng-class and can be quite complex. An even more complex use case would be using a function to compute the CSS class outcome. By default most, if not all, such constructs will be moved to className. That way, we can focus on some exceptions or special use cases.

//Angular
ng-class="{red: vm.model.isRed, 'alternative-class': vm.model.alt}"
//React: note the spaces when concatenating class names
className={
  "red "
  + (this.props.show ? "show " : "hidden ")
  + (this.props.alt ? "alternative-class" : "")
}
//or with a template literal
className={
  `red ${this.props.show ? "show" : "hidden"}
  ${this.props.alt ? "alternative-class" : ""}`
}

Conditional rendering driven by hiding the entire DOM node is mostly located in ng-if constructs. In some cases, this directive was used with a dynamic CSS class for "show" and "hide" scenarios. Most React apps use JSX as a template engine, which makes it easy to mix code and HTML tags. Our task is not to opine on the practice but to make the migration work. The following constructs will cover most use cases:

<tag ng-if="vm.mode.show"/>
<tag ng-show="vm.model.show"/>
<tag ng-hide="!vm.model.show"/>
{this.state.show && <Tag/>}
{<Tag className=`${this.state.show? 'show':'hide'}`/>

Since Angular comes with full-fledged form processing, special directives were used to mimic or even replace native validators. A quick and dirty trick is to fall back to native validators. It is possible to rely completely on the HTMLFormElement API if introducing new libraries looks like a tedious task. You can always mix both: use the HTMLFormElement API validators and add special use-case validators to a library.

From the above reasoning, ng-minlength|ng-maxlength|ng-disabled|ng-required and all other validation directives will become minLength|maxLength|disabled|required.

  • ng-required="!vm.model.name" will also become required
  • ng-disabled should be replaced with disabled={expression}, for example disabled={!this.state.isValid} on a save button
  • Replace the model: ng-model is replaced with defaultValue
  • The model has to be attached to the state instead of vm.
  • ng-click="vm.fn()" is replaced with onClick={this.fn} or onClick={(e) => this.fn()}
  • Save buttons will all have <button type="submit"/>
  • Forms will have to handle the onSubmit event; all submit forms get <form onSubmit={this.handleSubmit}/>, as sketched after this list
  • That means onClick={(event) => this.save(event)} will be replaced with type="submit" instead
  • Schedule attribute directives for future implementations => capitalize="@todo [migrate]"
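
Putting those rules together, here is a minimal sketch of a migrated form; the component and prop names are illustrative, and it leans on native validation plus the FormData API mentioned later in this guide.

import React, { Component } from 'react';

class ContactForm extends Component {
  constructor(props) {
    super(props);
    this.handleSubmit = this.handleSubmit.bind(this);
  }
  handleSubmit(event) {
    event.preventDefault();
    //FormData reads every named field; no controlled inputs required
    const data = new FormData(event.target);
    this.props.onSave(data.get('email')); // onSave is an illustrative prop
  }
  render() {
    return (
      <form onSubmit={this.handleSubmit}>
        {/* ng-model/ng-required become defaultValue/required */}
        <input name="email" type="email" required defaultValue={this.props.email}/>
        <button type="submit">Save</button>
      </form>
    );
  }
}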

Code Samples Used While Migrating

Migration looks like a straight line on paper, but is more of a zigzag route in reality. Similar sub-components can be moved in bigger chunks; an example is migrating all form templates at the same time. If you take the whole application as a map, you can carve major components into smaller, manageable sub-components that can be migrated more easily.

Moving a conditional ng-class template class over to React:

<AngularTag class="tag" ng-class="{active: vm.isAuthenticated()}" /><JsxTag className={this.isAuthenticated() ? 'tag': 'tag active'}/>
<JsxTag className={this.getClassList()}/>

Parking attribute directives for later updates, especially when not sure how to handle the directive at hand:

<Tag custom-directive/>
<Tag custom-directive="@todo migrate this later"/>

Moving the style attribute

<Tag style="background-image: url({vm.user.picture})"/><Tag styles={{backgroundImage: this.state.user.picture}}/>
<Tag styles={this.styles}/>

Solve other errors as they come in. Binding to a tag is simplified as in the following example:

<Tag ng-bind="vm.name"/>
<Tag>{this.state.name || this.props.name}</Tag>

Binding ng-model

<InputTag ng-model="vm.model.name"/>
<InputTag value={this.state.name}/>

ng-click is replaced with onClick

<Tag ng-click="vm.toogle()"/><Tag onClick={this.toogle}/> //OR
<Tag onClick={(e) => this.toogle(STAT_VARIABLE)}/>

Link

<a ui-sref="workspace.orders">Orders</a>
<Link onClick={(e) => this.toogle(e) } to="workspace/orders">Orders</Link>

Show/Hide or the Conditional rendering within JSX

<Tag ng-if="vm.name"/>
<Tag ng-show="vm.name" ng-hide="!vm.name"/>
{ !this.state.name && <Tag {...this.state} {...this.props}/> }

MouseEvent

<Tag ng-mouseenter="active = true" ng-mouseleave="active = false"/>

class CustomTag extends Component{
  constructor(props){
    super(props);
    this.mouseEnter = this.mouseEnter.bind(this);
    this.mouseLeave = this.mouseLeave.bind(this);
    this.mouseOver = this.mouseOver.bind(this);
    this.mouseOut = this.mouseOut.bind(this);
  }
  render(){
    return (<Tag onMouseEnter={(e) => this.mouseEnter(e)} onMouseLeave={(e) => this.mouseLeave(e)}/>);
  }
}

Router-UI constructs

$stateProvider
.state('checkout', { url: '/checkout', component: 'checkout' })
.state('checkout.item', { url: '/item/:id', component: 'itemCheckout' })
.state('checkout.contract', { url: '/:oid/contract', component: 'contractCheckout'})
.state('checkout.payment', { url: '/:oid/payment', component: 'paymentCheckout'});

The equivalent with React Router; the Checkout component is responsible for rendering the right chunk:

<Switch>
<Route
exact
props={this.props}
path="/checkout/:item"
component={Checkout} />
<Route
exact
props={this.props}
path="/checkout/:oid/:direction(contract|payment)"
component={Checkout} />
</Switch>

Commonly Asked Questions

ng-click is directly replaced with onClick. The long form of the syntax looks as in the following statements:

<Tag ng-click="vm.toogle()"/>
<Tag onClick={this.toogle}/> //OR
<Tag onClick={(e) => this.toogle(STAT_VARIABLE)}/>

Using state with hyperlinks

<a ui-sref="workspace.orders">Orders</a>
<Link onClick={(e) => this.toogle(e) } to="workspace/orders">Orders</Link>

There is a substantial difference between React Router and the Angular/AngularJS routing mechanism, especially where UI-Router has been adopted in legacy AngularJS apps. UI-Router is a state-based router and is processed by the Angular compiler. This made it possible to use a link with a state that accompanies it.

React Router is a whole new story. It evolved as the framework matured. The newer versions of React Router (v4+) embrace the React way of doing business. There is a <Redirect/> component used to tell the router programmatically where to go.

There is also a history object passed to children via props.

evaluateWhereToGo(event){
  if (this.state.homepage){ return <Redirect to='/homepage'/>; }
  else if (this.state.dashboard){ this.props.history.push('/dashboard'); }
  else if (this.state.auth){ this.props.history.push('/auth'); }
}


To sum up:

  • All directives that have a template become de facto components
  • There is no need for template files: every template moves inside the render() function of the corresponding component
  • Attribute directives will be replaced by a validator mixin
  • Attribute directives and filters (or pipes) will be replaced by a mixin
  • Every function has its replacement bound in the constructor, or comes from a mixin library

The attribute directives are their own kind. Like filters, they can be used to do validation work, but also formatting work when used with form input. Since our approach is to go with uncontrolled components as much as we can, we can still let input fields that need formatting or as-you-type validation be partially controlled components. We will rely heavily on the FormData API. If the browser doesn't support this specification, we will inject a polyfill.

So it is time to migrate validation-related attribute directives to validators, and formatting-related ones (parsers) to formatters. These two are going to be located in a utils library.
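
A minimal sketch of such a utils library; the validator and formatter names are illustrative.

//utils/validators.js
export const Validators = {
  required: (value) => value != null && String(value).trim() !== '',
  minLength: (min) => (value) => String(value || '').length >= min
};

//utils/formatters.js
export const Formatters = {
  //formats 1234.5 as "1,234.50" (en-US locale, illustrative)
  currency: (value) => Number(value).toLocaleString('en-US', { minimumFractionDigits: 2 })
};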

Note: some validators are not going to be used as they were in AngularJS; a quick example that comes to mind is Stripe. Stripe has a whole new way of being implemented in React. There has to be a way to explore this, somewhere in this guide.

The problem with attribute directives, or the problem they solve, is that they integrate with third-party libraries.

Some of these libraries do not have support for ES6 or React. So this may be a setback to moving to React, especially when development is not done in a browser.

There is a React Script Loader that helps mitigate this issue.

Form validation and error reporting reading list: 1 — Instant Form Fields Validation with React; 2 — Ana Sampaio's Form Validation with (vanilla) JavaScript, on The UI Files blog; 3 — Chris Ferdinandi's series on vanilla form validation, plus a use case on MailChimp form validation on CSS-Tricks.com.

The second wave of migration can be done on attribute directives whose role is to integrate with third-party libraries, such as jQuery widgets, etc.

Managing The State

The role of the state is to keep a memory of everything that happens within the application. There are two distinct parts to take into account here: the model(domain) and properties(for HTML elements).

Angular uses the $scope for both properties and the domain. Some applications adopted conventions that keep the model organized in its own right, which makes it easier to move.
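
A minimal sketch of where each part lands; the component and prop names are illustrative.

//AngularJS: $scope.vm = { user: {...}, isOpen: false }
class Profile extends React.Component {
  constructor(props){
    super(props);
    //the domain model moves to state
    this.state = { user: props.initialUser, isOpen: false };
  }
  render(){
    //element-level values travel down as props
    return <Avatar user={this.state.user} size={this.props.size}/>;
  }
}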


The Service Layer

The Service encapsulates business logic and makes it possible to architect a framework independent application. The service layer makes it easy to manage the domain in a framework agnostic way.

The work here goes beyond migration: it also introduces a service layer to React. That requires rethinking service implementation in a more profound way. React doesn't have an Inversion of Control (IoC) container as Angular does, and that is by design. Nevertheless, adopting a more flexible approach and using alternatives such as constructor injection, in addition to a service factory, makes for a less painful transition.

To be successful with the transition though, we have to break away from the legacy framework by removing providers, factories and other framework-specific constructs such as $log, $q, $auth, etc.


Some things look alike across AngularJS applications; services and factories are among them. That is why, in this effort, I merged all services into one giant file. That way search/replace operations will be faster.

The same giant file can also be tested in isolation. Once all services become React-ready, moving each service into its own file can be done as part of further refactoring.

To put things in perspective once again: AngularJS services are designed to be the source of truth. The new services will serve mainly as mixins, or as a way to re-use similar functions in various places. The services will not be singletons as they used to be in Angular, but they will continue to be injected as objects.

The list of challenges to adapt the service layer to react’s realities:

  • Services are designed to hold state in Angular; the React store is designed to hold state, therefore a source of truth (Flux), or the single source of truth when using Redux
  • The notions of a model and application state are not well defined in Angular. In fact, the $scope can be used for both the model and context.
  • React uses state for the model, and props for attribute properties (and other properties that will not change during the component lifecycle)
  • Services were written in ES5, where it was easy to attach static variables; ES6 classes have no static instance fields yet, but static getters do the job:
class Settings{
  static get SOMETHING(){ return "something"; }
  constructor(props){}
  getSomething(){ return Settings.SOMETHING; }
}

You can declare and access static variables as shown in the following link.

  • Remove AnyService.$inject = ['StateService', 'AuthService']; from the concatenated service file
  • Remove angular.module('app').factory('WebSocketService', WebSocketService); from the giant file
  • Transform service functions into ES6 classes.
  • Remove dependencies that are not necessarily needed, such as $filter, $http (replaced with axios), $q and $log
  • Make all classes factories instead; the services will be exported as ready-to-use objects.

Removing the dependency on $q

$q was a good alternative when there was no native support for Promises, but now we can use the native Promise API. The shift looks as follows:

function doAsyncTask(){
  var deferred = $q.defer();
  doSomeRequest(function(err, response){
    if(err) return deferred.reject(err);
    return deferred.resolve(response);
  });
  return deferred.promise;
}

Introducing the native Promise, the above function becomes:

function doAsyncTask(){
  return new Promise(function(resolve, reject){
    return doSomeRequest(function(err, response){
      if(err) return reject(err);
      return resolve(response);
    });
  });
}

Re-using Actions + Reducers

  • The challenge of implementing Flux in AngularJS was to adapt the concept so that action creators and reducers could be injected into existing classes.
  • The challenge of migrating the whole thing to React is to make sure the classes can still be injected into React's service layer. React doesn't have an inversion of control container.
//The Actions
function Actions(){
  return {ADD_SOMETHING: 'add:something'};
}
function ActionCreator(Actions){
  let service = {addSomething: addSomething};
  return service;
  function addSomething(){}
}
ActionCreator.$inject = ['Actions'];
angular.module('app').factory('Actions', Actions);
angular.module('app').factory('ActionCreator', ActionCreator);

//The Reducers
function AppReducer(){
  return Redux.combineReducers({
    thing: function thingReducer(){ /** reducer implementation */ }
  });
}
function StateService(AppReducer){
  return Redux.createStore(AppReducer, {
    thing: defaultThingState
  });
}
function Flux(){ return Redux; }
angular
  .module('app')
  .service('AppReducer', AppReducer)
  .factory('StateService', StateService)
  .factory('Redux', Flux);

The above code can easily be modified as follows:

//now importing from the redux package directly
import { combineReducers, createStore } from 'redux';

//The Actions
function ActionsFactory(){
  return {ADD_SOMETHING: 'add:something'};
}
export const Actions = new ActionsFactory();

class ActionCreatorFactory{
  constructor(Actions){
    this.addSomething = this.addSomething.bind(this);
  }
  addSomething(){}
}
export const ActionCreator = new ActionCreatorFactory(Actions);

//The Reducers
function AppReducerFactory(){
  return combineReducers({
    thing: function thingReducer(){ /** reducer implementation */ }
  });
}
export const AppReducer = new AppReducerFactory();

function StateFactory(AppReducer){
  return createStore(AppReducer, {
    thing: defaultThingState
  });
}
export const StateService = new StateFactory(AppReducer);

Removing the dependency on $http

  • $http provided a set of utilities that were really invaluable. There is a lesson to learn from using it though.
  • The number of places $http is used in the application can make a migration harder. Adopting skinny controllers allowed moving most, if not all, HTTP requests into a service, and using that service instead.
  • In the same spirit, the new architecture will involve an HTTP service wrapper that integrates directly with a third-party library.
  • The third-party library can change, be deprecated or be terminated at any time; our service wrapper will stay.

Challenges

  • An interceptor that can add proper authentication options to each request
  • Providing the four basic methods, compatible with $http's POST, PUT, DELETE and GET; a sketch follows this list
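
A minimal sketch of such a wrapper around axios; the token storage and the base URL are assumptions, not prescriptions.

import axios from 'axios';

const client = axios.create({ baseURL: '/api' });

//the interceptor adds authentication to every outgoing request
client.interceptors.request.use((config) => {
  const token = window.localStorage.getItem('auth_token'); // assumed storage
  if (token) config.headers.Authorization = `Bearer ${token}`;
  return config;
});

//the four basic methods, mirroring $http's surface
export const HttpService = {
  get: (url, config) => client.get(url, config),
  post: (url, data, config) => client.post(url, data, config),
  put: (url, data, config) => client.put(url, data, config),
  delete: (url, config) => client.delete(url, config)
};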

Strategy to integrate Redux to Services

There are two kinds of services from a functionality perspective; for simplicity, I will call them first tier and second tier services.

A first tier service communicates directly with an external service. It also performs essential business logic, such as formatting, and updates the global storage.

// do the request section
class FirstTierService {
  doSearchPeopleWork(){
    return new Promise((resolve, reject) => {
      return doSearchPeopleRequest((err, res) => {
        if(err) return reject(err);
        let payload = Object.assign({}, {entities: res.data});
        //adding all people search results to the global storage
        StateService.dispatch(createAddPeopleAction(payload));
        return resolve(res);
      });
    });
  }
}

The second tier service, on the other hand, uses the first tier service when it comes to communicating with the external world. The second tier service updates the global storage as well.

class SecondTierService {
  constructor(FirstTierInstance){
    this.fti = FirstTierInstance;
  }
  addPeopleToOrders(params){
    return this.fti.doSearchPeopleWork(params.region)
      .then((response) => doSearchOrdersRequest(params.region))
      .then((response) => {
        //getting orders + people + formatting the association
        let payload = formatUserOrderAssociation();
        StateService.dispatch(createAddOrderPeopleAssociationAction(payload));
      });
  }
}

As you can see in this second example, the second tier service uses the first tier's doSearchPeopleWork(), which wraps doSearchPeopleRequest(), to make calls outside the application. The first tier is also well positioned to integrate third-party communication libraries (WebSocket, axios, etc.) in a more abstract way.

Reading List

  • To understand more about adding a service layer — [1], [2]

Component

Components are the building blocks of not only React apps but also Angular. The big difference between the two frameworks is React's lack of directives, which played a major role in Angular as an integration point with third-party libraries.

Smart and dumb are two distinct types of components. Other names used for a dumb component are presentational, pure, stateless, UI component, or simply component.

For the smart component, other names include stateful or container component. The reason for the name is that these components hold the truth about the local state, and know how to update it (from the global state/store).

On the other hand, a Higher Order Component is more of a pattern than an actual category of component. This pattern is used for composition (some sort of DI).

To structure the application with re-usability in mind, components have to be simple, yet highly composable. For that, each route will have its own higher-level (smart) component. Second-level nodes are going to be other smart components, and third-level nodes will be a mix of smart and dumb components. That is, forms are going to be located at the third level most of the time.

Smart Component Candidates

At this point, you may be asking which Angular constructs can become classic React Components. Here is the list:

  • Controller — Controllers directly attached to Routes, Components or Directives are candidates to become Smart Components. In case there is no template associated with the controller, a brand-new template (container) can be added. The roles of the Smart Component listed below show clearly why these kinds of components are well aligned to replace controllers in MVC frameworks.
  • Directive Business Logic — This tricky part becomes the Smart Component’s functions/handlers.
  • Directive Event Handling — Event handling, be it in the Controller or the linking section of the directive, becomes functions in the Smart Component.

Role of Smart Component

  • They read from a global store, update the local state with relevant changes, and subsequently pass them to their child components. Reading from the global store is done via a subscribe operation.
  • They read from children’s local states, via events and message passing, and update the global store with the latest state.
  • They make requests to external endpoints and update the global store if required. The requests are either HTTP or WebSocket in nature.
  • They communicate application state to child components (via state and props) — a sketch follows this list.
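
Putting those roles together, a smart component might look like the following sketch — PeopleContainer, PeopleList and SearchService are hypothetical, and StateService is assumed to expose a Redux-like subscribe/getState:

class PeopleContainer extends Component {
  constructor(props) {
    super(props);
    this.state = { people: [] };
    this.handleSearch = this.handleSearch.bind(this);
  }
  componentDidMount() {
    // reading from the global store is done via a subscribe operation
    this.unsubscribe = StateService.subscribe(() => {
      this.setState({ people: StateService.getState().entities });
    });
  }
  componentWillUnmount() {
    this.unsubscribe();
  }
  handleSearch(query) {
    // events coming up from children trigger requests to external endpoints
    SearchService.doSearchPeopleWork(query);
  }
  render() {
    // communicate application state to child components via props
    return <PeopleList people={this.state.people} onSearch={this.handleSearch}/>;
  }
}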

Dumb Component Candidates

Following are some candidates that can be transformed into Dumb Components:

  • Directives — Angular has three main kinds of custom directives: attribute, class, and element. Directives having a template, or serving as an integration point with a third-party library, will become Dumb Components in React.
  • Templates — The single responsibility principle with React is applied, in part, by isolating the presentation layer from the logic layer. So all templates become de facto Dumb Components.

Role of Dumb Components

  • They communicate events such as clicks, drops or touches upwards, back to Smart Components for further processing.
  • If the component is an actual partial (view or layout) of a form, events such as submits or input changes are also communicated to Smart Components for further processing.
  • The props are objects used to pass down the event handlers needed to deal with events from inside a Dumb Component.
  • The state objects are used to communicate the model part. The model is a data structure.
  • They play the same role as directives when it comes to third-party library integration. Events produced by third-party libraries are passed over to the Dumb Component, which in turn forwards such events to parent components.
  • They serve as templates, known as layouts in the React world — a small sketch follows this list.
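
As a counterpart to the smart component sketch above, a dumb component only renders its props and communicates events upwards — PeopleList is hypothetical:

// a presentational (dumb) component: props down, events up
const PeopleList = ({ people, onSearch }) => (
  <div>
    <button onClick={() => onSearch('everyone')}>Search</button>
    <ul>
      {people.map(person => <li key={person.id}>{person.name}</li>)}
    </ul>
  </div>
);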

Directives

Directives can be put into 3 major categories.

The first category is for directives that have a template and can easily be made components. The second category is for directives that integrate a library into the application. The third and last category is for directives used to validate or transform an input. These three categories can be migrated as follows:

Template-enabled directives are directly transformed into Components. Directives that integrate a third-party library are added via event handling and elRef. The last category becomes a mixin, used individually through events such as onChange, etc.

Attribute Directives with a link function quite often take the following form:

$($element)
  .find('input[type=text]')
  .on('keyup|event_name', _handler);

Most of those functions can move into Component form, which most of the time looks like the following:

class SomeComponent extends Component {
  constructor(props) {
    super(props);
    this.keyUpHandler = this.keyUpHandler.bind(this);
  }
  render() {
    return (<input
      type="text"
      onKeyUp={event => this.keyUpHandler(event)}/>
    );
  }
}

In other cases, there may be a need to change classes or other UI parameters using angular.element helpers. As a matter of fact, most of these constructs can easily be replaced with alternatives natively available in React.

function link($scope, $element, $attrs, $ctrl) {
  angular.element($element).addClass('some-class');
}

When working inside components, nothing prevents a CSS class from being determined by the value of a state.

class SomeComponent extends Component {
  constructor(props) {
    super(props);
    this.state = { someClass: 'some-class' };
  }
  render() {
    return (<input className={this.state.someClass}/>);
  }
}

We can argue whether this is a good practice or not, but it surely does the trick — among the alternatives in the wild.

But what if the directive integrates with a third-party library, such as a jQuery plugin? Below is an example of how to integrate in a way that lets jQuery do its job and notify React about any changes it has made. Likewise, it is possible to let React communicate to the jQuery library whatever change React has made.

class Chosen extends Component {
  componentDidMount() {
    this.$el = $(this.el);
    this.$el.chosen();
    this.handleChange = this.handleChange.bind(this);
    // delegating the on-change handling to jQuery's change event
    this.$el.on('change', this.handleChange);
  }

  // communicate changes made by React back to jQuery
  componentDidUpdate(prevProps) {
    if (prevProps.children !== this.props.children) {
      this.$el.trigger("chosen:updated");
    }
  }

  // signing off
  componentWillUnmount() {
    this.$el.off('change', this.handleChange);
    this.$el.chosen('destroy');
  }

  handleChange(e) {
    // using the change handler inherited from the caller
    this.props.onChange(e.target.value);
  }

  render() {
    return (
      <div>
        <select className="Chosen-select" ref={el => this.el = el}>
          {this.props.children}
        </select>
      </div>
    );
  }
}

Code example from reactjs.org. Using ref constructs is discussed in more detail here. Investigating why it takes browserify longer to recompile: 1

Adding Third-Party Libraries and SDKs to a React Application

There is no requirement to use third-party libraries in a new React application. However, since we are migrating a legacy application that may already depend deeply on third-party libraries such as jQuery or Socket.io — and SDKs such as Stripe, or the Facebook/Google SDKs — we should be well prepared to figure out how to proceed.

On the other hand, when writing an application from scratch, you may well find integrations ready to be used with the new React ecosystem, or figure out how to make things work on your own.

In both scenarios, the following sections will help you succeed with the new integrations.

Adding jQuery to React

Since the working environment of a React project is aligned more with ES6 — and Node.js at large — it makes sense to install jQuery via npm. This also leaves the door open to tree shaking later, to reduce the size of the bundle. The command npm install jquery will be used.

There are some suggestions to load jQuery using a config file — or a custom loader. That is your own call to make, but the idea goes as follows.

The first thing is to create the file, under /utils or anywhere else depending on the structure of your application:

touch config.js

Other filenames to consider are loader.js or thirdparty.js, to name a few. You can be as creative as you want. Then share the library via a regular export — this bridges the require and module worlds while we wait for the jQuery team to support the ES6 module loader.

// either CommonJS: const $ = require('jquery');
import $ from 'jquery';
window.$ = window.jQuery = $;
export default $;

To use the library in a way that allows it to be bundled by our custom build task:

import "config"; //alternatively 
import $ from "config";

Reading List — About Adding Libraries to React: 1, 2

There are a couple of challenges to figure out while working with third-party libraries. There are npm libraries designed to wrap each service, such as react-ga or react-google-login. The problem with using these is obviously the maintenance of the library; another is adding an additional dependency to the codebase. The quest is to come up with a pattern that covers all aspects of loading and using a third-party library, without bloating dependencies.

There was a library, ReactScriptLoader, that uses Mixins. But Mixins are considered harmful.

  • Adding Facebook Dependency + Initialization
  • Adding Google Dependency + Initialization
  • Adding Google Analytics Dependency + Initialization
  • Adding Analytics Library + Initialization
  • Adding Stripe Library + Initialization
  • Adding Modernizr Dependency + Initialization
  • Service Worker + Initialization
  • Handle Errors At Higher Level
  • Adding Socket.io Support to React

Alternatives we have are:

  1. Use the classic <script/> tag and specify which resources we want to download. This approach can make it difficult to speed up page rendering, due to the number of assets to download.
  2. Use existing script loaders — Modernizr has an ES6+ build on npm, and the same goes for yepnope. This adds an extra dependency, which increases the bundle size of the application — which is not bad, compared to installing a package for every special use case.
  3. Create a custom lazy loader, like in this article — a sketch follows this list.
  4. Re-use existing loaders, but adapt them to our needs.
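
For the third alternative, a minimal custom lazy loader sketch — loadScript is a hypothetical helper that injects a script tag and resolves a promise once the library is available, caching so each script loads only once:

// lazily load a third-party script, once, resolving when it is ready
const loaded = {};
function loadScript(src) {
  if (loaded[src]) return loaded[src];
  loaded[src] = new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    script.onload = () => resolve(script);
    script.onerror = () => reject(new Error(`failed to load ${src}`));
    document.body.appendChild(script);
  });
  return loaded[src];
}

// usage, e.g. inside componentDidMount:
// loadScript('https://js.stripe.com/v3/').then(() => this.setState({ ready: true }));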

Reading list on such issues:

  • Step-By-Step Guide to Stripe Payments in React ~ the approach used in this discussion relies on mixins; mixins were deemed harmful on the ReactJS blog.
  • Declaratively loading JS libraries.
  • Authentication with Google, Facebook and Twitter ~ this blog uses an OAuth strategy — basically the same technique that was used with Satellizer.

There is an alternative library that is worth looking into.

Searching for alternative libraries on GitHub, npm, or other open source registries

  • The first use case is the Date Picker.
  • Contender candidates for the Date Time Picker have to meet some basic standards:
  • They have to be similar to datetimepicker in look, feel and usability — or better.
  • They should also have few or no dependencies, in addition to a lower rate of bugs and a vibrant community/backers around them.
  • Two contenders are listed below:
  • React Day Picker and React Date Picker.
  • The first was best from a bug count, community and dependency perspective.
  • The second is best from a look, feel and usability standpoint; even though it has the worse bug count, it has a backer and an active community. In addition, the latest versions removed the dependency on moment.js.
  • Using Google Analytics to track errors.

Forms

Forms are hard, not only in Angular or React, but in any framework. Nevertheless, a closer look at forms reveals that they are a special case of Smart Components. Or at least that is how I see them.

Main challenges migrating a form

  • Template — the template is not compatible with JSX. Converting it can be a tedious task in some cases, but a well-organized search and replace can make it a bit faster.
  • Form attributes — some form attributes have DOM counterparts; others are simply custom directives. Custom Directives will be migrated on a case-by-case basis. Custom Directives that have a template can become de facto React Components. Attribute Directives can wait until the logic migration part starts.
  • Formatting — There is a whole range of data entry that has to be transformed into a format the server application understands. Quick examples are ZIPs, localized money, and dates from date pickers.
  • Validation — The rule of thumb of form processing is to never trust customer input. In addition to validation (valid email, complete phone number, etc.), the application is responsible for sanitizing data — to avoid vulnerability exploits such as code injection — and for making sure the customer entered the right data.
  • Error Messages — When invalid data is entered, the application should provide timely feedback to the customer and suggest actions to take in order to correct the error. Error messaging in Angular is really advanced and comes baked into the framework. With React, not so much.

What Is The Plan?

The plan to migrate the forms is simple: divide-and-conquer.

  • Larger chunks of problems are divided into smaller problems.
  • If your application has 100+ forms, they surely have something in common. Those commonalities can be grouped and addressed at the same time.
  • In my case, the whole application had 25+ forms. I created a Component template that groups all the similarities those forms share, and moved them all into one large file. That allowed me to test them from a single file, and search and replace became easier. As soon as the large forms.js became stable, I would add new forms from the legacy code and fix only the breaking parts.
  • Make sure all form elements are JSX compatible.

Reading List

  • Migrating forms is one of the hard yet opinionated tasks. Here are key articles that can help you figure out your plan of attack: [1], [2], [3], [4], [5]

Read more on React forms in this article. If you want to get more familiar with complex subjects such as validation, this article may help as well.

Form Template

The secret to re-using most of the existing template is simple: start with the easy, fast-to-complete parts, then move to the hard parts as you complete your migration.

React has a notion of pure functional components. These are a great starting point to move templates into, while keeping the form template easy to test. [1], [2]
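
For instance, a form template moved into a pure functional component could look like this sketch — everything it needs arrives via props, which keeps it trivial to test in isolation (NameForm is hypothetical):

// a form template as a pure functional component
const NameForm = ({ name, onNameChange, onSubmit }) => (
  <form onSubmit={onSubmit} noValidate>
    <label>
      Name
      <input type="text" defaultValue={name} onChange={onNameChange}/>
    </label>
    <button type="submit">Save</button>
  </form>
);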

The use cases stated below hint at the work that goes into re-using form templates. I chose forms because they cover most cases. The remaining templates are, most of the time, listings — which, in my opinion, are simpler to implement.

The case of form initialization

  • A massive search and replace of ng-init="vm.init()" can be done right away.
  • The code inside vm.init (or $onInit) can be scheduled to be transferred to componentDidMount(), whereas the content of $onDestroy (along with $watch deregistrations) can be scheduled to move over to componentWillUnmount() — see the sketch after this list.
  • On the other hand, componentDidCatch(err, info) can be introduced to collect all error logs.
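
The mapping can be sketched as follows — assuming the bodies of vm.init() and $onDestroy move over more or less verbatim; PeopleForm and ErrorService are hypothetical names:

class PeopleForm extends Component {
  componentDidMount() {
    // former vm.init() / $onInit body goes here
  }
  componentWillUnmount() {
    // former $onDestroy body (and $watch deregistrations) goes here
  }
  componentDidCatch(err, info) {
    // one central place to collect all error logs
    ErrorService.log(err, info);
  }
  render() {
    return null; // the migrated template goes here
  }
}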

The case of attributes in a form template — also using a massive search and replace

  • Form’s name <form name="vm.name"/>, simply by getting rid of vm., becomes <form name="name"/>
  • To prevent default validation error messages, the novalidate attribute is used not only by Angular but also in vanilla JavaScript. To be compatible with the JSX syntax, <form novalidate/> becomes <form noValidate/>.

The case of model in templates

  • Technically speaking, vm.model becomes this.state, and the double braces used for string interpolation become single braces in React templates. The following two examples show the transition.
<label>
  <input ng-model="vm.model.name"/>
  {{vm.model.name}}
</label>

<label>
  <input defaultValue={this.state.name}/>
  {this.state.name}
</label>

The ng-model is replaced with defaultValue. This helps keep value initialization outside of the value attribute.

The case of CSS class names

In addition to regular CSS class attributes, Angular supported the notation class='some-class'. This is not the case for React, though. Therefore all occurrences will be renamed to className="some-class".

Conditional CSS class name

Conditional CSS class names used to be rather complex, relying on a special directive: ng-class.

To get started, adding a conditional class name can be achieved using className={this.state.show ? 'full-class-name' : 'hidden'}

More complex use cases, such as ng-class="{red: vm.model.isRed, 'alternative-class': vm.model.alt}", can be achieved using one of the following constructs:

let show = this.props.show;
let alt = this.props.alt;

<tag className={"red" + (show ? ' show' : ' hidden') + (alt ? ' alt' : '')}/>
<tag className={`red${show ? ' show' : ' hidden'}${alt ? ' alt' : ''}`}/>

The case of dynamic styling

Style — especially on background images — can go from <tag style="background-image: url({{vm.model.photo}});"/> to the render() function, as follows:

let backgroundImage = `url(${this.state.photo})`;
return (<tag style={{backgroundImage: backgroundImage}}/>);

The case of conditional rendering

The ng-hide and ng-show directives were used intensively to dynamically display or hide certain components/tags. A more efficient way to display or hide some components/tags is ng-if. These two concepts can be planned and migrated at around the same time.

The next snippet shows how Angular deals with conditional rendering

<tag ng-if="vm.model.show"/>
<tag ng-show="vm.model.show"/>

The above two examples can be replaced with either class-based rendering or conditional rendering, as follows:

{this.state.show && <Tag/>}

Replace some Angular validators with DOM validator attributes

  • Some custom directives were modeled after the DOM API — in most cases to abstract the use of browser-specific APIs to display errors while providing feedback on user input. Some of these custom directives are listed in the snippet below:
<div>
  <input
    ng-minlength="9"
    ng-maxlength="20"
    ng-required="!vm.model.name"/>
  <button ng-disabled="!vm.model.name">Save</button>
</div>
  • The above construct can be changed to simpler DOM-based validators: minlength|maxlength|required. The problem will be how to collect and display errors, or have conditional rendering based on the above-mentioned validators — a sketch follows this list.
  • ng-disabled should be replaced with disabled={!this.state.isValid}
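
One way to collect and display those errors is the browser's constraint validation API — a sketch, assuming the native checkValidity() and validationMessage are available (SaveForm is hypothetical):

class SaveForm extends Component {
  constructor(props) {
    super(props);
    this.state = { errors: {} };
    this.handleSubmit = this.handleSubmit.bind(this);
  }
  handleSubmit(event) {
    event.preventDefault();
    const errors = {};
    // walk the native form elements and collect constraint-validation errors
    Array.from(event.target.elements).forEach((el) => {
      if (el.name && !el.checkValidity()) errors[el.name] = el.validationMessage;
    });
    if (Object.keys(errors).length) return this.setState({ errors });
    // ...submit the clean data here
  }
  render() {
    return (
      <form noValidate onSubmit={this.handleSubmit}>
        <input name="name" type="text" minLength={9} maxLength={20} required/>
        <button type="submit">Save</button>
      </form>
    );
  }
}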

The case of form event handling in templates

  • ng-click="vm.fn()" is replaced with onClick={this.fn} or onClick={(e) => this.fn()}
  • Form migration will use more of JavaScript than React
  • Save buttons will all have <button type="submit"/>
  • Forms will have to handle onSubmit event, all submit forms with <form onSubmit={this.handlSubmit}/>
  • That means onClick={(event) => this.save(event)} will be replaced with type="submit" instead

Attribute directives that cannot be migrated right away can be scheduled for future implementation.

  • The tag <input type='text' capitalize/>, in which the “capitalize” attribute capitalizes input as the user types, can be replaced with <input type='text' capitalize='@todo[migrate]'/>.

All other directives that have a template become React components.

  • Attribute Directives will be replaced by validator mixins
  • Some other Attribute Directives and filters (pipes) will be replaced by parser mixins
  • Then every function can get its replacement added to the constructor

The attribute Directives are their own kind. Like Filters, they can be used to do validation work, but also formatting work when used with form inputs. Since our approach is to go with uncontrolled components as much as we can, we can still let input fields that need formatting, or as-you-type validation, be partially controlled components. We will rely heavily on the FormData API. If the browser doesn’t support this specification, we will inject a polyfill.
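
Reading a whole uncontrolled form at submit time with FormData might look like this sketch — a fragment that would live inside a form component, with hypothetical field names:

handleSubmit(event) {
  event.preventDefault();
  // read every named input of the uncontrolled form in one shot
  const data = new FormData(event.target);
  const payload = { name: data.get('name'), email: data.get('email') };
  // hand the payload to a service or to a parent handler
  this.props.onFormSubmit(payload);
}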

So it is time to migrate validation-related attribute directives to Validators, and formatting-related ones ($parsers) to Formatters. These two can be moved to a dedicated library, most commonly located under utils.
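
That dedicated library can start as simply as the following sketch — utils/validators.js and utils/formatters.js are hypothetical file names, and the rules shown are illustrative only:

// utils/validators.js — former validation attribute directives
export const isEmail = value => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
export const isZip = value => /^\d{5}(-\d{4})?$/.test(value);

// utils/formatters.js — former $parsers and filters doing formatting work
export const capitalize = value =>
  value ? value.charAt(0).toUpperCase() + value.slice(1) : value;
export const toCents = money =>
  Math.round(parseFloat(String(money).replace(/[^0-9.]/g, '')) * 100);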

Some Validators are not going to be used as they were in AngularJS — a quick example that comes to mind is Stripe. Stripe has a whole new way of being implemented in React. There has to be a way to explore this, somewhere in this guide.

The problem with attribute directives — or rather, the problem they solve — is that they integrate with third-party libraries. Some of these libraries do not have support for ES6 or React. So this may be a setback when moving to React, especially when development is not done in a browser.

There is a React Script Loader that helps mitigate this issue. Form validation and error reporting reading list: [1], [2], [3]

The second wave of migration can be done on attribute directives whose role is to integrate with third-party libraries, such as jQuery widgets, etc.

Form Processing

Forms are at the same time interesting and hard to work with.
React, being a library as opposed to a framework, provides simple baselines for form processing, but nothing as complex and feature-rich as what Angular or Backbone offer.

The workaround that makes the job easier is to think of a form as a Component.
To kill two birds with one stone, the template migration use case is made on forms.

In the following paragraphs, though, the stress is on gluing the Form Templates together with the form business logic, with an emphasis on re-usability of one Form Template across multiple use cases.

Two Types of Form Components

Gosha Arinich made a good case about controlled vs. uncontrolled form inputs and their validation in this blog post. The React documentation website provides quite useful information to get started as well.

This section lays out a strategy to follow, in addition to examples, to make a well-educated choice of which type to use in your code migration.

Uncontrolled ~ the validation of input is done in one place, and basically happens when the form is submitted:

  • From form initialization to the React lifecycle
  • Form error management via the React error lifecycle
  • Form error messages using React’s conditional styling technique
  • Form models to React local state

Controlled forms ~ the validation of input, state and error management is done by the caller of the Form. This is the closest you can get — and more — to full form processing with React, keeping in mind that the library was not initially designed as a full-fledged framework, but rather as a UI composition library.

I chose the controlled form, as the uncontrolled one, while obvious, is less practical when it comes to complex form processing.

Forms — Rules of Thumb

Props down, events up — technically speaking, the Form Parent Component sends the state down to forms. Form elements do validation and hand the result (local state) to callbacks.

class Widget extends Component {
  // ...constructor binding handlers elided
  handleFormSubmit(obj) {
    if (obj.error) { return this.setState({error: obj.error}); }
    Service.updateOrCreate().then(r => this.setState({input: r.value}));
  }
  render() {
    // Props down + Events Up
    return (<Form
      input={this.state.input}
      onInputChange={this.handleInputChange}
      onFormSubmit={this.handleFormSubmit}/>);
  }
}

The Form delegates its state upwards ~ this is also known as “Lifting State Up”.

class Form extends Component {
  constructor(props) {
    super(props);
    this.state = { error: null };
    this.handleFormSubmit = this.handleFormSubmit.bind(this);
    this.handleInputChange = this.handleInputChange.bind(this);
  }
  handleFormSubmit(event) {
    event.preventDefault();
    if (this.state.error) {
      this.props.onFormSubmit(this.state);
    } else {
      this.props.onFormSubmit(event);
    }
  }
  handleInputChange(event) {
    /**
     * @todo this will be done by the caller
     * if(isValid(event.target.value)){
     *   // local state, not the global state — validation can well be done by the caller
     *   this.setState({error: {input: true}});
     * }
     */
    this.props.onInputChange(event);
  }
  render() {
    return (<form onSubmit={this.handleFormSubmit}>
      <input onChange={this.handleInputChange}/>
      <button type="submit">Submit</button>
    </form>);
  }
}

Delegate validation and state changes — or committing changes to the server — via handlers.

class Widget extends Component {
  onInputChange(event) {
    // invalid input: record a custom error in the local state
    if (!isValid(event.target.value)) {
      let custom = 'custom message';
      let error = {element: true, message: custom};
      return this.setState({error: error});
    }
    // add operations such as cleanup, server sync etc.
    this.setState({input: cleanInput(event.target.value)});
  }
}

There is an additional way of form processing, which involves adding an isDirty flag — you can read more from this blog.

To force Parents/Callers to implement the event handlers (for the events-up phase):

class Form extends Component {
  componentDidMount() {
    const _name = this.constructor.name;

    if (!this.props.onFormSubmit)
      throw `@${_name} props#onFormSubmit() is expected`;
    if (!this.props.onInputChange)
      throw `@${_name} props#onInputChange() is expected`;
  }
}

The above setup also resolves the question of how to pre-populate the forms.

Reading list

Authentication

The first approach to authentication is to keep the legacy authentication code up and running. It is possible to share the session token with the new code — when running from a similar/same URL.

This solution works if, for example, the legacy runs on http://url.dev and the new code runs on http://url.dev/next.
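
Assuming the legacy code stores its session token in localStorage (or a cookie) on that shared origin, the new React code can simply read the same token — a sketch, where 'session_token' stands for whatever key the legacy app already writes:

// auth.service.js — re-use the legacy session on the same origin
// 'session_token' is hypothetical: use the key the legacy app writes
export function getAuthToken() {
  return window.localStorage.getItem('session_token');
}

export function isAuthenticated() {
  return Boolean(getAuthToken());
}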

Reading List

  • Authentication is not easy. You can read more on that here [1] for a better understanding.

Dynamic Imports with React and Modern JavaScript Frameworks

The problem: How to dynamically download third-party libraries in React Components?

The solution to this problem has to be generic enough that it can be integrated into any other JavaScript application.

The reason this issue is still a problem is that dynamic import() is subject to CORS restrictions, and therefore not applicable to common use cases ~ see the example in the JSPM solution pitch.

For use cases such as Google Analytics and Facebook authentication, there are libraries such as react-ga that are ready to use.

The approach of relying on third-party library alternatives always comes with concerns such as maintainability of the library, lazy loading, security issues, and the impact on the bundle size (budget), just to name a few.
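
For dependencies installed from npm and served from your own origin, dynamic import() remains usable for lazy loading — a sketch, where 'some-chart-lib' is a hypothetical module name:

async componentDidMount() {
  // code-split: the library is only downloaded when this component mounts
  const { default: Chart } = await import('some-chart-lib');
  this.chart = new Chart(this.el, this.props.options);
}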

Thoughts

Reading list

The testing part of moving to React has not been covered in the above segments. The following readings compensate for this gap.

  • Migrating all existing tests can be a hard thing to do. The next reading can make it easier if your legacy code was tested on a Mocha stack ~ Testing React Web Apps with Mocha
  • Templates are easily transferable and can be tested along the way. The next reading introduces you to testing React Components ~ Testing React components with enzyme and Mocha
  • Some more complex logic, such as uploading files, authentication or scrollers, can be intimidating. These articles can help you get familiar with those concepts: [1]

Business Logic + DI/IoC

Experiences from other people who worked on the same subject.

Outro

There is no single way to do a migration from one framework to another. It all depends on case-by-case requirements. This article took a practical approach to the migration, with a remix of what worked for other people. I hope you come up with your own remix as well.
