How to make a performant video comparison component using React and CSS in JS

Edgemesh Design · Apr 24, 2017

When we were in the design stages of our website and still trying to figure out how to explain our product, we created 2 animated videos that illustrated the difference between a traditional network and our mesh network. We think the videos do a pretty good job of demonstrating edgemesh’s capabilities; however, an issue arose when we went to lay them out.

How do we display this content while effectively conveying our message?

Placing the videos side by side didn’t make sense, because we wanted to display each video at its full resolution.

Stacking the videos vertically, one above the other, made the information lose a bit of context and in general just took up too much space.

The only other option was to overlay the videos seamlessly, one directly on top of the other, and somehow let the user interactively mouse between them.

This task presents us with a few challenges:

  • Preparing your content
  • Overlaying 2 video elements
  • Using the onMouseMove event to determine mouse position
  • Figuring out how we are going to animate
  • Putting it all together, and making it performant
  • Making it work on mobile

Preparing your video content

The first thing you will want to do is prepare your video content. I personally recommend getting your video content into at least 3 formats.

Recommended video content formats:

  • .webm (primary source; smallest files, but not supported by every browser)
  • .mp4 (fallback for browsers without .webm support)
  • .gif (only needed if you plan on mobile compatibility)

For performance reasons, I will typically take my source footage and first convert it to a .webm. This format was developed by Google to be extremely lightweight and to produce high quality, low file size videos. It does its job very well, but it is not compatible with all browsers. More information about the .webm format can be found on its official Google developer page.

Typically I will use the .webm as my primary source, and fall back to .mp4 if the browser does not support .webm.
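
In markup terms, the browser walks the <source> list in order and plays the first format it can decode, so the .webm goes first and the .mp4 acts as the safety net. A quick sketch of that fallback order (the file names below are just placeholders):

<video width="100%" height="100%" autoPlay loop>
  {/* Tried first: the smaller .webm, for browsers that support it */}
  <source src="my-video.webm" type="video/webm"/>
  {/* Fallback for browsers without .webm support */}
  <source src="my-video.mp4" type="video/mp4"/>
</video>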

The .gif only needs to be generated if you plan on mobile compatibility (more on that later.)

You can use a service like Cloudconvert to do your video conversions. It supports all the formats mentioned above.

Overlaying 2 video elements

We will start off by creating a simple class extending the generic React Component. Inside of our component, we will define some props (basically just the paths to our 2 videos) and we will render both <video/> containers.

We will also start defining some styles above our class. For this example I use the aphrodite inline style library. You can use traditional CSS or another library, but for the sake of simplicity, the examples here are written with aphrodite.

import React, { Component } from 'react';
import PropTypes from 'prop-types';
import { StyleSheet, css } from 'aphrodite';

const styles = StyleSheet.create({
  container: {
    position: 'relative',
    overflow: 'hidden',
    minHeight: 200
  },
  topVideoContainer: {
    position: 'absolute',
    top: 0,
    left: 0,
    width: '100%',
    height: '100%'
  }
});

export default class VidCompare extends Component {

  static propTypes = {
    video1webm: PropTypes.string,
    video2webm: PropTypes.string,
    video1mp4: PropTypes.string,
    video2mp4: PropTypes.string
  };

  static defaultProps = {
    video1webm: 'video-1.webm',
    video2webm: 'video-2.webm',
    video1mp4: 'video-1.mp4',
    video2mp4: 'video-2.mp4'
  };

  render() {
    let {
      video1webm,
      video2webm,
      video1mp4,
      video2mp4 } = this.props;

    return (
      <div
        id="vid-compare-container"
        className={css(styles.container)}
        ref={ref => { this.container = ref; }}>

        {/* Bottom video */}
        <video width="100%" height="100%" autoPlay loop>
          <source src={video1webm} type="video/webm"/>
          <source src={video1mp4} type="video/mp4"/>
        </video>

        {/* Top video, absolutely positioned over the first */}
        <div className={css(styles.topVideoContainer)}>
          <video width="100%" height="100%" autoPlay loop>
            <source src={video2webm} type="video/webm"/>
            <source src={video2mp4} type="video/mp4"/>
          </video>
        </div>
      </div>
    );
  }
}

Using the onMouseMove event to determine mouse position

Now at this point we are not completely sure how we are going to animate this, but one thing is for certain: we know we will need to get the mouse position relative to the video that we are moving our mouse over. To accomplish this we will use the onMouseMove event and create a function to handle mouse movement which we will aptly name _handleMouseMove (we like to designate private component functions with an underscore).

export default class VidCompare extends Component {

  _handleMouseMove(e) {

    // First figure out how far from the left side of the browser
    // the video is. This will be the offset that we use in the next
    // step.
    let left = this.container.getBoundingClientRect().left;

    // We will use the synthetic event that is passed through to
    // figure out the pageX of the mouse, which is the mouse's X
    // position on the whole page. Then we simply subtract the left
    // offset to get the actual position of the mouse relative to
    // the video.

    // Please note, we can also check to see if the synthetic event
    // has any touches on it. This would indicate a mobile touch
    // device, which we handle accordingly using a conditional
    // ternary operator.

    this.setState({
      pageX: e.touches ? e.touches[0].pageX - left : e.pageX - left
    });
  }

  constructor(props) {
    super(props);

    // Bind the mouse move handler in the constructor, for performance
    // https://facebook.github.io/react/docs/handling-events.html
    this.handleMouseMove = this._handleMouseMove.bind(this);
  }

  // We plug our mouse handler in to the top-most container, on both
  // the `onMouseMove` and `onTouchMove` events (for mobile
  // compatibility).
  render() {
    let {
      video1webm,
      video2webm,
      video1mp4,
      video2mp4 } = this.props;

    return (
      <div
        id="vid-compare-container"
        className={css(styles.container)}
        ref={ref => { this.container = ref; }}
        onMouseMove={this.handleMouseMove}
        onTouchMove={this.handleMouseMove}>
        ....
      </div>
    );
  }
}

Figuring out how we are going to animate

Okay, so we have our two videos overlaid one on top of the other and our mouse position relative to the video, now what? Let’s take a moment to visualize what we are trying to achieve. Basically we want to crop the top video based on our mouse/touch position to reveal the video underneath.

First we take the mouse's X position, which we figured out in the previous step, and apply it as a translational X value to the Video A Container (that's the container that is absolutely positioned on top of the other video). This moves the Video A Container, and the Video A inside it, along with the mouse cursor to reveal Video B underneath.

We can then take the mouse position and reverse it by multiplying it by -1 to get the opposite direction. We plug this opposite value into the translational X of Video A inside the Video A Container, and make sure the Video A Container has the style property overflow: hidden so that it clips Video A as it travels outside the container's bounds.
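
As a minimal sketch of that geometry (the function below is ours, purely for illustration; it isn't part of the component):

// Given the cursor's x position relative to the component, the clipping
// container slides right by x while the video inside slides left by x,
// so the video appears stationary while its leftmost x pixels fall
// outside the container and get clipped away.
function compareTransforms(x) {
  return {
    containerTransform: `translateX(${x}px)`,  // applied to the Video A Container
    videoTransform: `translateX(${x * -1}px)`  // applied to Video A itself
  };
}

// With the cursor 150px in from the left edge:
// { containerTransform: 'translateX(150px)', videoTransform: 'translateX(-150px)' }
console.log(compareTransforms(150));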

When we combine both animations, they end up sort of cancelling each other out and we get the desired effect, hooray! Now let’s put it all together and make it work on mobile as well!

Putting it all together, and making it performant

Now that we have a good idea of where all the pieces will be moving, we can begin putting it all together. There are numerous CSS properties we could animate, but I recommend using transform: translateX() for several reasons. A transform can be handled by the compositor without triggering layout, so it gives a smoother experience on mobile and doesn't abuse lower-end CPUs. Transitioning the left property, by contrast, forces a layout recalculation on every frame and leads to extreme stuttering on mobile devices.

import React, { Component } from 'react';
import PropTypes from 'prop-types';
import { StyleSheet, css } from 'aphrodite';

const styles = StyleSheet.create({
  container: {
    position: 'relative',
    overflow: 'hidden',
    minHeight: 200
  },
  topVideoContainer: {
    position: 'absolute',
    top: 0,
    left: 0,
    width: '100%',
    height: '100%',
    // Clip the video inside as it translates in the opposite
    // direction (see the explanation above).
    overflow: 'hidden'
  }
});

export default class VidCompare extends Component {

  static propTypes = {
    video1webm: PropTypes.string,
    video2webm: PropTypes.string,
    video1mp4: PropTypes.string,
    video2mp4: PropTypes.string
  };

  static defaultProps = {
    video1webm: 'video-1.webm',
    video2webm: 'video-2.webm',
    video1mp4: 'video-1.mp4',
    video2mp4: 'video-2.mp4'
  };

  // Notice we have added some state to keep track of everything;
  // the `started` state will control the animation.
  state = {
    pageX: 0,
    started: false
  };

  _start() {
    this.setState({
      started: true
    });
  }

  _stop() {
    this.setState({
      started: false
    });
  }

  _handleMouseMove(e) {
    let left = this.container.getBoundingClientRect().left;

    this.setState({
      pageX: e.touches ? e.touches[0].pageX - left : e.pageX - left
    });
  }

  constructor(props) {
    super(props);
    this.handleMouseMove = this._handleMouseMove.bind(this);
    this.start = this._start.bind(this);
    this.stop = this._stop.bind(this);
  }

  render() {
    let {
      video1webm,
      video2webm,
      video1mp4,
      video2mp4 } = this.props;
    let { pageX } = this.state;

    // Default states for when there is no mouse hovering
    let transform = { transform: 'translateX(100%)' };

    // The inverse of the transform
    let transformNegative = { transform: 'translateX(-100%)' };

    // Use the pageX state as the translate value when the 'started'
    // state is true
    if (this.state.started) {
      transform = { transform: `translateX(${pageX}px)` };
      transformNegative = {
        transform: `translateX(${pageX * -1}px)`
      };
    }

    return (
      <div
        id="vid-compare-container"
        className={css(styles.container)}
        ref={ref => { this.container = ref; }}
        onMouseMove={this.handleMouseMove}
        onTouchMove={this.handleMouseMove}
        onTouchStart={this.start}
        onTouchEnd={this.stop}
        onTouchCancel={this.stop}
        onMouseEnter={this.start}
        onMouseLeave={this.stop}>

        <video width="100%" height="100%" autoPlay loop>
          <source src={video1webm} type="video/webm"/>
          <source src={video1mp4} type="video/mp4"/>
        </video>

        <div
          style={transform}
          className={css(styles.topVideoContainer)}>

          <video
            loop
            autoPlay
            style={transformNegative}
            width="100%"
            height="100%">
            <source src={video2webm} type="video/webm"/>
            <source src={video2mp4} type="video/mp4"/>
          </video>
        </div>
      </div>
    );
  }
}

You may have noticed that we plug the transform values into the style prop instead of the className prop. That's because we don't want aphrodite to create a new stylesheet on every single state update. Various CSS-in-JS libraries handle this in different ways; with Radium, for example, you would put everything into the style prop.

Making it work on mobile

Throughout this tutorial, we have done our best to keep mobile in mind. Our mouse movement handler can detect mobile touches and we are using transform: translateX() instead of left, which should give a smooth animation on most mobile devices. However, we run into a small problem when attempting to render this component on mobile.

MOST MOBILE DEVICES WILL ONLY PLAY ONE VIDEO AT A TIME

Yup, and even worse, video autoplay is disabled in most mobile browsers as a battery saving measure. Even though mobile browser vendors are starting to see the light (Apple's iOS now allows muted video to autoplay in the browser without a user gesture), there is still only one way to make your component bulletproof on most mobile devices.
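
If you do want to lean on that muted autoplay behaviour rather than swapping assets, a minimal sketch looks like the following. The muted and playsInline attributes are our addition for this scenario and are not part of the component above, and depending on your React version you may need to set playsinline on the DOM element directly:

{/* Muted, inline video can autoplay on iOS 10+ Safari without a user
    gesture. This is an assumption for that specific case only. */}
<video width="100%" height="100%" autoPlay loop muted playsInline>
  <source src="video-1.webm" type="video/webm"/>
  <source src="video-1.mp4" type="video/mp4"/>
</video>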

GIFS, SWEET WONDERFUL GIFS!

By converting our video to .gif we can get around the pesky <video/> limitations on most mobile browsers. You can use a library like is.js to detect when the user is on a mobile browser and switch out your <video/> tags with <picture/> or <img/> tags accordingly. Just don't forget to drop down your video's resolution for mobile, otherwise you will end up with massive image assets.
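
Here is a minimal sketch of that swap, assuming is.js is installed as the is_js package; the wrapper component, import path, and .gif file name are our own illustration, not part of the component above:

import React from 'react';
import is from 'is_js';
import VidCompare from './VidCompare'; // hypothetical path to the component above

// Render the interactive video comparison on desktop, and a single
// pre-rendered (and downscaled) comparison .gif on mobile browsers.
const VidCompareResponsive = () => (
  is.mobile()
    ? <img src="video-compare-mobile.gif" width="100%" alt="Traditional network vs. edgemesh" />
    : <VidCompare/>
);

export default VidCompareResponsive;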

Until Next Time,
The Edgemesh Design Team
