Responsive design for enterprise sites

Oliver Lindberg
Published in net magazine
9 min read · Nov 30, 2016

Matt James explains how you can deliver a modern, performant site when you don’t have access to resources like Grunt and Sass

The current frontend landscape is a sea of build tools, frameworks and complexities. These resources often aim to simplify the workflow by automating any redundant development tasks, but it’s important to remember that they are not accessible for many developers. For those of us behind a proxy or firewall in an enterprise setting, these tools are either impractical or unavailable due to restrictions, security concerns or other organisational hurdles.

How, then, do we handle the task of turning around an aging site to meet the needs of the current multi-device landscape when these time-saving tools aren’t an option? What’s more, how should we approach the content requirements while still providing an efficient, performant platform for the user?

We tackled these issues during our work on Wells Fargo Advisors’ website. Faced with some very real limitations, our team put together a patchwork of available tooling and creative code solutions to deliver a product that respects the user, meets the content requirements and complies with the organisation’s security guidelines.

Old sites: in early 2015, the Wells Fargo Advisors site was looking much as it had in 2009

Performance and knowledge

Before getting started, it’s important to note that any process we used would need to respect a number of concerns. Primarily, the site architecture needed to be usable for our content publishers.

That meant relying on Sass beyond the initial build was not sustainable. Similarly, a technique like Critical CSS was out of reach, because it would be too complex for those less versed in frontend development. Whatever product we put in place would need to be maintainable by someone with a basic knowledge of web languages, and remain performant based on our initial work.

Practical challenges

When the designs for the site arrived, they introduced a set of complexities that we, the development team, would need to overcome to bring the product to life.

First among those challenges were nine desktop PSDs meant to represent a 90-page responsive site. We needed to get those comps into the browser as quickly as possible and represent the mobile behaviour so stakeholders could view the product in a real-world setting.

The second issue was that every page contained a large, full-width hero image that could easily eat up precious kilobytes of users’ data if it was not implemented responsibly.

The third major issue was our implementation of web fonts. For a variety of reasons, we were required to host and serve very large web font packages (plural). Even on a good connection, these could take seconds to render. On a bad connection, they bordered on unusable. Finding a solution to improve user experience and get content on the page before the fonts had fully loaded was crucial.

Let’s unpack these challenges individually and look at the ways our team was able to overcome each of them.

Getting comps in the browser

The first issue was to get the comps in the browser. There are many tools to do this, but we have the luxury of working with Adobe products, so the Edge Reflow CC programme was a godsend. Despite the fact that it is no longer being actively developed, it is a fantastic tool that is still available in the 2014 and 2015 versions of Creative Cloud.

With a properly organised PSD, Reflow will create a structured HTML document and output a CSS file to support it. While the CSS is not ideal for production, it gets the design out of Photoshop and into the browser quickly and efficiently. Once the base design is in place, Reflow allows you to add breakpoints in the interface and reorganise the content, and then it updates the CSS automatically. Suddenly, it’s possible to socialise a working prototype in a real world setting and demonstrate responsive behaviour at the same time.

Edge Reflow: this design software enables users to create HTML documents and supporting CSS files

Working with Prepros

While we’re on the subject of tools, let’s talk about build tools. As amazing and useful as the Grunts, gulps and Broccolis of the world are, they are sometimes not a viable option behind a corporate proxy. Often, it’s a hard sell to introduce open-source JavaScript into a code base that might interact with potentially sensitive client information. Without these tools, it takes a little sleuthing to piece together a workflow that provides the essentials: Sass preprocessing, minification and concatenation of scripts, image optimisation and browser syncing.

There’s a lot of capability to be had in a robust text editor. However, there are areas where a build tool can really pick up the slack, and there are a few GUI options that do 85 per cent of the tasks that most people use command line tools for.

Our team settled on Prepros. It’s a wonderful tool that does all the things you’d expect: Sass processing, image compression, browser syncing with device testing, minification and concatenation, autoprefixing, and so on. If you’re into Jade, HAML or CoffeeScript, it’ll process those files. Even better,
it has Babel.js built in and will transpile ES6 into ES5. Best of all, it’s cross-platform-compatible. That meant those working on Windows 7 could use the same tool as those on Macs. A license is around $30 — pretty cheap considering all it can do. Plus, it has an unlimited trial period.

Prepros: this GUI tool tackles tasks like device testing and image compression

Nested media queries

Using these tools, we can call on the power of Sass’ nested media queries to take the desktop styles and back-roll them into a mobile-first style sheet. First, as an example, we’ll take the base styling for the right rail:

.pageSidebar {
  display: flex;
  flex-direction: column;
  justify-content: flex-start;
  width: 25%;
  flex: 0 0 25%;
  padding: 0 1em;
}

Then as mobile designs become available, we’ll pull the styling for larger screens into nested media queries and leave the constants alongside the newly determined styling for small screens:

.pageSidebar {
  display: flex;
  flex-direction: column;
  justify-content: flex-start;
  flex: 0 0 auto;
  padding: 0 1em;
  @media screen and (min-width: $bpBig) {
    width: 25%;
    flex: 0 0 25%;
  }
  @media screen and (min-width: $bpSmall) and (max-width: $bpBig) {
    flex-flow: row wrap;
    justify-content: space-between;
    padding: 0 1em 0 0;
  }
}

The same thing is done for every module on the site. Once processed into CSS, the styling for each module is grouped together in the output CSS, and all devices are covered. Don’t sweat the repetition of media queries throughout the final CSS, as gzip compression on the server largely eliminates the redundancy.
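To make that output concrete, here is roughly the CSS the preprocessor emits for the block above (the breakpoint values of 480px and 750px are hypothetical stand-ins for $bpSmall and $bpBig):

```css
.pageSidebar {
  display: flex;
  flex-direction: column;
  justify-content: flex-start;
  flex: 0 0 auto;
  padding: 0 1em;
}
@media screen and (min-width: 750px) {
  .pageSidebar {
    width: 25%;
    flex: 0 0 25%;
  }
}
@media screen and (min-width: 480px) and (max-width: 750px) {
  .pageSidebar {
    flex-flow: row wrap;
    justify-content: space-between;
    padding: 0 1em 0 0;
  }
}
```

Each module's rules stay grouped together, at the cost of a repeated `@media` wrapper per module, which is the redundancy gzip takes care of.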

Now we’ve got our design into the browser, added some responsive behaviour and met our basic build needs, let’s look at how we’re going to handle the issue of the resource overhead.

Resource overhead: even with notable content and resource overhead, it’s possible to deliver a performant experience

Tackling heavy images

First up are those hero images. Like the ballast on a hot air balloon, hero images have the potential to drag the whole site to a halt if not handled properly. For mobile devices, it’s easy to reach for img { display: none; } at a given viewport, but thanks to the browser’s pre-fetching of images, the user pays for the request anyway. Better to lazy-load the image through JavaScript based on screen size. Small-screen browsers will never even make the request, saving a good chunk of data and speeding up loading of the rest of the content.

This is the approach we took to conditionally loading those hero images. There are, indeed, several different ways to do this, but this is the
approach that suited our needs:

<div class="heroImg" data-src="/images/mvp/banners/3-1_banner"><!-- all the other top body necessities --></div>

This div would contain everything that would go in the hero section of the page: the H1, callout text and call to action. Within the opening div tag is a data-attribute that has a truncated file path to the image we want to load if the screen is above a given size. The script below pulls that data attribute and uses it to generate the image and its relevant attributes:

function insertImg() {
  var bannerImg = document.getElementsByClassName('heroImg');
  // Only build the hero on larger screens; small screens never make the request
  if (window.innerWidth >= 750 && bannerImg.length > 0) {
    var imgPath = bannerImg[0].getAttribute('data-src');
    var hero = document.createElement('img');
    hero.src = imgPath + '.jpg';
    hero.srcset = imgPath + '@2x.jpg 1.5x, ' + imgPath + '@3x.jpg 2x';
    hero.alt = '';
    bannerImg[0].appendChild(hero);
  }
}

Once it has created the image, it appends it to the div. This function is called on load, so as the page loads, the script checks the screen size and then creates the image if it is needed. Small-screen users get no image (and no image request) and mid-to-large-screen browsers are provided with three potential resolutions to choose from via the srcset attribute.
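As an illustration, given a hypothetical data-src of /images/mvp/banners/hero, the markup the function appends is equivalent to:

```html
<img src="/images/mvp/banners/hero.jpg"
     srcset="/images/mvp/banners/hero@2x.jpg 1.5x,
             /images/mvp/banners/hero@3x.jpg 2x"
     alt="">
```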

For our purposes, we were doing simple fixed-width resolution swapping, but this function could be extended to accommodate the more robust srcset with the sizes attribute and the w descriptor.
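As a sketch of that extension (the file-name suffixes, widths and sizes value below are hypothetical, not part of our build), a small helper could generate width-descriptor srcset strings from the same data-src path:

```javascript
// Build a srcset string with w descriptors from a base path.
// The naming convention (e.g. hero-750w.jpg) is an assumption.
function buildSrcset(basePath, widths) {
  return widths.map(function (w) {
    return basePath + '-' + w + 'w.jpg ' + w + 'w';
  }).join(', ');
}

// Append a hero image that lets the browser pick a source based on
// the layout width rather than a fixed device-pixel ratio
function insertFluidImg(bannerEl) {
  var imgPath = bannerEl.getAttribute('data-src');
  var hero = document.createElement('img');
  hero.src = imgPath + '-750w.jpg'; // fallback for browsers without srcset
  hero.srcset = buildSrcset(imgPath, [750, 1100, 1500]);
  hero.sizes = '100vw'; // hero spans the full viewport width
  hero.alt = '';
  bannerEl.appendChild(hero);
}
```

With w descriptors and sizes, the browser does the maths itself, picking the smallest source that covers the slot at the current viewport and pixel density.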

Image compression

Before we even serve the images, it’s going to take a bit of work to generate the appropriate sizes for the srcset. Hopefully, you’re doing a good job with
Save For Web in Photoshop, but there are additional savings to be had after the fact.

Prepros has a built-in lossless image compression feature, but by far the biggest gains for us came from an online tool called Compressor.io. This tool provides lossy compression and made a huge difference in file size. Admittedly, it’s a bit laborious to upload and download each image individually, but seeing a 200k banner image cut down to 41k makes it well worth the time.

Image solution Compressor.io reduces the size of the images on your site, without affecting quality

Serving web fonts

Finally, let’s take a look at the font situation. For reasons outside our control, we were required to implement two separate, self-hosted web fonts in a manner that was, well, less than efficient. For the body text, a JavaScript file is required first, which then adds a series of classes to the <html> tag and generates a CSS file that uses those classes to apply the font.

Obviously, this could take a while, even under ideal circumstances. To deal with this, we lazy-load the necessary files with JavaScript, allowing the browser to display system fonts in the interim while it parses the web font package. It leads to a flash of unstyled text, but we felt it was far better to have content on the page almost instantly.

Our solution looked like this. First, we set up a function to lazy-load render blocking resources to the head when called:

function loadResource(resource, src) {
  var item;
  switch (resource) {
    case 'text/css':
      item = document.createElement('link');
      item.href = src;
      item.rel = 'stylesheet';
      item.type = resource;
      break;
    case 'text/javascript':
      item = document.createElement('script');
      item.src = src;
      item.type = resource;
  }
  document.getElementsByTagName('head')[0].appendChild(item);
}

Next, we combine that function for our heading font alongside an Ajax call to the script for our copy font:

function fontLoader() {
  loadResource('text/css', '/css/mvp/archerMvp.css');
  $.getScript('path_to_web_fonts.js', function (data, textStatus, jqxhr) {
    try { Typekit.load(); }
    catch (e) { console.log(e); }
  });
}

This prevents the font dependencies from blocking rendering, providing the user with content almost immediately.
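One way to wire this up (the runAfterLoad helper below is our sketch, not part of the original build) is to defer fontLoader until the window load event, so the font files never compete with the first render:

```javascript
// Run a callback once the page has finished loading; if the load event
// has already fired, run it straight away. The doc/win parameters
// default to the browser globals and exist to make the helper easy to
// exercise outside a browser.
function runAfterLoad(fn, doc, win) {
  doc = doc || document;
  win = win || window;
  if (doc.readyState === 'complete') {
    fn();
    return 'ran';
  }
  win.addEventListener('load', fn);
  return 'deferred';
}

// In the page: runAfterLoad(fontLoader);
```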

Responsive results

That’s an awful lot of hoop-jumping to arrive at the same place many of you with command line tools could automate down to a fraction of the time. However, it shows that resources are out there to get the job done even with these limitations. Our load times are very respectable: on a wired connection, the first visit is 3.00 seconds, and for 3G the first visit is 5.65 seconds. Not bad, considering what we were up against.

If you’re in a more typical development environment with a more refined workflow, you might have read this process and felt the need for a shower. While this approach is certainly more work than the automated alternative, it reaps the same rewards.

In some ways, the limitations of an enterprise setting can be almost liberating. There’s no stress about whether you need to switch to gulp because that’s the thing now, or concern yourself over which framework is the right one for your project. You work with the tools available and fall back on a strong understanding of the basics. In a period of extensive dependencies, it’s comforting to know that it’s still possible to deliver the most efficient experience possible through core languages and scrappy resourcefulness.

Matt James is a CX interaction designer at Wells Fargo Advisors.

This article originally appeared in issue 282 of net magazine
