How I built Surf Status

The mechanics of a Node.js surf report

Surf reports today are packed with unnecessary information.

With SurfStat.us, my goal was to create a surf report that serves users two simple things: the current surf and the weather.

There were three basic principles I kept in mind as I built the app:

  1. simple to build,
  2. accurate with relevant, up-to-date information,
  3. and quick for a user to get what he/she needs.

This article is a case study in how I accomplished all three.

Simple

The surf report needed to be simple to build. I wanted it up and running as soon as possible. It also needed to be aesthetically simple. The issue with existing reports is the overabundance of irrelevant information they present to a given user.

Sketching

An app’s general layout can be established with a few basic sketches. The ease of having an idea and getting it down immediately is invaluable.

During this phase, ideas are tossed around cheaply and thought over broadly. You can informally share sketches with others and get feedback on them moments later. You spend only minutes on working out an idea rather than hours or days on building it out.

Just don’t expect pen-and-paper sketches to make sense in hindsight. Beyond those first sketches, expect the final product to change through iteration over time. Things shift as you work the kinks out of your prototypes.

Prototyping

When it comes to prototyping, the text editor and browser serve as great tools for getting a project to a minimum viable product (MVP). Some refer to it as designing in the browser. To make this work, you need to be comfortable making design decisions in your text editor.

I compare prototyping to getting to the ugly first draft in writing. Write a first draft knowing you are going to have to go back and fix things up. The goal is to land on something that nearly makes sense and trust it’ll improve when you give it more time.

Keep in mind, design tools are still out there to help you work out some of the finer details of a section. You could design the look of specific components in Sketch. Use whatever tool you’re most effective with. Remember that the text editor is just another tool.

A prototype in the browser gives you an idea of what the product will actually look like and, more importantly, how it will behave on different devices. You can resize the browser to see how it looks across viewports. With tools like Gulp + BrowserSync, you can even pull it up on your phone and hold it in your hand.
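Here’s a minimal sketch of what that live-reload setup can look like, assuming the gulp and browser-sync packages; the file paths are placeholders, not this project’s actual layout.

```js
// gulpfile.js: serve the prototype and sync it to every connected device
const gulp = require('gulp');
const browserSync = require('browser-sync').create();

gulp.task('serve', function () {
  // serve the working directory over the local network
  browserSync.init({ server: { baseDir: './' } });

  // reload all connected browsers (including your phone) on change
  gulp.watch(['*.html', 'css/*.css', 'js/*.js'])
    .on('change', browserSync.reload);
});
```

BrowserSync prints an external URL you can open on any device on the same Wi-Fi, which is what makes the hold-it-in-your-hand check so cheap.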

Styling

For styling, I used PostCSS. I prefer this CSS plug-in system for its shorter compile times; I just really don’t need all of the Sass library for most of what I do. Short waits for compilation mean I’m more efficient at rehashing ideas and moving on to the next thing. If PostCSS is your flavor, check out the Lost grid for a great, browser-compatible layout system.
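As an illustration, the PostCSS step can slot into the same Gulp pipeline; this sketch assumes the gulp-postcss, lost, and autoprefixer packages, and the paths are placeholders.

```js
// compile CSS through PostCSS plugins instead of a monolithic preprocessor
const gulp = require('gulp');
const postcss = require('gulp-postcss');

gulp.task('css', function () {
  return gulp.src('src/css/*.css')
    .pipe(postcss([
      require('lost'),         // Lost grid: calc()-based columns via lost-column
      require('autoprefixer')  // vendor prefixes for browser compatibility
    ]))
    .pipe(gulp.dest('public/css'));
});
```

You only pay for the plugins you load, which is where the shorter compile times come from.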

Other visual libraries include d3.js, which I used to produce the SVG tide graphs. I try to keep my JavaScript library trial-and-error contained within CodePen. The idea is that if I’m unhappy with the outcome of one library, I’m able to quickly test and compare it with another.
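For a sense of what the tide graph code involves, here’s a minimal d3 sketch, assuming d3 v4+ and a hypothetical tides array of { time, height } points:

```js
// draw a smoothed SVG tide line; `tides` is the hypothetical
// [{ time: Date, height: Number }] data array described above
const margin = { top: 10, right: 10, bottom: 20, left: 30 };
const width = 320 - margin.left - margin.right;
const height = 120 - margin.top - margin.bottom;

const x = d3.scaleTime()
  .domain(d3.extent(tides, d => d.time))
  .range([0, width]);

const y = d3.scaleLinear()
  .domain([0, d3.max(tides, d => d.height)])
  .range([height, 0]); // SVG y grows downward, so invert

const line = d3.line()
  .x(d => x(d.time))
  .y(d => y(d.height))
  .curve(d3.curveBasis); // smooth the line like an actual tide swing

d3.select('#tide-graph').append('svg')
    .attr('width', width + margin.left + margin.right)
    .attr('height', height + margin.top + margin.bottom)
  .append('g')
    .attr('transform', `translate(${margin.left},${margin.top})`)
  .append('path')
    .datum(tides)
    .attr('fill', 'none')
    .attr('stroke', 'currentColor')
    .attr('d', line);
```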

Accurate

A surf report also needs to be accurate. It should be regularly updated with data from reliable sources.

Surf data

Getting proper surf data took a bit of time. The first useful API to pop up was the Magic Seaweed forecast API. However, it wasn’t until I was just about ready to launch my MVP that I read the finer print:

“… the API isn’t designed for the building of applications that replicate core functionality found on the MSW website and we reserve the right to terminate applications that do this.”

After some thought, I realized that a surf report is exactly the core functionality they offer on MSW. Thus, I scrapped it.

At some point, I remembered Pat Caldwell’s NOAA surf forecast. It is locally known in Hawai‘i as one of the more accurate surf reports, thanks to the forecaster’s surf experience as well as its grounding in trending buoy data.

The problem was that I only knew it to be served as semi-plain text on the website. Then I noticed something on the forecast page: a regularly updated XML file with everything I needed.

From there, I parsed the XML into JSON and reformatted its plain language into usable data points.
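A minimal sketch of that fetch-and-parse step might look like the following, assuming the xml2js package; the URL and the phrase format are placeholders, not the real NOAA schema.

```js
const https = require('https');
const { parseString } = require('xml2js');

// download the forecast feed and hand back plain JavaScript objects
function fetchForecast(url, callback) {
  https.get(url, res => {
    let xml = '';
    res.on('data', chunk => { xml += chunk; });
    res.on('end', () => parseString(xml, callback));
  }).on('error', callback);
}

// turning the plain language into data points is then regex work,
// e.g. a phrase like "Surf 3 to 5 feet" becomes { min: 3, max: 5 }
function parseHeights(text) {
  const match = /(\d+)\s+to\s+(\d+)\s+feet/i.exec(text);
  return match ? { min: +match[1], max: +match[2] } : null;
}
```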

Weather and tide data

For the weather and tide data, I used OpenWeatherMap and WorldTides, respectively. Each of the three data sources is polled at a rate determined by the nature of its data. For example, the tide is fairly predictable and can be pulled in once a day. Weather, on the other hand, can change within a few hours, so it is polled far more regularly.
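To illustrate the idea, the schedules can be expressed as cron rules; this sketch assumes the node-cron package, and the intervals and updater names are illustrative rather than the app’s real values.

```js
const cron = require('node-cron');

// each updater pulls its API and refreshes the cache (see “Loading speed” below)
const updateTides   = () => { /* pull WorldTides        */ };
const updateWeather = () => { /* pull OpenWeatherMap    */ };
const updateSurf    = () => { /* pull the NOAA forecast */ };

cron.schedule('0 5 * * *',   updateTides);   // tides: predictable, once a day
cron.schedule('0 * * * *',   updateWeather); // weather: changes within hours
cron.schedule('0 */3 * * *', updateSurf);    // surf: a few times a day
```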

Quick

Given the nature of today’s mobile user, a surf report needs to be quick. A user should be able to immediately find the surf height she wants, with that information in view as soon as possible.

Visual pacing

In short, when a user pulls up a surf report, she should see the surf. I strove to have all surf heights appear in the browser on first load, for both desktop and mobile.

Mobile and desktop views.

More data immediately relevant to an ocean-bound person, such as the tide pattern, wind direction, or sun schedule, can be accessed by scrolling down or, on mobile, opening a dropdown.

Loading speed

To limit load times, I used Express as the web server framework. Its minimal yet scalable nature made it seem like the best choice, and it’s hard to overstate how easy it was to pick up and manipulate. I was able to push something to production, learn from it, and adjust easily.
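The server skeleton really is that small. Here’s a sketch, with the view engine and paths as assumptions:

```js
const express = require('express');
const app = express();

app.set('view engine', 'pug');     // templating choice here is an assumption
app.use(express.static('public')); // compiled CSS, client JS, images

app.get('/', (req, res) => res.render('index'));

app.listen(process.env.PORT || 3000);
```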

At first, the surf report was built to request the three separate APIs when a user arrived at the website.

Bad situation

This proved problematic. If a single API request went awry, the whole app would come to a standstill.

To ensure users experienced a speedy and consistent view when loading the app, I built a cache. I hooked up several cron jobs that ran on the server. Each would send a request to a given API at an established interval, then dump the response into a MongoDB database.
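One of those cache-refresh jobs might look like this; a sketch assuming node-cron, the request package, and a 2.x-era mongodb driver, with the collection name, interval, and env vars as placeholders.

```js
const cron = require('node-cron');
const request = require('request');
const { MongoClient } = require('mongodb');

MongoClient.connect(process.env.MONGODB_URI, (err, db) => {
  if (err) throw err;

  cron.schedule('0 * * * *', () => {        // hourly, to match the weather feed
    request(process.env.WEATHER_URL, (err, response, body) => {
      if (err) return console.error(err);   // keep the last good data on failure
      db.collection('weather').replaceOne(
        { _id: 'latest' },
        { _id: 'latest', data: JSON.parse(body), updatedAt: new Date() },
        { upsert: true }
      );
    });
  });
});
```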

Good situation

With this in place, when a user accessed the web app, her request would hit the cache instead of a series of APIs.
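The read path for the ‘/’ route from the earlier Express sketch then fans out to three local collections instead of three remote APIs; the names below reuse the illustrative ones from the job above.

```js
// `app` and `db` come from the earlier sketches
app.get('/', (req, res) => {
  // one cheap local read per source; the driver returns promises here
  const latest = name => db.collection(name).findOne({ _id: 'latest' });

  Promise.all([latest('surf'), latest('weather'), latest('tides')])
    .then(([surf, weather, tides]) =>
      res.render('index', { surf, weather, tides }))
    .catch(() => res.status(503).send('Report temporarily unavailable'));
});
```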

Admittedly, there is a scenario where a user could get stale data, if the previous API request had led to an error. She would see data from the last non-faulty ping. In a way, this works fine for wave heights, as they tend to increase or decrease gradually over time. And the issue immediately solves itself on the next successful request.

I also imagined that hooking up to a local, reliable MongoDB database would lead to improved loading times on the client side. This was indeed the case, but not quite as much as I had imagined:

There you have it, a 0.21 second difference.

Sometimes, when you spend hours learning a new way to structure an application with client-side savings in mind… you save 0.21 seconds. I guess you win some, you lose some.

Later, I learned about the cloud database service mLab, which saved my application’s server from having to host the data itself. This ended up being very useful when porting my app to Heroku from its original environment on DigitalOcean.
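In practice the port was close to a one-line change; a sketch, with the env var name as an assumption (Heroku’s mLab add-on exposes the connection string as a config var).

```js
const { MongoClient } = require('mongodb');

// same driver calls as before; only the connection string changes
const uri = process.env.MONGODB_URI || 'mongodb://localhost:27017/surfstatus';

MongoClient.connect(uri, (err, db) => {
  if (err) throw err;
  console.log('connected to', db.databaseName);
});
```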


I hope this overview of Surf Status serves you as you engineer web apps of your own. Needless to say, there are still several opportunities to improve this Node.js-based web application.

If you have any suggestions or potential pull requests for the surf report, I’m all ears. The code repo is publicly available and ready to be forked. ;)

Aloha.
