The Misery & Joy of Building Another Reddit Book Scraper

Jacob E. Dawson
23 min read · Oct 4, 2017


#EDIT — May, 2018 — I shut down ReddReader but I hope you can still learn something from the process I detailed below!

Before We Start

To save some people some time, there are a couple of things that I will (and won’t) cover in this article:

  • This isn’t a ‘start-to-finish tutorial’ — it’s an in-depth look at the process I went through while building Reddreader, a web app that scrapes Reddit for book mentions and displays them by subreddit category each week. Hopefully you’ll learn from my processes & mistakes, but it’s not a copy-paste exercise.
  • I’ll share some code / techniques I used, but the main repo isn’t public at the moment. The package.json is at the end of this article if you want to know exact versions of each dependency.
  • The main tech stack used is MEAN w/ Angular 4.4 (Universal), so it’s JavaScript all the way down!
~60 hours of keyboard slapping later…

Who Am I?

I’m Jake (https://www.reddit.com/user/jacobedawson/), a Sydney-born Front-End Developer for Studio 3T in Berlin, Germany. I’ve been coding for 4 years, and I’m “self-taught” (no CS degree or formal training). On a scale of 1 to Wes Bos, I’m maybe around a 3, so please take any advice I share with a grain of salt :)

Yes, This is a HackerNewsBooks Rip-Off!

A few months ago there was a popular post on IndieHackers about a website called HackerNewsBooks, in which software engineer Ivan Delchev explained how he coded an app that scrapes Hacker News for book references and was earning around $300 USD per month from it — not enough to retire on, for sure, but still a nice chunk of passive pocket money. I was between projects, and I’m always looking to improve my coding skills by building things that are just a little beyond my ability, so this seemed like a great opportunity to apply the concept to Reddit. I spend a tonne of time on the site and love the nuggets of recommendation gold that pop up every day.

Copy, of a copy, of a copy…

Apparently I’m not the first person to try to make a Reddit book scraper :) I’m actually glad that I didn’t look for similar websites when I first decided to build this project, or I probably would have ended up demoralized and moving on to something else. I didn’t notice the other sites until a couple of weeks later, and by that point I was already invested in finishing. I’ve realized that when you decide to build something and discover that someone else has already done something similar, you can end up thinking ‘what’s the point?’. If the project is gigantic, it pays to look for competitors in advance, but for a side project you’re better off sticking to your goal regardless of similar products. Once you’ve finished, you can look around and see how other people executed the same concept.

Here’s a link to one of those other projects if you’re interested in seeing a different permutation of a Reddit book scraper:

http://reddittopbooks.com

Why I decided to do this

I’m a fan of trying to maximize return on effort, and in terms of learning to code, nothing has helped me internalize lessons & techniques as much as coding side projects that I care about. An important part of this approach is understanding that it’s OK to build something ‘small’ as long as it’s something that you care about — it’s really key that you have a passion for the project, because there are definitely going to be times when it’s honestly just not that fun — times where you’ll be bug-hunting for a few hours, or writing code & then realize that in order to make the project work as intended you have to tear that code apart and re-write it. You’ll only have the ability to persist through these slumps if you actually care about the result, which is why I would recommend to every beginner that you limit the number of ‘todo apps’ you build while learning. Tutorials are fantastic resources, but building something from a tutorial will rarely give you the excitement & reward of turning something from an idea in your mind into a working piece of code.

Planning the project, deciding on tech

The first thing I did after reading that article was to outline the approach I thought would be needed to achieve this with my current stack (MEAN), and I visited the HackerNewsBooks (HNB) website to try to gain some more insight. I also sketched out what I thought would be the main challenges:

  • Finding subreddits with enough book submissions
  • Parsing / searching the text
  • Creating a suitable DB structure
  • Setting criteria for whether a book is OK to be put on the list
  • Deduplicating results
  • Making sure each link points to an actual book (and not another product)

Writing down challenges helps not only to clarify the types of problems that you’re likely to face, but it also forces you to search for technology that will handle those challenges, rather than just selecting whatever the hot new JS library / framework is.

In addition to those challenges, I also made a list of requirements:

  • Website with updated lists
  • Weekly Newsletter
  • Affiliate Links
  • Server
  • Basic Marketing
  • Google Analytics
  • Mobile Friendly

I didn’t go into great detail while making the list of challenges & requirements — even a single line helps me start planning, and I’d recommend at least scribbling something down on paper before you start. Once I had both lists I created a new list in Wunderlist for basic progress tracking:

Delicious strike-throughs

Scraping Reddit (Kind Of)

While Ivan had used Python to build HNB, one of the first things I had to do was decide how to gather data from Reddit with JS. There were a couple of paths I could take here — using a combo like Cheerio & Request to fetch pages and then parse the HTML on the server, or using the Reddit API to pull in more structured data and then scrub it. While there is a really great Python wrapper of the Reddit API called PRAW, I wasn’t sure if there was something of similar quality for JS. Luckily I found Snoowrap, a JS wrapper that offered a clean hook into the Reddit API and some great documentation. While pulling in data via Snoowrap might not technically qualify as out-and-out web scraping, each block of post data still had to be scrubbed & parsed for links, so that’s my story and I’m sticking to it.

Using Snoowrap’s “getSubmission” to fetch the data of a single post
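
In rough terms, the Snoowrap setup looks something like this (a minimal sketch, with placeholder credentials and a made-up submission ID):

const snoowrap = require('snoowrap');

// Credentials come from a Reddit 'script' app; these env var names are just placeholders
const reddit = new snoowrap({
  userAgent: 'reddreader-bot',
  clientId: process.env.REDDIT_CLIENT_ID,
  clientSecret: process.env.REDDIT_CLIENT_SECRET,
  refreshToken: process.env.REDDIT_REFRESH_TOKEN
});

// Fetch the full data for a single post (title, selftext, score, comments, etc.)
reddit.getSubmission('6zt0oe').fetch()
  .then(post => console.log(post.title, post.url, post.num_comments));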

Ivan also mentioned that he had used ElasticSearch as part of his process, but I wasn’t quite ready to go down the path of wholesale scraping & parsing massive amounts of data — my main goal was to grab the top subreddit posts, scan them for Amazon links and then gather the successful results into a MongoDB database. The first step was to hit the Reddit API via Snoowrap, then check each post for book links:

Refactoring will probably involve using Ramda’s compose to combine functions into one call
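
Stripped of the messy bits, the gist of that first step is something like the sketch below: pull the top posts of the week for a subreddit, then keep only the ones that contain Amazon links. It reuses the reddit client from the previous snippet, and extractAmazonLinks() is a hypothetical helper along the lines of the regex sketch further down:

// Grab the top posts of the week for one subreddit, then keep only
// the posts whose selftext contains at least one Amazon link
function getBookPosts(subredditName) {
  return reddit
    .getSubreddit(subredditName)
    .getTop({ time: 'week', limit: 50 })
    .then(posts => posts
      .map(post => ({
        redditId: post.id,
        title: post.title,
        links: extractAmazonLinks(post.selftext || '')
      }))
      .filter(post => post.links.length > 0));
}

getBookPosts('Fantasy').then(posts => console.log(`${posts.length} posts with book links`));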

As you can see in the image above, there is plenty of room for refactoring the code, but I think it’s important to adhere to the following rule, attributed to Kent Beck:

“first make it work, then make it right, and, finally, make it fast.”

To parse the results for Amazon links I just use a simple RegEx that includes the smile.amazon & amzn variants, then scrub those links for random characters that were showing up:

the linkCleaner() function is so ugly I refuse to show it
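
For the curious, a rough approximation of that pattern (not my exact RegEx, and the ugly linkCleaner() stays hidden) looks like this:

// Matches amazon.*, smile.amazon.* and amzn.to / amzn.com links
const amazonLinkPattern = /https?:\/\/(?:www\.|smile\.)?(?:amazon\.[a-z.]{2,6}|amzn\.(?:to|com))\/[^\s)\]"'<>]+/gi;

function extractAmazonLinks(text) {
  return (text.match(amazonLinkPattern) || [])
    // Strip trailing punctuation and markdown leftovers that sneak into the match
    .map(link => link.replace(/[).,\]]+$/, ''));
}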

Amazon Affiliate Program / Product Advertising API

In order to earn referral revenue when a book is sold via Amazon I first had to sign up to the Amazon Affiliate Program. Signing up isn’t too lengthy a process, and once you’re done you receive an Associate tag (a tracking ID) that you append to product links on your site. This is the same thing HNB is doing, so for the first phase of this project that’s what I went with.

Since the project is running on Node, I looked around for a JS wrapper for the Amazon Product Advertising API, and found a nice one: https://github.com/t3chnoboy/amazon-product-api

With that I’m able to look up a product with an itemID (which I parse from the Amazon links in the ‘successful’ posts), and then return the results that contain an ISBN, to make sure that Reddreader doesn’t fill up with links to USA-flag running shoes and glow-in-the-dark condoms, or vice versa.
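
The lookup itself boils down to something like this sketch, assuming the ASIN has already been parsed out of the link (the response comes back via xml2js, so treat the exact property access as illustrative):

const amazonAPI = require('amazon-product-api');

const amazonClient = amazonAPI.createClient({
  awsId: process.env.AWS_ACCESS_KEY_ID,
  awsSecret: process.env.AWS_SECRET_KEY,
  awsTag: process.env.AMAZON_ASSOCIATE_TAG // the affiliate tracking ID
});

// Look up a product by ASIN and keep it only if it has an ISBN (i.e. it's actually a book)
function lookupBook(asin) {
  return amazonClient.itemLookup({
    idType: 'ASIN',
    itemId: asin,
    responseGroup: 'ItemAttributes,Images'
  })
  .then(items => {
    const attrs = items[0] && items[0].ItemAttributes && items[0].ItemAttributes[0];
    return attrs && attrs.ISBN ? items[0] : null;
  });
}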

Design process & inspiration

Since I’m not a designer, when I’m thinking about an app design I like to consult Dribbble to see what kinds of things much more talented people have created. I’ll generally have an idea about the look & feel I’m going for, but IMO it pays to evaluate professional approaches. The cool thing about Dribbble is that you can collect shots into ‘buckets’ for design inspiration. I like to find several designs that fit the rough idea I have in mind and save them to a project bucket. Later on, when I’m coding the design, I’ll refer back to these to keep my spacing, color balance & typography in check. In some ways it’s like getting a designer for free — as front-end developers it’s our job to replicate assets given to us by a designer, so it’s not too much of a leap to borrow ideas from designs online to make sure your side projects look tasty.

I get buckets

For mocking up designs I usually first sketch something out on paper, and then either code the layout directly in the browser or do a very basic wireframe in something like Balsamiq. For Reddreader I tried out MarvelApp for the first time, and actually found it pretty easy to use and very clean. Other options include InVision, Zeplin & Sketch, although the Windows crowd doesn’t get the option of using Sketch, unfortunately.

Having a mockup — whether it’s on paper or online — can really speed up the process of building a layout, since you can ‘chunk’ your design into pieces that basically tell you what components are going to be needed and give you a general idea of the CSS rules / techniques that you’ll be using for each segment.

A first draft of the mobile layout built in MarvelApp

Even though I always create some kind of mockup, I never build a full version of the layout in Photoshop & co. I like to code designs directly into the browser, as I get feedback straight away on what the user is going to see, right down to the exact dimensions & resolution. Chrome Dev Tools is amazing for this, since you can use responsive & device views as well as different device-pixel ratios. One thing I’ve also learned is to get the basic building blocks up as quickly as possible before fine-tuning them. Below you can see a super rough early layout built on top of Bootstrap 4:

Rough Bootstrappy layout, colors be damned.

A tool that I also really appreciate is a Chrome extension called Pesticide. Pesticide overlays a grid on top of your website, enabling you to see how each block is being created and fits together. In combination with the Dev Tools responsive layout it’s like wearing X-ray goggles while testing your coded designs.

Pesticide helps me visualize & debug layouts

Building the Angular App

I’ve been using Angular for the past year, having jumped on board once version 2 came out. Now we’re almost up to version 5, and while sometimes it can be hard to find up-to-date documentation, in combination with the Angular CLI development is generally pretty smooth. The Angular CLI abstracts away a lot of the wiring that used to be done manually, such as importing components into modules and writing boilerplate code. Once you start using the CLI for development I can’t imagine anyone wanting to go back to writing all that setup code manually. I also keep the default Webpack setup since I’m not up to the level of needing a very specific build process beyond the standard SCSS to CSS / minification / prefixing pipeline. Still, it’s worth noting that the Webpack config can be ‘ejected’ for those of you that want to get your hands dirty.

Writing components is a breeze with Angular, and it’s nice to have everything style-related scoped. I write some global styles & variables in a Sass folder, create some component partials for things like cards & buttons and for everything else related to styles I write them in the component’s .scss file (which later gets compiled into regular CSS).

When I start a project I’ll just use hardcoded values as placeholders, and then bit by bit I’ll replace the hardcoded values with dynamic data. Having the hardcoded value to begin with also helps with layout development, because you have an idea about how things will be presented when they’re filled up. Once I had the core of Reddreader built I was ready to start pouring data into the components, and that’s when Augury really comes in handy. Augury is a Chrome Extension that enables you to see the data & services that each component has access to. This is helpful if you run into bugs or aren’t seeing the data presented in the way you expect it to be:

Augury is an indispensable development tool for Angular

Express Server / API

When I started planning Reddreader I knew that I would need an API of some kind, and initially I thought about using GraphQL, but when I started writing down requirements I realized that I only needed around 3 separate API calls, and since I have experience with Express / Express Router I decided to stick with that, leaving GraphQL for another project where I will be able to delve a bit deeper into it.

I’ve used Express as my Node server framework for multiple projects, and it’s so widely used that there’s plenty of documentation & Stack Overflow posts in case you get stuck somewhere. I also haven’t tried the serverless paradigm yet, but I have a project coming up where I’ll get to use AWS Lambda, so for now it’s Express all the way.

Reddreader actually doesn’t require too much going on underneath the hood, with around 3 API calls, some Mongoose models and the Reddit / Amazon parsing logic. All included I think the server-side code is ~600 loc.

Hooking Up with Services

On the front-end, I use a couple of Angular services to interface with the API. I really enjoy the separation of concerns that services give you, and they also keep inter-component communication straightforward. While I did use the EventEmitter for a couple of simple tasks, the primary architecture involves the Home component (main container) interfacing with a single Post service, which uses the HttpClient to request data from the server via API calls:

post.service.ts
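
The service itself is small. Here’s a trimmed-down sketch of post.service.ts (the endpoint path and method name are illustrative, not the exact ones in the repo):

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

@Injectable()
export class PostService {
  private baseUrl = '/api/catalog'; // hypothetical endpoint

  constructor(private http: HttpClient) {}

  // Fetch the catalog of lists (one per subreddit) for a given week & year
  getCatalog(year: number, week: number): Observable<any> {
    return this.http.get(`${this.baseUrl}/${year}/${week}`);
  }
}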

Cron Jobs for Automated Server Tasks

To this point the functions that called the Reddit API were being triggered manually, which is fine for testing and getting everything set up, but wasn’t going to work moving forward. The great thing about coding is being able to hand routine, repeatable jobs to the computer, and that’s exactly what a cron job is good for. I hadn’t actually worked with cron tasks in NodeJS before, but after a quick Google search I found a nice package called node-cron (https://github.com/kelektiv/node-cron, published on npm simply as cron), which uses standard cron syntax combined with standard Node callback functions:

const CronJob = require('cron').CronJob;

const job = new CronJob('00 30 11 * * 1-5', function () {
  /*
   * Runs every weekday (Monday through Friday)
   * at 11:30:00 AM. It does not run on Saturday
   * or Sunday.
   */
}, function () {
  /* This function is executed when the job stops */
},
  true,            /* Start the job right now */
  'Europe/Berlin'  /* Time zone of this job */
);

The cron syntax takes a little getting used to, but essentially you end up with a pattern that tells node-cron to run a particular function (or set of functions) at a certain time — right down to the minute (or even second depending on which cron system you use).

 ┌───────────── minute (0 - 59)
 │ ┌───────────── hour (0 - 23)
 │ │ ┌───────────── day of month (1 - 31)
 │ │ │ ┌───────────── month (1 - 12)
 │ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday;
 │ │ │ │ │               7 is also Sunday on some systems)
 │ │ │ │ │
 * * * * * command to execute

This was also perfect because with certain APIs (like Reddit) you can run into rate limiting issues — when you hit the API too many times over a certain period of time it might reject your request — so I broke up the array of subreddits that I wanted to hit into blocks, with each one separated by a chunk of time, so every 24 hours I can grab new posts and add them to that week’s list:

Breaking the API calls into chunks to avoid the rate limit
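
A simplified version of that chunking looks like this. The subreddit list, chunk size and timings are placeholders; getBookPosts() is the helper sketched earlier and saveToDatabase() stands in for the real persistence logic:

const CronJob = require('cron').CronJob;

const subreddits = ['Fantasy', 'books', 'suggestmeabook', 'printSF', 'booksuggestions', 'AskHistorians'];
const CHUNK_SIZE = 2;

// Give each small chunk of subreddits its own hour of the night,
// so the Reddit API never sees too many requests in a short window
for (let i = 0; i < subreddits.length; i += CHUNK_SIZE) {
  const chunk = subreddits.slice(i, i + CHUNK_SIZE);
  const hour = 3 + i / CHUNK_SIZE; // 03:00, 04:00, 05:00 ...

  new CronJob(`00 00 ${hour} * * *`, () => {
    chunk.forEach(sub => getBookPosts(sub).then(saveToDatabase));
  }, null, true, 'Europe/Berlin');
}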

Let’s Get Live!

Many thumbs up to Jason Lengstorf and his fantastic guide to setting up Let’s Encrypt on a DigitalOcean server. Honestly, that guide has helped me out so much, making the set-up of a free SSL cert bearable and actually pretty fun. Following along with the guide takes you through setting up a server on Ubuntu, creating privileged users on Linux, adding SSH keys, hardening the server, setting up NGINX and Let’s Encrypt and plenty more. I can’t say enough good things about that blog post, it’s a life saver.

For the MongoDB database I decided to give MongoDB’s own ‘Atlas’ a go — they have a free 500 MB sandbox cluster with some nice dashboards. I hooked up to the remote db via Studio 3T, since it’s the best MongoDB IDE on the market. And they pay me because I work for them #biased

What I find pretty cool is developing on a local database and fine-tuning a structure, then being able to copy the MongoDB collections directly to the remote db and hooking things up in a couple of lines of code. All of this went off without a hitch and I was ready to start testing on the live site!

Back to the drawing board

Well, that lasted for a couple of days. It didn’t take long to realize that Reddreader wasn’t crawlable for SEO purposes, and that the structure I’d chosen for the database wasn’t very flexible. I had totally overlooked server-side rendering, and the URL / API structure wasn’t going to cut it once I started scraping Reddit every week (and, eventually, every day). I was going to have to rewrite the database and convert the app to Angular Universal.

Moving to Angular Universal for SSR

Well, this is the state of JS, 2017. We went from server-side rendered code, to client-side SPAs, and now some kind of weird offspring that entails server-side rendering to appease the Google god and SPA client-side rendering for everything after the initial page load. Here is a technical diagram of the current situation:

I’m getting a persistent unhandled Promise-rejection error, Jerry

While sometimes Google can crawl SPAs, that doesn’t mean that Twitter or Facebook previews will work, and I’ve heard plenty of stories of websites that just show up as “…loading” in the search results. There are services such as Prerender.io that offer to render your pages and serve them to Google, but that’s a paid service (above a threshold) and leaves you beholden to yet another 3rd party, so I decided to incorporate SSR myself.

The current solution to implementing server-side rendered, fully SEO-able websites in the Angular ecosystem is Angular Universal. It works, and I guess it’s a start, but essentially we end up adding a bunch of extra workarounds on top of our SPA in order to make it work…kind of like a website from 2007. On top of that, as much as I generally enjoy developing with Angular these days, the documentation isn’t exactly best in class, especially the Angular Universal docs, which (as of writing) still reference Angular 2. It strikes me as kind of strange that a company as big as Google has such underwhelming docs for their flagship framework, so learning how to implement SSR involved some study on Udemy and a fair bit of searching.

From Typescript Back to JS

An issue I ran into straight away was that I had started this project with the idea of running TypeScript fullstack, using this pretty cool repo as my backbone: https://github.com/DavideViolante/Angular-Full-Stack. However, when I tried to implement SSR I started running into errors importing modules on the server side. I probably missed something, but I couldn’t work it out, so in the end I had to de-TypeScript the server, which now runs as vanilla JS.

As it turns out, adding Angular Universal is not such a big deal. You can get 95% of the way there with basically any standard Angular 4 / CLI project by following this guide: https://github.com/angular/angular-cli/wiki/stories-universal-rendering. It takes you through the process step-by-step: essentially you split your app into 2 modules, have the server render and serve the server module (with all of its pre-rendered HTML) on the initial page load, and then the client-side app takes over from there.

Because you create 2 versions of your app, you end up with 2 bundles — usually one in the standard /dist folder and one in a /dist-server folder. You have to tell the server (Express, in my case) to serve the server bundle by name, which is kind of a hassle because every time you build, the bundle gets a different hash in its filename. Yuk. A handy trick I came across online avoids this by turning off output hashing for the server bundle. Here’s an example of how I removed it in my package.json build command:

This saves you having to retype the hash number every time you build.
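
The relevant scripts end up looking roughly like the following. These names come from the Angular CLI universal-rendering guide rather than my exact setup; the key bit is --output-hashing=none on the server build:

"scripts": {
  "build:client-and-server-bundles": "ng build --prod && ng build --prod --app 1 --output-hashing=none",
  "webpack:server": "webpack --config webpack.server.config.js --progress --colors",
  "build:ssr": "npm run build:client-and-server-bundles && npm run webpack:server",
  "serve:ssr": "node dist/server.js"
}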

You Down With SEO?

Getting server-side rendering up & running is just the beginning when it comes to SEO. Next steps were adding Google Analytics (via Google Tag Manager), adding a sitemap.xml file to the site and creating custom titles & descriptions for every page, which can be done programmatically with Angular’s Meta & Title services via the Platform Browser module. Not to beat a dead horse, but once again the Angular documentation is sometimes so sparse that it’s not easy to find out the canonical way to achieve certain results, especially when you add SSR into the mix.
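
Setting the tags programmatically is simple enough once you track down the right services. A sketch of how it looks inside a component (the component name and copy are placeholders):

import { Component, OnInit } from '@angular/core';
import { Meta, Title } from '@angular/platform-browser';

@Component({
  selector: 'app-list-page',
  template: '<!-- list markup here -->'
})
export class ListPageComponent implements OnInit {
  constructor(private titleService: Title, private metaService: Meta) {}

  ngOnInit() {
    // Set a custom title & description for this page (works with SSR too)
    this.titleService.setTitle('Top books on /r/Fantasy this week | Reddreader');
    this.metaService.updateTag({
      name: 'description',
      content: 'The most recommended books on /r/Fantasy this week, gathered from Reddit posts.'
    });
  }
}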

Having worked in SEO before, I know this is the boring but necessary housekeeping that has to be done to make your site presentable to search engine spiders (and discoverable in web searches). On top of that, you can (and probably should) add Open Graph data for the big social media players, including Twitter, Facebook & Google.

Each network has an Open Graph validator to double check the way it should appear

Duplicates, duplicates & goddamn duplicates

Here’s a little something to have a good giggle at, if terrible, awful programming-related nightmares make you laugh. MongoDB has had an issue open for 7 years and counting about unique indexes failing to prevent duplicates within embedded arrays of the same parent document:

Me too, tony kerz, me too.

Although it’s been argued that this isn’t a bug and is just how MongoDB works, this issue caused me several hours of Googling, StackOverflowing and swearing before I bit the bullet and refactored my database. Good times. I was getting duplicate posts all over the joint and it was kind of killing my vibe.

The main takeaway here is that MongoDB is great at certain things, like storing unstructured documents, and not so good at other things that people generally associate with databases, like joins. MongoDB has improved a lot and there are methods that can simulate these things (like Mongoose’s Populate or the aggregation framework), but it drives home a point that I feel I neglected in the beginning — know why you’re using a certain technology and what its limitations are. This includes knowing what something is good at versus what can only be done with additional headaches. I learned some more about how MongoDB works, but next time I’m going to take more time choosing a db based on what I want the project to achieve, rather than just selecting MongoDB by default.

After trying to force MongoDB to recognize unique indices for subdocuments within a single parent, I gave up and moved each post into its own collection, referencing each one by ObjectID from a list using the ‘ref’ attribute and simulating a join with the Populate method:

Nested populate methods are pretty awesome
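
The resulting schemas look something like this sketch (field names are simplified and illustrative, not the exact ones in the repo):

const mongoose = require('mongoose');
const Schema = mongoose.Schema;

// Each Post is its own document, referenced by ObjectId from a List
const postSchema = new Schema({
  redditId: { type: String, unique: true },
  title: String,
  links: [{ isbn: String, amazonUrl: String, thumbnail: String, author: String, bookTitle: String }]
});

const listSchema = new Schema({
  subreddit: String,
  week: Number,
  year: Number,
  posts: [{ type: Schema.Types.ObjectId, ref: 'Post' }]
});

// Compound unique index: one list per subreddit per week per year
listSchema.index({ subreddit: 1, week: 1, year: 1 }, { unique: true });

const catalogSchema = new Schema({
  week: Number,
  year: Number,
  lists: [{ type: Schema.Types.ObjectId, ref: 'List' }]
});

module.exports = {
  Post: mongoose.model('Post', postSchema),
  List: mongoose.model('List', listSchema),
  Catalog: mongoose.model('Catalog', catalogSchema)
};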

This approach actually turned out quite well, as I can enforce unique indexes in each collection, even compound indexes like (Subreddit + Week + Year), to make sure I avoid duplicating content. There is still the occasional issue where multiple posts in the same subreddit reference the same book, but that’s an organic result so I’m not trying to remove those at the moment.

Here is how the document hierarchy ended up being arranged:

  • Each week there is a single Catalog that contains an array of Lists e.g. results for /r/Fantasy.
  • Each List contains an array of Posts for that subreddit.
  • Finally, each Post contains an embedded array of Links e.g. a single book result, including ISBN, Amazon link, book thumbnail, author & title.

Below is a view of the 3 separate collections in my MongoDB database; Catalog > Lists > Posts:

1. Catalog > 2. Lists > 3. Posts

There are some other things that are occasionally a bit finicky with MongoDB that are just part of the NoSQL format. A lot of the time additional transformations of the results have to happen once the db has returned raw data. For example, sorting a list by the number of posts isn’t possible from within a nested Populate call unless an array length is appended to the document. Therefore, I resorted to using a sort on the results before sending them back from the server to the front-end:
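
In sketch form, using the model and field names from the hypothetical schemas above:

const Catalog = require('./models').Catalog; // hypothetical path

function getCatalogForWeek(week, year) {
  return Catalog.findOne({ week: week, year: year })
    // Nested populate: Catalog -> Lists -> Posts
    .populate({ path: 'lists', populate: { path: 'posts' } })
    .lean()
    .exec()
    .then(catalog => {
      // Mongoose can't sort by a child array's length inside populate,
      // so sort the lists by post count after the query returns
      catalog.lists.sort((a, b) => b.posts.length - a.posts.length);
      return catalog;
    });
}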

Live! Part 2.

After restructuring the database & adding SSR with Angular Universal, the site was ready to go live again. This time it went well and the site was functioning without errors. However…

Random Errors & Weird Browser Sh*t

I no longer expect any project to work 100% perfectly on the first go. My innocence was lost many years ago, when my newly home-assembled $2000 custom PC fried itself into oblivion on first startup because…I somehow didn’t add 6 little screws worth 4 cents to separate the motherboard from the case. Luckily, these days most errors are less potentially fatal and pretty easy to fix.

Oh sweet, thanks Safari!

Although TDD is definitely a smart play, in the front-end world there are some errors that you’re only going to find ‘in the wild’. Take the above error for example. After some googling it turns out that Safari in particular needs a polyfill, because it doesn’t support the Internationalization API that Angular relies on. You can’t make this sh*t up.
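
The usual fix is a couple of lines in polyfills.ts after installing the intl package, roughly:

// polyfills.ts
// Older Safari versions lack the ECMAScript Internationalization API (Intl)
// that Angular's date/currency pipes rely on, so pull in the polyfill:
// npm install intl --save
import 'intl';
import 'intl/locale-data/jsonp/en';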

To help locate most of these errors before trying to generate traffic, I like to use a cross-browser service like BrowserStack. There are plenty of others, and it’s probably possible to set up your own on a server, but for quick & easy testing across devices, OS’s and browsers an online service is well worth the price.

The Horror of Flexbugs on Safari is not Good For You

Caniuse is also a great (free) website to consult while deciding which browsers your project should support — these days I find most errors result either from some Flexbox incompatibility or various JS problems that can usually be solved with a polyfill.

Setting up a Mailchimp Newsletter

One of the main subjects of the IndieHacker interview with Ivan was the newsletter that he had set up to grow his user-base. In order to do that he had built a newsletter form and was using a pretty aggressive pop-up modal to try to collect emails. I think he’s removed the pop-up since then, and I’ve decided against an automatic pop-up because personally I hate them with a passion. Let me see what the value is before shoving your junk in my face, good sir!

I figured for this stage I would either use SendGrid or Mailchimp for collecting emails, and I ended up going with MailChimp. Using the MailChimp API (https://www.npmjs.com/package/mailchimp-api-v3) I hooked up the front-end via a newsletter service to send an email input to the server, and then push that to a Reddreader newsletter list I created in the Mailchimp dashboard.

The newsletter collection is working, but I’m actually still not seeing a confirmation email pushed to users — signing up for the newsletter just adds a user’s name to the list without sending a follow-up. The plan is to use the newsletter group to send a curated email of the best books that show up on Reddreader each week, along with descriptions, reviews & links. You can sign up to try it out here: https://reddreader.com/newsletter
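
For reference, the server-side hookup with mailchimp-api-v3 boils down to something like this (list ID and field names are placeholders); switching the member status to 'pending' is the usual way to have MailChimp send its double opt-in confirmation email:

const Mailchimp = require('mailchimp-api-v3');
const mailchimp = new Mailchimp(process.env.MAILCHIMP_API_KEY);

// Express route handler: take the submitted email and add it to the Reddreader list
function subscribe(req, res) {
  mailchimp.post(`/lists/${process.env.MAILCHIMP_LIST_ID}/members`, {
    email_address: req.body.email,
    status: 'subscribed' // 'pending' would trigger a confirmation email instead
  })
  .then(() => res.json({ subscribed: true }))
  .catch(err => res.status(err.status || 500).json({ error: 'Subscription failed' }));
}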

Next steps

  • There’s still some cleaning up to do with the code, and also a few inconsistencies I’d like to improve on the site, so the next steps are to refactor & optimize, add some flourishes like animations & additional book pages, and continue looking for subreddits that have a good supply of book recommendations.
  • I’d like to start parsing for Goodreads and other links.
  • I still need to work on dynamically updating page descriptions based on the subreddit / week selected.
  • Integrate a ‘scroll-to-top’ button that plays nice with SSR
  • I’d like to add more information for each book, such as reviews from GoodReads & similar book review websites
  • Advanced-level stuff would be to find all book mentions, regardless of whether they are linked or not, but that’s a fair way beyond my ability right now. I’ve thought a bit about how it could be done — you’d probably build up a collection of phrases that tend to precede or follow book authors or book titles, like “written by” or “was a good book”, and then grab the block of text around that phrase. After that you could probably run the block of text against a database or API of book titles or authors until you find a match, then search the Amazon API for books that match the results. I’m not sure if that’s even the right approach.

Thanks for Reading!

One thing that was really strongly confirmed to me while building Reddreader was how much the coding community relies on the free sharing of knowledge. Making code do stuff that works is only possible with the help of others, and between StackOverflow, Reddit and fantastic blog posts like Lengstorf’s DigitalOcean / Let’s Encrypt guide or Josh Beam’s plain-English explanations of MongoDB’s Populate I managed to piece together an understanding of how to make these disparate technologies play nice. If you’re a beginner coder or even a fresh professional still trying to make sense of this crazy ecosystem, I’d urge you to share the lessons you learn when the time is right.

Thanks for taking the time to let me share my experience, I hope you were able to learn something — whether it’s what to do or what not to do!

Cheers,

Jake

/*===== Extra Stuff ===== */

Reddreader Package.json

Other Tools & Extensions I Used

Wunderlist — Smooth & easy todo list. I use Wunderlist every single day.

Strict Workflow — Pomodoro timer + distracting website blocker

Pesticide — A chrome plugin for quickly debugging css layout issues by toggling different colored outlines on every element.

Chrome Dev Tools — Press F12 for the Goodness

BrowserStack — Multi-device & OS testing online

Stylish — Custom themes for any website. Roll your own. Dark everything for midnight coders.

ColorZilla — Advanced Eyedropper, Color Picker, Gradient Generator and other colorful goodies

Vimium — Surf the web with keyboard only. SO helpful once you get used to it.

Toggl — My go to time-tracker, for freelancing & side-projects

Github — Make sure you use version control, kids, don’t end up like this guy.

Augury — Extends the Developer Tools, adding tools for debugging and profiling Angular applications.

MarvelApp — Turn sketches, mockups and designs into web, iPhone, iOS, Android and Apple Watch app prototypes.

Dribbble — Show & Tell for designers

Mailchimp — Newsletter automation

ConEmu — A sweet Windows terminal that offers multiple integrations (Powershell, GitBash, etc) & tabbed browsing

Visual Studio Code — A free, fantastic & super-extensible code editor. After trying Webstorm, Sublime & Atom, VSC won my heart.

MDN — Very complete web technology documentation

Social Links & Projects

Follow Me on Twitter

Check out my Github

Troll me on Reddit

Practice Technical Interviews on Jumpjet

Find Books on Reddreader
