Create an Amazon wish-list from the Facebook pages you “like” (with Zero server)
When my friends ask me what I want for my birthday I usually draw a blank.
Then I start to think. The thinking lasts about three months, and by the time I'm done, my birthday is over, I have no friends left, and I use my own money to buy self-help books on Amazon Prime.
But there is one place where I’ve spent literally hours of my life creating a catalog of the things I like: Facebook. When I really like an artist, a movie, a product or anything really, my first reaction is to go to Facebook and “like” his/her/its page immediately.
If only there was a place where I could display a list of my Facebook “liked” pages, and search for Amazon products related to that page!
That’s how the idea for this app came about. It consists of four parts:
- Fetch your “liked” pages from Facebook
- Fetch products related to a keyword from Amazon
- Create the API
- Build the UI
I stumbled upon Zero server’s GitHub page and the idea sounded too good to be true. It was a perfect match for my project, so I decided to take it for a test drive.
To spoil the ending a little: some of its features are really clever and work perfectly, but my personal experience with it has been riddled with bugs, memory crashes and very long startup times, and to be honest I don’t think I will use it again before it matures a little.
Are you local?
An important lesson that I learned from building websites with Gatsby is that building static websites / apps makes life 100% more tolerable.
With this project, the only end user is you. You want to access your own personal data, from your own locally-launched server, from your own command line. You don’t need to worry about deployment, security, bugs, best practices or anything.
If your application sucks, you’re the only person that will suffer.
Fetching your liked pages from Facebook
To fetch your liked pages from Facebook, you need a Facebook account and an application.
Create a Facebook application
We need an application to be able to query the Facebook Graph API.
The Facebook Graph API is the endpoint where you can access any public information that Facebook has on you or anyone, depending on your app’s privacy settings.
If you don’t already have a Facebook application, you can use the following tutorial to create a new one:
Creating application from Facebook Developers
Simulate your query
Before starting to code, we can use the Graph API Explorer, which is a tool developed by Facebook to query their API.
We’re going to create a script that we will run just once, to fetch our own “likes” and save them to a local database. That way we don’t have to manage access tokens from our script: we can just generate one from the Graph API Explorer and hard-code it into our script.
On the “address” bar, you can see “me?fields=id,name”.
This is where you define the properties that you want to retrieve for the user — “me” in this case, which means you; “me” will be replaced with your Facebook user ID internally.
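For example, the “me?fields=id,name” query returns a small JSON object (the values below are made up):

```json
{
  "id": "1234567890",
  "name": "Your Name"
}
```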
For this project, we need to fetch the name of the pages you liked, so we have to consult the Graph API Reference to find the name of this property.
Graph API Reference - Documentation - Facebook for Developers
A quick search for “likes” brought me to this particular property:
Sounds about right.
First we have to update the Permissions that we will use for this request:
Then we have to update the “address” bar with the “likes” property.
And trigger the Submit button:
Now we have our access token, and we know the parameters that we must send to the API to retrieve our user likes.
We have all the information we need to start to code!
Query the API from your script
We will create a small script that does the same query, but programmatically:
This is what the final script looks like. We’re using the “fb” npm package to query the “me/likes” path, which is the equivalent of the “me?fields=likes” query that we saw earlier.
Facebook’s pagination uses cursors: as long as you haven’t hit the last page of results, the API sends an “after” value in “paging.cursors.after”, a token that you must send with your next query to get the next page.
So in a loop, we concatenate all the elements returned by the “me/likes” query until it returns an empty array.
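That paging loop can be sketched like this (the `fetchAllLikes` helper is an illustrative name of mine; only the “fb” package calls shown in the comment come from the library’s real API):

```javascript
// Fetch every page of "me/likes" by following the "after" cursor until
// Facebook returns an empty page. The API-calling function is injected,
// which also makes the loop easy to test.
async function fetchAllLikes(apiCall) {
  // apiCall(params) -> { data: [...], paging: { cursors: { after } } }
  let likes = [];
  let after;
  while (true) {
    const res = await apiCall(after ? { after } : {});
    likes = likes.concat(res.data);
    // Stop on an empty page or when no "after" cursor is returned.
    if (res.data.length === 0 || !res.paging || !res.paging.cursors.after) break;
    after = res.paging.cursors.after;
  }
  return likes;
}

// With the real "fb" package it would be wired up roughly like this:
// const FB = require('fb');
// FB.setAccessToken('TOKEN-FROM-GRAPH-API-EXPLORER');
// const likes = await fetchAllLikes(params =>
//   new Promise(resolve => FB.api('me/likes', params, resolve)));
```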
Fetch products related to a keyword from Amazon
Accessing the API
It takes a couple of minutes or less to get access to Facebook’s Graph API.
To get official access to Amazon’s “Product Advertising API”, you must send an application, which will be reviewed by a real Amazon employee made of flesh and blood, with answers to questions like “How many millions of unique visitors does your website get daily?” and “Are you blood-related to Jeff Bezos?”.
After being rejected twice, I decided that I would abandon the idea and find a better way to spend my free time.
Simulating the query
The first thing we will do is to go to amazon.com and to search for …erm … something.
We can remove the “&ref=…” bit and the search still works. So we know that we can search for anything by prefacing our query with “https://www.amazon.com/s?k=”.
We need the details about each result, so we use a super-secret tool called the inspector (right-click / Inspect in Chrome).
There’s a lot of information, but if you’ve watched the Matrix movies many times in a row like I have, you can figure out how to access the interesting data pretty easily, with CSS selectors.
Walking on the wild side
Since Amazon won’t let us use its API, we have no choice but to “scrape” the search results.
According to Google, “scraping” means “spreading (butter or margarine) thinly over bread”, but in this case I think the right definition is “copy (data) from a website using a computer program”.
There is an npm package that does this easily: scrape-it.
So using the search URL and the CSS selectors that we have found in the previous chapter, we can build the following script:
It’s so simple it speaks for itself: we name the fields we want to retrieve image, title, price, link and stars, and for each of them we specify the CSS selector and the attribute of the matching node.
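A sketch of such a script, assuming scrape-it’s `listItem` syntax (the CSS selectors here are guesses that depend on Amazon’s current markup, and `buildSearchUrl` / `searchAmazon` are names I made up):

```javascript
function buildSearchUrl(keyword) {
  // Anything after "&ref=..." can be dropped; only the "k" parameter matters.
  return 'https://www.amazon.com/s?k=' + encodeURIComponent(keyword);
}

function searchAmazon(keyword) {
  const scrapeIt = require('scrape-it'); // npm package from the article
  return scrapeIt(buildSearchUrl(keyword), {
    products: {
      listItem: 'div.s-result-item', // one entry per search result
      data: {
        title: 'h2',
        image: { selector: 'img.s-image', attr: 'src' },
        price: '.a-price .a-offscreen',
        link: { selector: 'h2 a', attr: 'href' },
        stars: '.a-icon-alt',
      },
    },
  }).then(({ data }) => data.products);
}
```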
The result looks like this:
Creating the API
Now that we have our two libraries (one for fetching Facebook likes, one for Amazon search results), we can start building our own API endpoints that will be consumed by our user interface.
With Zero server, creating a Node.js server script is incredibly easy: if you create /mypath/myscript.js, for example, you will be able to run it from http://localhost:3000/mypath/myscript.
So we will create two API endpoints that will use the two library scripts we just created. For the record, here’s what my project folder looks like:
My low-level library files are placed in /lib/, my API scripts are placed in /api/ — Neat and clean, easy to understand.
Ignore the other files and folders for now.
As you can see, both files have a pretty similar behaviour.
We have a get/load function that loads a value from the database if it exists, calls the library if it doesn’t, saves the value into the database and returns the result. The goal of the database is to reduce the calls to the source website as much as possible.
Facebook, for example, has a pretty strict rate limit so if you didn’t use a database to cache the results, you would get errors after only a few calls.
The Amazon script handles the query in the URL as well.
For the database I used keyv, which is a key/value storage library that I really recommend, since it’s very easy to use, has a single purpose and does it extremely well. For this project I used an SQLite database but it supports a lot of other storage strategies.
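The caching pattern both endpoints share can be sketched like this (`getOrLoad` is my name for it; the commented wiring assumes keyv’s get/set interface and hypothetical lib paths):

```javascript
// Generic get/load helper: read from the cache, fall back to the loader,
// then store the result. "cache" only needs async get/set (keyv's interface).
async function getOrLoad(cache, key, loader) {
  let value = await cache.get(key);
  if (value === undefined) {
    value = await loader();
    await cache.set(key, value);
  }
  return value;
}

// Sketch of /api/amazon-search.js wiring (illustrative names):
// const Keyv = require('keyv');
// const cache = new Keyv('sqlite://database.sqlite');
// module.exports = async (req, res) => {
//   const products = await getOrLoad(cache, 'amazon:' + req.query.q,
//     () => require('../lib/amazon')(req.query.q));
//   res.json(products);
// };
```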
Building the UI
All aboard the Rant Express
Again, some things are just magic with Zero server. For example, if you want to create a React front-end for your application, you just have to create a .jsx file. Then re-launch the server. Then wait for it to re-build. Then wait for it to fetch the dependencies. Then manually add the dependencies because it crashed. Then launch it ag… Oh, it’s working.
…Then you launch the URL in your browser. Then it fetches the dependencies again ohgodwhy. Ok, it says it’s ready. Oh, it says it again. Let’s move the mouse on the wind… It’s rebuilding.
I’m exaggerating a bit, but not that much. When it’s ready, it could be a real game-changer, because it simplifies some of the processes involved in web development so much, but it’s just too unstable for me to recommend it this early.
Choice of weapons
Rant over, for the UI I chose Bulma which is a pretty popular UI kit, and more specifically the react-bulma-components package by couds:
Side note: I spent too much time trying to figure out how to implement FontAwesome in React the way FontAwesome recommends.
The real solution here is react-icons, which just works the way you would expect.
To consume the API from the front-end, all I had to do was use the native fetch function, like so:
const resp = await fetch('/api/facebook-likes');
const likes = await resp.json();
I believe there isn’t much more to comment on: you will have to try the result for yourself! The source code is available on GitHub:
Thank you for reading, and feel free to comment if you have any questions!