My baby steps with Go — Building a basic web crawler with Neo4j integration

Mahjoub Saifeddine
Published in CodeShake · 6 min read · Jun 15, 2020

My experience writing Go as a Java developer

Well, I’m a Java developer who recently started learning Go, and I’m enjoying most of its features. For experimental purposes, I decided to create a small web crawler. Why a web crawler? Well, because it’s complex enough to provide some good examples of parsing text, handling events, using the standard library and relying on 3rd-party packages.

The Goal:

The goal of this post is to create a basic web crawler that captures your site structure by collecting all of its internal links and storing them in a Neo4j database.

So the idea is very simple and it follows these steps:

  1. Send a GET request to a given URL
  2. Parse the response
  3. Extract all internal links from the response
  4. Store the extracted links in Neo4j
  5. Repeat from the 1st step with each link until the whole site has been explored

Finally, we’ll use Neo4j Browser to display the output graph.

Prerequisites:

This post is accessible to Go beginners (just like me). I’ll provide a helpful link each time a new concept is introduced. For Neo4j, basic knowledge of graph-oriented databases would be helpful. I’m assuming that you have both Go and Neo4j installed on your local machine. If that’s not the case, please follow the installation instructions on the Go and Neo4j websites.

Creating the crawler:

Now we have all we need to start coding. Let’s start.

The main function:

Go is a compiled language, but running a program feels almost as simple as running a script. Basically, all you need is a ‘main’ package and a ‘main’ function.
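Here’s a minimal sketch of such a ‘main.go’ (the printed message is just a placeholder):

package main

import "fmt"

// Every Go executable starts from the main function of the main package.
func main() {
    fmt.Println("Hello, crawler!")
}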

Now, let’s run it.

go run main.go

Alternatively, you can compile the file and run the resulting binary manually.

go build main.go
./main

Retrieving a single page from the internet:

Enough with the basics, it’s time to write some (not so) complicated code that helps us retrieve a specific page from the internet.
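Here’s a sketch of that first version, with a hard-coded URL and a struct named ‘responseWriter’ (the details may differ, but the shape is the same):

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

// responseWriter implements the io.Writer interface: io.Copy will use it
// as the destination for the response body.
type responseWriter struct {
    content []byte
}

// Write appends the received bytes to the internal buffer.
func (w *responseWriter) Write(p []byte) (int, error) {
    w.content = append(w.content, p...)
    return len(p), nil
}

func main() {
    // Multiple variable assignment: http.Get returns a response and an error.
    resp, err := http.Get("http://www.sfeir.com")
    if err != nil {
        log.Fatal(err)
    }

    // Copy the response body into our writer.
    w := &responseWriter{}
    if _, err := io.Copy(w, resp.Body); err != nil {
        log.Fatal(err)
    }
    resp.Body.Close()

    fmt.Println(string(w.content))
}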

I started by declaring the main package and importing the required packages. Next, I declared a struct that will implement the ‘Writer’ interface. In the main function, you’re going to notice multiple variable assignment. Basically, ‘http.Get’ returns the response along with an error value if anything went wrong. This is a common way of handling errors in a Go program.

If you take a look at the documentation, you’ll find that the ‘Writer’ interface has a single method. In order to implement this interface, we need to add a receiver function to our ‘responseWriter’ struct that matches the ‘Write’ method signature. If you’re coming from Java, you would probably expect an ‘implements Writer’ or similar syntax. Well, this is not the case for Go, since interface implementation is implicit.

Finally, I used ‘io.Copy’ to write the response body into our ‘responseWriter’ variable.

The next step is to reorganize our code so that it takes a website URL as an argument, in preparation for extracting its links. After some refactoring, we’ll have two files.

This is ‘main.go’:
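Something along these lines, with the URL read from the command-line arguments:

package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    if len(os.Args) < 2 {
        log.Fatal("usage: go run main.go retreiver.go <url>")
    }

    // Delegate the HTTP work to the retrieve function from retreiver.go.
    content, err := retrieve(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(content)
}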

And the ‘retreiver.go’:
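A sketch of it, where ‘retrieve’ returns the raw HTML of the page:

package main

import (
    "io"
    "net/http"
)

// responseWriter implements io.Writer so io.Copy can fill it with the body.
type responseWriter struct {
    content []byte
}

func (w *responseWriter) Write(p []byte) (int, error) {
    w.content = append(w.content, p...)
    return len(p), nil
}

// retrieve opens a connection to the remote host and returns the HTML content.
func retrieve(url string) (string, error) {
    resp, err := http.Get(url)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    w := &responseWriter{}
    if _, err := io.Copy(w, resp.Body); err != nil {
        return "", err
    }
    return string(w.content), nil
}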

We can run this against a simple website:

go run main.go retreiver.go http://www.sfeir.com

Now we’ve taken our first step towards creating the crawler. It’s able to boot, parse a given URL, open a connection to the right remote host, and retrieve the HTML content.

Getting all hyperlinks for a single page

Now, this is the part where we need to extract all links from the HTML document. Unfortunately, there are no helpers for manipulating HTML in the Go standard library. So, we must look for a 3rd-party package. Let’s consider ‘goquery’. As you might guess, it’s similar to ‘jQuery’, but for Go.

You can easily get the ‘goquery’ package by running the following command:

go get github.com/PuerkitoBio/goquery

I changed our ‘retrieve’ function to return the list of links found on a given web page.
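Here’s a sketch of the new ‘retreiver.go’. The helper names ‘isInternal’ and ‘unique’ are assumptions of mine, but the logic follows the description below:

package main

import (
    "net/http"
    "net/url"

    "github.com/PuerkitoBio/goquery"
)

// retrieve fetches the page at the given URL and returns the internal links
// it contains, without duplicates.
func retrieve(pageURL string) ([]string, error) {
    resp, err := http.Get(pageURL)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    // goquery parses the response body directly.
    doc, err := goquery.NewDocumentFromReader(resp.Body)
    if err != nil {
        return nil, err
    }

    base, err := url.Parse(pageURL)
    if err != nil {
        return nil, err
    }

    var links []string
    doc.Find("a[href]").Each(func(_ int, s *goquery.Selection) {
        href, _ := s.Attr("href")
        target, err := url.Parse(href)
        if err != nil {
            return // skip malformed links
        }
        abs := base.ResolveReference(target) // resolve relative links
        if isInternal(base, abs) {
            links = append(links, abs.String())
        }
    })
    return unique(links), nil
}

// isInternal reports whether the link points to a page of the same site.
func isInternal(base, link *url.URL) bool {
    return link.Host == base.Host
}

// unique makes sure the list does not contain duplicated links.
func unique(links []string) []string {
    seen := make(map[string]bool)
    var result []string
    for _, l := range links {
        if !seen[l] {
            seen[l] = true
            result = append(result, l)
        }
    }
    return result
}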

As you can see, our ‘retrieve’ function has significantly improved. I removed the ‘responseWriter’ struct because it’s no longer needed: ‘goquery’ can read the response body directly.

I also added two helper functions. The first one detects whether the URL points to an internal page. The second one ensures that the list does not contain any duplicated links.

Again, we can run this against a simple website:

go run main.go retreiver.go http://www.sfeir.com

Getting all hyperlinks for the entire site

Yeah! We’ve made huge progress. The next thing we’re going to see is how to build on the ‘retrieve’ function in order to get links from the other pages too. So, I’m taking the recursive approach. We’ll create another function called ‘crawl’, and this function will call itself recursively with each link returned by the ‘retrieve’ function. We’ll also need to keep track of the visited pages to avoid visiting the same page multiple times.

Let’s check this out:
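A sketch of the ‘crawl’ function, added to ‘retreiver.go’ next to ‘retrieve’ (this also means importing ‘fmt’ and ‘log’ there), with a map keeping track of the visited pages:

// visited keeps track of the pages we have already crawled.
var visited = make(map[string]bool)

// crawl fetches the given page, displays it, and recursively follows
// every internal link returned by retrieve.
func crawl(url string) {
    if visited[url] {
        return
    }
    visited[url] = true

    // For now, simply display the fetched URL.
    fmt.Println(url)

    links, err := retrieve(url)
    if err != nil {
        log.Println(err)
        return
    }
    for _, link := range links {
        crawl(link)
    }
}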

Now we can call ‘crawl’ instead of ‘retrieve’ in ‘main.go’. The code will be the following:
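Something like this:

package main

import (
    "log"
    "os"
)

func main() {
    if len(os.Args) < 2 {
        log.Fatal("usage: go run main.go retreiver.go <url>")
    }

    // crawl explores the whole site starting from the given URL.
    crawl(os.Args[1])
}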

Let’s run our program:

go run main.go retreiver.go http://www.sfeir.com

Implementing events listeners through Channels

In the previous section, we saw that the fetched URL is displayed inside the ‘crawl’ function. This is not the best solution, especially when you need to do more than just print to the screen. To fix this, we’ll implement an event listener for fetched URLs using channels.

Let’s have a look at this:
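Here’s a sketch of the event-handling part of ‘retreiver.go’ at this stage. The names (‘retriever’, ‘newRetriever’, ‘subscribe’, ‘publish’) are assumptions, but the mechanics match the description below; the previous global ‘visited’ map and standalone ‘crawl’ function are replaced by this struct and its methods, while ‘retrieve’ stays as before:

// link is the event payload: it contains the source page and the target page.
type link struct {
    source string
    target string
}

// retriever keeps the listener channels and the set of visited pages.
type retriever struct {
    listeners []chan link
    visited   map[string]bool
}

// newRetriever creates a retriever with no listeners yet.
func newRetriever() *retriever {
    return &retriever{visited: make(map[string]bool)}
}

// subscribe registers a new listener and returns the channel it will receive on.
func (r *retriever) subscribe() chan link {
    ch := make(chan link)
    r.listeners = append(r.listeners, ch)
    return ch
}

// publish sends the event to every listener. The anonymous function runs
// in its own goroutine, so the crawl is never blocked by a slow listener.
func (r *retriever) publish(l link) {
    for _, ch := range r.listeners {
        go func(c chan link) {
            c <- l
        }(ch)
    }
}

// crawl recursively visits internal pages and publishes a link event
// for every hyperlink found by retrieve.
func (r *retriever) crawl(source string) {
    if r.visited[source] {
        return
    }
    r.visited[source] = true

    links, err := retrieve(source)
    if err != nil {
        log.Println(err)
        return
    }
    for _, target := range links {
        r.publish(link{source: source, target: target})
        r.crawl(target)
    }
}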

As you can see, we have three additional functions to help us manage the events for a given ‘retriever’.

For this code, I used the ‘go’ keyword. Basically, writing ‘go foo()’ will make the ‘foo’ function run asynchronously in its own goroutine. In our case, we’re using ‘go’ with an anonymous function to send the event parameter (the link) to all listeners through channels.

Note: I’ve set the channel data type to ‘link’, a struct that contains the source and the target page.

Now let’s have a look at the ‘main’ function:
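A sketch, reusing the hypothetical names from above:

package main

import (
    "fmt"
    "log"
    "os"
    "time"
)

func main() {
    if len(os.Args) < 2 {
        log.Fatal("usage: go run main.go retreiver.go <url>")
    }

    r := newRetriever()
    newLink := r.subscribe()

    // Receive the link events sent by crawl, asynchronously.
    go func() {
        for l := range newLink {
            fmt.Println(l.source, "->", l.target)
        }
    }()

    r.crawl(os.Args[1])

    // Leave the listener a moment to drain pending events; a real program
    // would synchronize with a sync.WaitGroup instead.
    time.Sleep(time.Second)
}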

Again, I used the ‘go’ keyword, this time to receive the events sent by the ‘crawl’ function.

If we run our program now we should see all internal links for the given website.

That’s it for the crawler.

Neo4j Integration

Now that we’re done with the crawler, let’s get to the Neo4j part. The first thing we’re going to do is to install the driver.

go get github.com/neo4j/neo4j-go-driver/neo4j

After installing the driver, we need to create some basic functions that will allow us to work with Neo4j.

Let’s create a new file called ‘neo4j.go’:
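Here’s a sketch of what it could contain, assuming the 1.x driver API and the hypothetical function names ‘connect’, ‘saveLink’ and ‘linkNodes’. The URI and credentials are placeholders for a local instance:

package main

import "github.com/neo4j/neo4j-go-driver/neo4j"

// connect opens a driver and a write session against a local Neo4j instance.
// Adapt the URI and the credentials to your own configuration.
func connect() (neo4j.Driver, neo4j.Session, error) {
    driver, err := neo4j.NewDriver("bolt://localhost:7687",
        neo4j.BasicAuth("neo4j", "password", ""))
    if err != nil {
        return nil, nil, err
    }
    session, err := driver.Session(neo4j.AccessModeWrite)
    if err != nil {
        return nil, nil, err
    }
    return driver, session, nil
}

// saveLink creates a WebLink node for the given link event.
func saveLink(session neo4j.Session, l link) error {
    _, err := session.Run(
        "CREATE (:WebLink{source: $source, target: $target})",
        map[string]interface{}{"source": l.source, "target": l.target})
    return err
}

// linkNodes creates the point_to relationship between matching nodes.
func linkNodes(session neo4j.Session) error {
    _, err := session.Run(
        "MATCH (a:WebLink),(b:WebLink) WHERE a.target = b.source CREATE (a)-[r:point_to]->(b)",
        nil)
    return err
}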

Basically, we have three functions responsible for initiating the connection to Neo4j and running basic queries.

Note: You might need to change the Neo4j configuration to work with your local instance.

To create a ‘WebLink’ node we simply need to run the following query:

CREATE (:WebLink{source: "http://www.sfeir.com/", target: "http://www.sfeir.com/en/services"})

Once the nodes are created, we need to create the relationships between them by running the following query:

MATCH (a:WebLink),(b:WebLink) 
WHERE a.target = b.source
CREATE (a)-[r:point_to]->(b)

Now, let’s update our ‘main’ function.
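A sketch of the final ‘main.go’, following the hypothetical names used so far:

package main

import (
    "log"
    "os"
    "time"
)

func main() {
    if len(os.Args) < 2 {
        log.Fatal("usage: go run main.go retreiver.go neo4j.go <url>")
    }

    driver, session, err := connect()
    if err != nil {
        log.Fatal(err)
    }
    // defer postpones these calls until main returns.
    defer driver.Close()
    defer session.Close()

    r := newRetriever()
    newLink := r.subscribe()

    // Insert a WebLink node for every link event sent by crawl.
    go func() {
        for l := range newLink {
            if err := saveLink(session, l); err != nil {
                log.Println(err)
            }
        }
    }()

    r.crawl(os.Args[1])

    // Give the listener a moment to finish inserting pending events;
    // a real program would synchronize with a sync.WaitGroup instead.
    time.Sleep(2 * time.Second)

    // Finally, create the point_to relationships between the nodes.
    if err := linkNodes(session); err != nil {
        log.Fatal(err)
    }
}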

Using the three functions declared in ‘neo4j.go’, our program will initiate a connection to Neo4j, subscribe to the ‘newLink’ event to insert nodes, and finally create the relationships between the nodes.

I used the ‘defer’ keyword to defer the execution of a function until the surrounding ‘main’ function returns.

Let’s run this one last time:

go run main.go retreiver.go neo4j.go http://www.sfeir.com

To check the result in Neo4j, you can run the following query in Neo4j Browser:

MATCH (n:WebLink) RETURN count(n) AS count

Or this query to display all nodes:

MATCH (n:WebLink) RETURN n

Et voilà! Here’s the resulting graph after running the last query.

It’s pretty, isn’t it?

Conclusion

Throughout this post, we explored a lot of features of the Go programming language, including multiple variable assignment, implicit interface implementation, channels and goroutines. We also used the standard library as well as some 3rd-party libraries. Thank you for reading. The source code is available on my GitHub.
