Github Events Analysis with Neo4j

On July 22, Github announced the 3rd Annual Github Data Challenge, presenting the multiple sources of data available.

This sounded like a good opportunity to take their available data, import it into Neo4j, and have fun analyzing data that fits naturally in a graph.

As I work mainly offline or behind military proxies that do not permit me to use the REST API, I decided to go for the Github Archive, available here. You can download JSON files representing Github Events on a daily/hourly basis.

Be aware that hourly files represent approximately 8,000 events!

Each file contains lines of Github Events encoded in JSON. There are a bunch of EventTypes, like ForkEvent, IssueCommentEvent, ReleaseEvent, DeleteEvent, WatchEvent, and so on…

You can find the complete list of EventTypes and Payloads here.

For my part, I will use 4 EventTypes: ForkEvent, PushEvent, PullRequestEvent & IssueCommentEvent.

This may not look like much, but if, like me, you import these 4 event types for a period of 1 month, you'll end up with about 2 million nodes created and 4 times more relationships.


Importing the data

I've created a simple library in PHP that parses these data files and imports the Events into the database.

Each Event is handled by a dedicated EventTypeLoader that produces the Cypher query for the import.

The library is available here: https://github.com/kwattro/gh4j

I would like to make clear that the way events are imported is specific to how I intend to manipulate the data; you can always modify the code to reflect your needs.

Also, be aware that it is not compatible with the REST API events, as the payloads are totally different.

Information common to all Events

There is a common Payload for all Events: the User creating the event and the EventType. Each Event will create a Node with the name of the event as a Neo4j label.
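As a hedged sketch, the common part of a generated import query could look like the following; the property names (`name`, `time`) are assumptions, only the DO relationship and the event-name label come from the actual queries shown later in this post.

```cypher
// Sketch: the User node is matched or created, the event node is created
// with the EventType as its label, and linked to its author with DO.
MERGE (u:User {name: "octocat"})
CREATE (e:PushEvent {time: 1406012345})
CREATE (u)-[:DO]->(e)
```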

The PullRequestEvent

By examining the PullRequestEvent Payload, you can recreate the complete schema, starting from the user creating the PullRequest and ending with the owner of the Repository on which the PR is opened.

You can also tell, when the PR is closed, whether it was a MERGE/CLOSE or a simple CLOSE by looking at the merged_at key.
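As a sketch of that check (the `state` value, `id` property and the exact loader behaviour are assumptions; PR_MERGE is a relationship used in the queries further down), a non-null merged_at could translate into:

```cypher
// Hypothetical fragment for a close event whose payload has merged_at set;
// a null merged_at would instead mean a plain close without PR_MERGE.
MATCH (pr:PullRequest {id: 42})
SET pr.state = 'merged'
CREATE (e:PullRequestEvent {time: 1406012345})
CREATE (e)-[:PR_MERGE]->(pr)
```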

The PullRequestEventLoader will import the payload into the following schema:

PullRequestEventScheme

The Fork Event

Same here: the ForkEvent Payload gives you some information, but not that much; e.g. it does not give you the id of the newly created Repository (a Fork is also a repository).

So, in order to have a unique reference for matching/creating/retrieving forks, I decided to use the html url of the Fork/Repository as an identifier; this information is available in the payload.
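The uniqueness trick can be sketched with a MERGE on the url; the `url` and `name` property names are assumptions, not necessarily the library's exact output:

```cypher
// MERGE either finds the existing Fork node by its html url or creates it,
// so repeated events about the same fork never duplicate the node.
MERGE (f:Fork {url: "https://github.com/someuser/somerepo"})
ON CREATE SET f.name = "somerepo"
```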

Loading the ForkEvent will produce the following schema:

ForkEventScheme

The PushEvent

Like the ForkEvent, the PushEvent does not provide much useful information, but you can still build a little schema.
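A hypothetical import sketch for a PushEvent could look like this; the DO and BRANCH_OF relationships appear in the queries further down, while PUSH_TO and all property names are assumptions for illustration only:

```cypher
// Sketch: match or create the pusher, repository and branch,
// then record the push event against the branch.
MERGE (u:User {name: "octocat"})
MERGE (r:Repository {id: 1234, name: "example-repo"})
MERGE (b:Branch {name: "master"})
MERGE (b)-[:BRANCH_OF]->(r)
CREATE (e:PushEvent {time: 1406012345})
CREATE (u)-[:DO]->(e)
CREATE (e)-[:PUSH_TO]->(b)
```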

This will produce the following Neo4j Graph Schema:

The IssueCommentEvent

And the last one: the IssueCommentEvent. There is a check in the code of the IssueCommentEventLoader that detects whether the Issue on which the comment is made is related to a PullRequest; when true, it creates a BOUND_TO_PR relationship from the Issue to the PR.
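The conditional step can be sketched as follows; matching both nodes on an `id` property is an assumption, BOUND_TO_PR is the relationship name from the loader:

```cypher
// Sketch: when the issue payload points at a pull request,
// bind the Issue node to its PullRequest node.
MATCH (issue:Issue {id: 99})
MATCH (pr:PullRequest {id: 99})
MERGE (issue)-[:BOUND_TO_PR]->(pr)
```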

Which gives the following schema:


Let’s play !

OK, the boring part of explaining how the data is imported is done. If you've imported data for just 1 or 2 days, you can already end up with a really interesting graph.

Now it's time to leverage the Neo4j goodness called Cypher and start having fun (meaning: querying the graph) ☺

Basic queries

Who did the most events?

MATCH (u:User)
MATCH (u)-[:DO]->(e)
RETURN u, count(e) as events
ORDER BY events DESC
LIMIT 1
+------------------------------------------------+
| u | events |
+------------------------------------------------+
| Node[268470]{name:"openstack-gerrit"} | 61 |
+------------------------------------------------+
1 row
115 ms

Which repository has been the most touched?

MATCH (repo:Repository)
MATCH (repo)<-[]-(touch)
RETURN repo, count(touch) as touchs
ORDER BY touchs DESC
LIMIT 1
+--------------------------------------------------------------+
| repo | touchs |
+--------------------------------------------------------------+
| Node[272039]{name:"robot_model-release",id:8718679} | 68 |
+--------------------------------------------------------------+
1 row
98 ms

Which repository has been the most forked?

MATCH (repo:Repository)
MATCH (repo)<-[:FORK_OF]-(f)<-[:FORK]-(event)
RETURN repo, count(event) as forks
ORDER BY forks DESC
LIMIT 1;
+-----------------------------------------------------+
| repo | forks |
+-----------------------------------------------------+
| Node[269686]{name:"jekyll-now",id:16599031} | 4 |
+-----------------------------------------------------+
1 row
63 ms

Which repository has the most merged PR’s?

MATCH (repo:Repository)
MATCH (repo)<-[:PR_ON_REPO]-()<-[:PR_MERGE]-(mergeEvent)
RETURN repo.name, count(mergeEvent) as merges
ORDER BY merges DESC
LIMIT 1;
+---------------------------+
| repo.name | merges |
+---------------------------+
| "trail_reporter" | 6 |
+---------------------------+
1 row
46 ms

As you can see, it is simple with Cypher: you just need to draw the path you want to return on paper or on a whiteboard.

OK, now let's go a bit further!

Interesting queries

How many Forks result in an opened PR?

You just need to analyze the nodes and connections between the ForkEvent and the PullRequestEvent and translate them into Cypher.

Here is a visual representation of the complete Fork -> PR_OPEN scheme:

MATCH (u:User)
MATCH p=(u)-[:DO]->(forkEvent)-[:FORK]->(fork)
-[:FORK_OF]->(repo)<-[:PR_ON_REPO]-(pr)
-[:PR_OPEN]-(pullRequestEvent)<-[:DO]-(u2)<-[:OWNED_BY]-
(f2)<-[:BRANCH_OF]-(br)<-[:FROM_BRANCH]-(pr2)
WHERE u = u2 AND fork = f2 AND pr = pr2
RETURN count(p);
+----------+
| count(p) |
+----------+
| 2 |
+----------+
1 row
139 ms

What is the average time, in seconds, between a Fork being done and a PR being opened?

Not much changes from the previous query; we just use the avg function on the difference between the PR opening time and the Fork time:

MATCH (fork:Fork)
MATCH p=(user)-[:DO]->(fe)-[:FORK]->(fork)-[:FORK_OF]->(repo)
<-[:PR_ON_REPO]-(pr)<-[:PR_OPEN]-(pre)-[:DO]-(user)
-[:DO]->()-[:FORK]->(fork)
WITH p, (pre.time - fe.time) as diffs
RETURN count(p), avg(diffs);
+-------------------------------+
| count(p) | avg(diffs) |
+-------------------------------+
| 6 | 133.33333333333334 |
+-------------------------------+
1 row
39 ms

We can extend the query to know the average time until the PR is merged.

What is the average time, in seconds, between a Fork and the PR being merged?

MATCH (pre:PullRequestEvent)
MATCH (user)-[:DO]->(pre)-[:PR_MERGE]->(pr)-[:PR_ON_REPO]->(repo)
MATCH p=(user)-[:DO]->(forkEvent)-[:FORK]->(fork)-[:FORK_OF]->(repo)
WITH (pre.time - forkEvent.time) as diff
RETURN avg(diff);
+-----------+
| avg(diff) |
+-----------+
| 1152.5 |
+-----------+
1 row
41 ms

What is the average number of comments on a PR before the PR is merged?

MATCH p=(ice:IssueCommentEvent)-[:ISSUE_COMMENT]->(comment:IssueComment)
-[:COMMENT_ON]->(issue:Issue)-[:BOUND_TO_PR]->(pr:PullRequest)
<-[:PR_MERGE]-(pre:PullRequestEvent)
WHERE ice.time <= pre.time
WITH pr, count(comment) as comments
RETURN avg(comments)
+--------------------+
| avg(comments)      |
+--------------------+
| 2.0440251572327046 |
+--------------------+
1 row
713 ms

Which User has the most merged PRs on Repositories not owned by him?

MATCH (u:User)-[r:DO]->(fe:PullRequestEvent)-[:PR_OPEN]->(pr:PullRequest {state:'merged'})
-[:PR_ON_REPO]-(repo:Repository)-[:OWNED_BY]->(u2:User)
WHERE NOT u = u2
RETURN u.name, count(r) as prs
ORDER BY prs DESC
LIMIT 1
+---------------------+
| u.name      | prs   |
+---------------------+
| "adamralph" | 14    |
+---------------------+
1 row
365 ms

Relate together Users having merged PRs on the same repositories; this could serve as a Follow Recommendation Engine.

This will also create a weight property on the relationship, incremented each time a relation between the 2 users is found:

MATCH p=(u:User)-[:DO]-(e:PullRequestEvent)-->(pr:PullRequest {state:'merged'})
-[:PR_ON_REPO]->(r:Repository)<-[:PR_ON_REPO]-(pr2:PullRequest {state:'merged'})
--(e2:PullRequestEvent)<-[:DO]-(u2:User)
WHERE NOT u = u2
WITH nodes(p) as coll
WITH head(coll) as st, last(coll) as end
MERGE (st)-[r:HAVE_WORKED_ON_SAME_REPO]-(end)
ON MATCH SET r.w = (r.w) + 1
ON CREATE SET r.w = 1
Created 1122 relationships, returned 0 rows
2334 ms