Seeder
an app for smarter referencing + mindmapping
I started building Seeder in November 2013, as part of a university project in co-operation with Manchester Metropolitan University.
The premise was to build a reference management/discovery app that had the conveniences of mind map layouts. We also wanted to encourage collaboration and conversation about references, contributing to the peer review process.
Proof of concept
Some screens of what I built in late 2013 as part of a “proof of concept” milestone.
On the whole I kept the interface design the same since the POC; the main changes came when I migrated the system from my own CSS to a CSS framework. It had become apparent that I would be programming the site myself, and using a CSS framework meant I could build out the pages of the site faster. I moved the code from my ham-fisted CSS to Semantic UI.
Advances from Proof of Concept

The graph renderer is the biggest difference between the proof of concept and the final product. The POC used a large, wide-featured library called SigmaJS. It was built for data visualisation rather than graph creation and editing, but despite this it was easy to modify for our purposes. Later I found a lighter library called Arbor, and I decided to move the rendering over to it because of its simple public interface and exposed renderer. Each instance of the library uses a renderer specified by a JavaScript function. Sigma’s renderer was universal, and would have required more hacking for our purposes. The trade-off is that with Arbor’s simpler renderer we lost effects like fisheye, but these can be implemented later.
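To make the “exposed renderer” point concrete, here is a rough sketch of the kind of renderer object Arbor accepts: an `init`/`redraw` pair that Arbor calls with the particle system and on every frame. The drawing style is our own invention, not Seeder’s actual code.

```javascript
// Sketch of a custom Arbor renderer. Arbor calls init() once with the
// particle system, then redraw() on each animation frame; eachEdge/eachNode
// hand us screen coordinates for every element.
function makeRenderer(canvas) {
  var ctx = canvas.getContext("2d");
  var sys = null;
  return {
    init: function (system) {
      sys = system; // keep a handle on the particle system
      sys.screenSize(canvas.width, canvas.height);
    },
    redraw: function () {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      // draw each edge as a straight line between its endpoints
      sys.eachEdge(function (edge, pt1, pt2) {
        ctx.strokeStyle = "#999";
        ctx.beginPath();
        ctx.moveTo(pt1.x, pt1.y);
        ctx.lineTo(pt2.x, pt2.y);
        ctx.stroke();
      });
      // draw each node as a small circle
      sys.eachNode(function (node, pt) {
        ctx.fillStyle = "#4682b4";
        ctx.beginPath();
        ctx.arc(pt.x, pt.y, 6, 0, Math.PI * 2);
        ctx.fill();
      });
    }
  };
}

// In the browser this would be hooked up roughly like:
//   var sys = arbor.ParticleSystem(1000, 600, 0.5);
//   sys.renderer = makeRenderer(document.getElementById("viewport"));
```

Because the whole render path lives in one small object, swapping in effects like fisheye later means changing only `redraw`.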
As well as decisions on the front end, I also had various back-end architecture decisions to make. Data storage for the website was one of the first implementation decisions: persistent storage would be needed to hold graph data so that it could be saved and shared with other users. The final choice was MongoDB. SQL was not considered, because I would either have to mutate the JavaScript data structures in order to store them in SQL tables, or store graph data as “stringified” JSON. MongoDB stores data as JSON-like documents instead of tables, which meant I could be flexible about the data structure and retrieve data in very much the same form it was stored in.
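As an illustration of why this mattered, a saved graph can go into MongoDB in essentially the same shape the builder holds it in memory. The field names below are hypothetical, not Seeder’s actual schema.

```javascript
// A hypothetical shape for a saved graph document (field names are ours,
// not Seeder's real schema). Because MongoDB stores JSON-like documents,
// the in-memory structure can be persisted almost verbatim.
var graphDoc = {
  title: "Machine Learning survey",
  owner: "amman",
  published: false,
  nodes: [
    { id: "n1", label: "Bishop 2006", type: "reference" },
    { id: "n2", label: "Deep Learning", type: "topic" }
  ],
  edges: [
    { from: "n2", to: "n1" } // topic -> supporting reference
  ]
};

// With the Node MongoDB driver this could be saved in one call, e.g.:
//   db.collection("graphs").insert(graphDoc, callback);
```

Doing the same in SQL would mean either a nodes table plus an edges table with joins on every load, or stuffing the whole thing into a text column.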

Once the database system was implemented, the interaction between it and the user had to be defined. To save a graph a user was editing I had two options:
1. Send an HTTP POST request to the server each time the user saved
2. Use socket connections to stream the data to the server, then pass it to the database
The socket route was chosen because it allowed for a constant connection: saving the graph didn’t require a POST request to the server each time, and the user could save at any point while remaining in the build interface. Socket.IO, a library for JavaScript web-sockets, turned out to be a good choice. A big advantage was that it was very easy to notify the client of server responses in real time, for example when confirming that a graph had been published (and pushed to the database successfully).
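The server side of that flow can be sketched like this. The event names (`graph:save`, `graph:saved`) and the acknowledgement payload are illustrative, not Seeder’s actual protocol; `socket` is a connected Socket.IO socket and `graphs` a MongoDB collection.

```javascript
// Sketch of the save-over-sockets approach: the client emits the graph,
// the server writes it to MongoDB, then pushes a confirmation straight
// back down the same open connection.
function wireSaveHandler(socket, graphs) {
  socket.on("graph:save", function (graph) {
    graphs.update(
      { _id: graph._id },
      { $set: { nodes: graph.nodes, edges: graph.edges } },
      { upsert: true },
      function (err) {
        // real-time acknowledgement -- the client can show "saved"
        // without ever leaving the build interface
        socket.emit("graph:saved", { ok: !err });
      }
    );
  });
}
```

The equivalent HTTP design would need a new request/response cycle per save; here the connection is already open in both directions.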

After the proof of concept stage I needed to begin thinking about how to deploy the app to a public server. Previously, as part of the course, we had used Azure, which could run NodeJS apps, albeit with some tinkering. Research showed that I could get an Amazon EC2 Linux instance with a public IP for free. This was preferable, as it could be accessed simply over SSH and required much less ceremony than the Azure equivalent.
Azure, however, did offer a feature that automatically redeployed the application every time a specified GitHub repository was pushed to. This was the main feature drawing me toward Azure, so I decided to clone it: I wrote a server that used GitHub’s web-hooks API to redeploy our code on the development server. By implementing this myself I could filter commits and only redeploy to the server if my commit message was something like:
git commit -m "added some feature [redeploy]"
This saved a lot of time during development, and meant it was easy to push changes to the live site so the client could see them. It also meant I could push to the remote repository without fear of destroying the live server.
The code and documentation for the web-hook server can be found at : https://github.com/ammanvedi/hooker
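The core of the filtering idea is small. This is not the actual code from the hooker repo, just a sketch of the check: GitHub’s push web-hook payload contains a `commits` array, each entry with a `message` field, so redeploys can be gated on a marker string.

```javascript
// Sketch of the commit-message filter (not the real hooker implementation).
// GitHub's push web-hook delivers a JSON payload with a `commits` array;
// we only trigger a redeploy if some commit message carries the marker.
function shouldRedeploy(payload) {
  return (payload.commits || []).some(function (commit) {
    return commit.message.indexOf("[redeploy]") !== -1;
  });
}
```

Ordinary pushes then land safely in the repository, and only explicitly flagged commits touch the live server.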
A lot of effort was spent creating the graph builder; alongside this I also created the other pages of the site (home, help, docs and explore), which would be necessary to launch it ready for users.
The documentation was auto-generated using a command line tool called Doxx, which outputs to HTML and has simple options for templating.
A problem that took a long time to resolve was the search facility. Initially this was implemented using Google’s custom search API, which allowed us to gather data from a finite set of domains. Although the clients made it clear that the site should provide a selection of research articles in search results, I had two main problems:
1. Some APIs blocked my GET requests due to the “Same-Origin Policy”, which also ruled out HTML data scraping
2. Only a small selection of APIs provide research article data covering papers from a range of disciplines
The data could be gathered from these sources separately and combined, but that would mean making a lot of separate HTTP requests and relying more on other services, so I thought it better to find a more complete repository of articles.
The two main contenders became Microsoft Academic Research (with its API hosted through Azure Data Market) and Mendeley’s API. On closer investigation, the Microsoft API did not provide any way to do a traditional search by keyword/phrase, which was disappointing considering the size of their datasets. Mendeley, on the other hand, provided a public API with academic article search functionality, as well as paper metadata.

The Mendeley API will be integrated into Seeder once their system has been fully migrated to OAuth 2 (they are no longer registering OAuth 1.0 apps), after which I have been informed they will allow cross-domain requests for registered applications. This will solve both problems stated above. I have implemented a JS interface to the API, ready for when their servers are migrated (early May); however, Mendeley have granted us early access to the API, so this date will be moved forward.
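A small piece of that interface can be sketched as a URL builder for catalog search. The endpoint path and parameter names here are assumptions based on Mendeley’s OAuth 2 API and should be checked against their documentation before use.

```javascript
// Sketch of a thin client-side helper for Mendeley article search.
// The endpoint and parameters are assumptions (Mendeley's OAuth 2 API),
// not guaranteed to match their final documentation.
function buildSearchUrl(query, limit) {
  return "https://api.mendeley.com/search/catalog" +
    "?query=" + encodeURIComponent(query) +
    "&limit=" + (limit || 20);
}

// Once cross-domain requests are allowed for registered apps, the browser
// could call it with something like:
//   xhr.open("GET", buildSearchUrl("graph theory"));
//   xhr.setRequestHeader("Authorization", "Bearer " + accessToken);
```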
The code and documentation for the Mendeley JS interface can be found at : https://github.com/ammanvedi/MendeleyJSAnonymous
At the moment a single server (Amazon EC2) is running the app. If the app’s user base were to grow, this would not be sufficient; I will have to think about either adding a load-balancing proxy (such as NGINX) or running Apache alongside Node.
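For the proxy option, the NGINX side could look something like the fragment below. The port and upstream name are illustrative; the upgrade headers matter because Seeder’s save flow runs over Socket.IO, and websocket connections need to survive the proxy hop.

```nginx
# Minimal reverse-proxy sketch: NGINX in front of the Node app on port 3000
# (port and names are illustrative, not Seeder's actual config).
upstream seeder {
    server 127.0.0.1:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://seeder;
        proxy_http_version 1.1;
        # needed so Socket.IO's websocket upgrade works through the proxy
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Adding more Node instances later would then just mean more `server` lines in the upstream block.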
JavaScript has been fun throughout this project, quirky as it sometimes is. However, I’m considering the move to a framework like Angular, to give the app a structure it can grow with, but that’s a whole other blog post.
The Present and Future
The site has been live and kicking, albeit quietly, since late May. It’s still a work in progress, but a working one. I am coding a couple of days a week alongside work, and a full-featured 2.0 public beta is on the cards for later this year.
My skills in many areas of design and coding have improved over the 8 months I’ve been working on Seeder, so it’s only appropriate to think about updating the facade. I’ll leave you with some early concepts.