GRAKN.AI is a knowledge graph data platform that uses machine reasoning to simplify complex data processing for AI applications. Learn more at grakn.ai
Our company started in July 2015. So 2016 was the first full year that we experienced as a team, and we could not be more thankful for the journey we went through this past year. 2016 was truly an amazing year for all of us at GRAKN.AI, and I hope it has been the same for all of you as well.
Overall, 2016 has been an incredible growth year for us at GRAKN.AI. We released a prototype application, Moogi: a film discovery engine powered by Grakn. We launched our online platforms: from a chat platform to a discussion forum, a blog and a documentation platform. We grew to a solid team of 17, after reviewing 1,400 candidates through a 6-stage recruitment process. We scaled our project management framework to manage 18 projects, completing 500 requirements with 9,000 work items and 10,000 pre-estimated hours. We found our new name, one that uniquely describes us. We released Grakn, from 0.1 (Beta) in September all the way to 0.9 (almost production) right before Christmas. We gave plenty of demos to numerous enterprises, and the feedback has been very positive. Last but not least, our open-source community started growing at a rate that exceeded our expectations.
In this final blog post of the year from us, I will share the work that we have done at GRAKN.AI with all of you, and I would also like to thank everyone involved with the development of our company in whatever way or form. :)
Enjoy the read as we take you through our 2016 journey …
We released a prototype application: Moogi.co
We developed a semantic search engine for film discovery powered by Grakn!
As the year started, we entered the last month of our prototype development deadline. The purpose of our prototype delivery was to demonstrate one of the applications of our knowledge graph technology: a semantic search engine for web-scale applications. So we built Moogi: a film discovery engine, powered by Grakn under the hood (at the time, Grakn still went by its old name, ‘Mindmaps’). We integrated publicly available movie information from many sources, used some machine learning to generate additional content tags, and combined in data from the GuideBox API as well. We also built a distributed cluster for the knowledge graph and a web application for the search engine. We ended up with a discovery engine where you can find pretty much any film released before 2016, with search capabilities unlike those of any other movie discovery platform. Most importantly, we spent only six months developing it, all while doing recruitment and building the knowledge graph prototype, a natural language query parser and an ETL (Extraction, Transformation and Loading) data pipeline!
To this day, Moogi is still running live on the internet, even though we have not spent much time maintaining it. You are most welcome to try it out: https://moogi.co/, and here’s an example query to start with: “action movies filmed in South Africa in the 2000s”.
Thank you, GuideBox, for enabling us to integrate movie streaming links from all kinds of platforms into our Moogi Knowledge Graph!
While the development of our knowledge graph platform was under way, we started preparing the online platforms to build our open-source community. We wanted to make sure that we had all the communication channels needed to support our developers, as well as for the community to help each other. It boils down to three types of platform: a chat platform for lightweight, ongoing conversation; a discussion forum for in-depth discussions on specific topics; and a blog for us to share our stories. So we went on to use Slack as our Grakn Community chat platform, built our own discussion forum using Discourse, and adopted Medium as our website’s blog and publishing platform! I am very pleased to see that all three of our platforms are very active and that I consistently find content there that I can learn from.
We also gave our website a makeover to describe our technology better, and added the one most important thing that an open-source technology needs: a documentation platform! Although it is still in its infancy at the moment, there is much work under way on our documentation platform, and there will be plenty of improvements coming in the next few months!
Thank you to the team behind Jekyll for building a tool that allowed us to easily integrate our documentation on GitHub to our website!
We grew to a solid team of 17
We reviewed 1,400 candidates through a 6-stage recruitment process. We now have a solid team of 17 in which we know we can depend on each other.
When we started this company a year and a half ago, there was one thing we decided we would never compromise on, and that is the talent of the people we bring on board to join the team. So from day one, we set up our own six-stage recruitment process. As you would expect, this results in a very low number of engineers passing through to the last round; the pass rate was about 1 out of 100 applicants. Naturally, recruiting became a challenge. So we “hacked” our software development framework to double as a distributed recruitment management platform, so that we could increase the number of applicants we review and distribute the massive number of recruitment tasks across everyone in the team. By the time we entered the second half of this year, we had already reviewed 1,000 engineers from across the world, in addition to the 400 we reviewed the year before.
In our interview process, we look not only for engineering skills but also for other core competencies such as communication, analytical, design, brainstorming, and interpersonal skills. It is vital to us that our team members are highly motivated and self-driven. After going through our process of building a team of 17, I am proud to say that we have successfully built a team that meets the criteria we have just described. Everyone also has a Master’s or PhD in advanced computing or a quantitative discipline from top universities including Cambridge, Oxford, Imperial, UCL, Edinburgh, and Columbia. However, what makes 2016 special with regard to our team is the fact that we have built a solid relationship with each other: we can always rely on one another, and we can always crack a joke and have fun.
Personally, I know that I always learn from everyone in the team. Thank you, guys!
PS: If you want to find out more about our team members, visit https://grakn.ai/about
We scaled our project management framework
This year we managed 18 projects and completed over 500 requirements, which translated into 9,000 work items with a total of 10,000 pre-estimated hours!
Our development is quite Agile, as things change rapidly, and we embrace it. We practice Scrum, developing in weekly sprint iterations. Releases used to happen every two months, but now we release every two or three weeks. However, we still hold some core values of Waterfall, as we love the requirements and design phases, except that our requirements log is always open, always updated, always reviewed and always feeding into the development iteration. Imagine a continuous design phase of the Waterfall SDLC feeding into an Agile team that develops in weekly sprints. We also extended the framework to manage non-engineering work.
Towards the end of this year, we were managing 18 projects and had completed over 500 requirements, which translated into 9,000 work items with a total of 10,000 pre-estimated hours! Needless to say, we are very proud of how productive our team has been. It was our team’s ability to plan that enabled us to deliver our technology in just about a year!
We found our new name
Grakn stands for a graph of knowledge, Graql is the graph query language for Grakn, and we’re an AI technology. Thus, we’re GRAKN.AI.
As most of you probably know, we used to be called Mindmaps. What we did not consider when we named our company “Mindmaps” was how hard it would be to search for us on Google, as there are so many technologies on the web with the name “Mindmaps”. Some people were also confusing our technology with mindmapping tools. Since we also found out that we could not trademark the name, we knew that we had to find a new name that uniquely describes us.
Towards the second half of this year, we found that new name. “Grakn” stands for a graph of knowledge (what we turn your data into), and “Graql” is the graph query language (used to query data from, and insert data into, Grakn). Given that our work is grounded in the field of Knowledge Representation and Reasoning, a subfield of Artificial Intelligence, and that the technology’s applications will also be in applied AI, we decided to name our technology GRAKN.AI. The name also fits well with the new “.ai” domain! If you are wondering what “Grakn Labs” is, it is the incorporated name of our company and how we refer to our office.
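To give a flavour of what querying a graph of knowledge with Graql looks like, here is a small illustrative sketch. The type names, attribute names and exact keywords below are hypothetical assumptions for illustration only; they are not taken from the Grakn 0.9 documentation, which remains the authoritative reference for the real syntax.

```graql
# Illustrative sketch only: find movies released after 2000
# in which a given person acted. The schema (movie, person,
# cast, release-year) is a hypothetical example, not a real
# Grakn schema.
match
$m isa movie, has title $t, has release-year $y;
$p isa person, has name "Al Pacino";
(actor: $p, production: $m) isa cast;
$y > 2000;
```

The appeal of a query language like this is that it reads as a description of the pattern you want, entities, their attributes and the relationships between them, rather than as a series of joins.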
We released Grakn!
Nine major components of the system, built in parallel, were integrated into one Beta release (0.1) and refined over nine releases to 0.9.
Once we got the prototype and Moogi completed around April, it was time to build the complete knowledge graph framework, and there were so many components needed to complete the framework!
There were nine principal components of the system: the Graph API, a query language (Graql), the graph analytics engine, the reasoning engine, a migration system, the database factory, the graph engine/server, the visualisation dashboard and the benchmark dataset generator. Every engineer was in charge of at least one major component, and development of all parts of the system had to happen in parallel.
These were some of the heaviest days, but also some of the best that we will always remember. For most of us (including me), this was our first open-source product release in the history of our careers. So it was very exhilarating!
We finally got all components of the stack integrated into one Grakn stack for the first time in September, and we immediately published a 0.1 Beta release! The purpose of the release was to gain some light adoption, conduct user testing, prototype applications (we blogged about them) and gather user feedback.
As it happens, our open-source adoption for this beta release was a little more than we expected (which I will talk about below). The feedback we received allowed us to iterate on the stack rapidly. Every two weeks we completed significant improvements and released a new version of the stack.
Grakn is now at version 0.9, which we released the week right before Christmas. It is almost feature-complete for the production release (1.0), coming very soon, and it is already much simpler, smarter, faster and prettier than the 0.1 release we made only three months before!
After about a year of challenging and ambitious engineering, I cannot be prouder of our team for the Grakn release we have accomplished this year!
We talked with many enterprises
After many demos and much feedback, we learnt that companies building AI systems have the most urgent need for our technology to simplify their data processing.
Since we released Grakn Beta in September, we have been attending some excellent conferences around the world, such as O’Reilly AI NYC, ISWC Japan, and AI World SF, as well as plenty of Meetups at home here in London.
From these events, we met lots of organisations of many sizes, from startups to enterprises. We have given live demos to many of them, and the feedback has been very positive. One crucial thing we learned from this feedback is that the companies with the most urgent need for our technology are the ones working to build AI applications. Such use cases involve incredibly complex data processing: exactly what Grakn can help simplify. Given this positive early feedback, and the discussions with enterprises we have been having, we are very confident that 2017 will bring big things in terms of enterprise adoption!
Our open-source community started growing
Since our Beta release 3 months ago, Grakn has been downloaded 813 times! We have 50+ developer community members, 265 Twitter followers, 82 GitHub stars and 25 forks!
It is important to remember that Grakn is not a consumer app, like an iPhone game or a dating app. Grakn is enterprise-grade, deep infrastructure; its adoption rate is therefore not straightforwardly comparable to the explosive growth you would expect to see in a successful consumer app.
Nevertheless, the response from the open-source community since our Beta release has exceeded all expectations. Since we have been prioritizing rapid, responsive, and robust development in this early stage of community building, we have not yet worked to optimize conversion rates, engaged in user testing of our website, or tried out some SEO magic; nor have we yet rolled out a complete marketing campaign. However, we see an average of 10 downloads per day. Since our Beta release 3 months ago, Grakn has been downloaded 813 times! We have 50+ developer community members, 265 Twitter followers, 82 GitHub stars and 25 forks! We know these are still early days, but for a deep infrastructure technology, these numbers far surpassed our expectations for the three months following our Beta release!
Thank you. This journey would not have been possible without all of you.
So those are the highlights of the past year here at GRAKN.AI. Reflecting on them, we cannot help but realise how much joy and how many lessons this year has given us. This year has allowed us to grow at a rate we have never experienced before, and all of it was possible because of every single person involved in our journey. The hard work and sacrifice of our team, the encouragement of our family and friends, the advice of our mentors, the feedback of our users, the support of our partners, the relationships with our readers, the friendship of everyone we met along the way: they all mattered a great deal to the growth of ourselves and our company this past year, and we know that they will continue to matter immensely as we move into 2017.
Thank you, everyone. Thank you, my Grakn family. This journey would not have been possible without all of you.
To many more great years ahead of us!
PS: If you would like to join our journey next year, please visit https://grakn.ai/community and feel free to get in touch through your preferred platform.