The Knowledge: November 2016

This month’s reading list — by Grakn Labs

Jo Stichbury
Nov 30, 2016 · 4 min read

Here is the third in our monthly series of curated links to recent articles that the Grakn Labs team have found interesting. We aim to highlight useful discussions and news for our community of readers. Please let us know in the comments what you think, or what you'd like to see more of next month!

We are calling the posts in this series “The Knowledge” as we are building an open source, distributed knowledge graph here at the London-based Grakn Labs. We figured that the name of the blog series is especially appropriate because “The Knowledge” is also the name for the in-depth study of streets and places that taxicab drivers here in London must complete to obtain a licence to drive an official black cab.

Early this month, we were excited to read an article by Barry Zane of Cambridge Semantics entitled “Semantic Graph Databases: A worthy successor to relational databases” in “Big Data Quarterly”. The article states that:

Semantic graph databases have finally achieved performance parity with other databases, but now offer unprecedented flexibility and the ability to reasonably accommodate much richer varieties of data at volume.

It goes on to describe how semantic graph databases can do everything that traditional relational database management systems can, and much more besides: richer relationship modelling, lower cost of ownership, and faster data preparation and query speeds.

The article concludes:

Semantic graph databases are the successors of relational databases. They represent the organic evolution of the relational paradigm and its intersection with IT developments in memory, storage and computational processing. The progression from relational to semantic graph databases enhances technology, database fundamentals, and the skills required to use them in a unique way that has made smart databases undoubtedly better, faster and cheaper than their forbears.

Here at Grakn Labs, we couldn’t have put it better!

Over on the Neo4j blog this month, we enjoyed the “5-Minute Interview” with David Meza, Chief Knowledge Architect at NASA. David speaks about using Neo4j to explore a database of “lessons learned” from the last 50 years of space exploration, spanning the Apollo, Shuttle and Orion eras. The use of a graph database has solved the ongoing challenge of joining the dots across different data silos.

This was the month of the US elections, about which much has been written, in particular about fake news stories and biased reporting. We were interested to read about a tool that uses AI to help journalists discover new angles on stories as they write.

Recently funded by Google’s Digital News Initiative, JUICE (the JoUrnalIst Creative Engine) is the result of a collaboration between Cass Business School and the Department of Journalism at City, University of London. The core product works inside Google documents, using language processing, web searches and recommendation algorithms to suggest additional sources that complement a journalist’s work. JUICE has access to data from over 450 news sites. When a journalist types something into a Google document, such as the name of a public figure, the AI automatically retrieves relevant articles and other multimedia that could be used in the story. This tool illustrates the potential of AI systems, such as recommendation engines, to augment human understanding and counteract our biases in a positive way.

Finally, we were excited to find out more about Microsoft’s Concept Graph, which opens up a knowledge base of words linked to millions of concepts to help machines grasp the meaning of full sentences and better understand human communication. As Microsoft puts it, Concept Graph aims to give machines “common-sense computing capabilities” and an awareness of the mental models we humans take for granted. Concept Graph was released on 1st November and is available to download for research purposes. The current release includes the core version of concept data in English, mined from billions of web pages and search queries. Future releases will include conceptualization with context for understanding short and long texts, as well as support for Chinese. Further information is available on the Microsoft blog.

What we’ve been reading this month

Aside from the pieces discussed above, we also enjoyed these articles:

On TechCrunch: Open Source and Coopetition are the New Normal

By Sebastien Dery: Graph-Based Machine Learning: Part 1

Coming soon…just down the road from Grakn Labs: Google Commits to Massive New London HQ

If you liked this post, please take the time to recommend it or leave us a comment. To find out more about us, please do consider joining the growing Grakn Community.

You can read last month’s review here. Join us next month to find out what’s new in The Knowledge!
