Introduction to Nakamoto Terminal

A flexible data aggregation/analytics system

Nicholas Gans
Jul 22, 2019 · 4 min read
Example NTerminal Dashboard in Splunk

Nakamoto Terminal (NTerminal) is a data-neutral aggregation/analytics system. It is currently used primarily for cryptofinance, with plans to branch out into traditional finance and beyond. The system pulls heterogeneous data types (supplemented with additional intelligence) into one place so that they can be compared, contrasted, and combined by the user.

NTerminal has a flexible Spring-based microservice framework. NTerminal’s data pipeline (referred to as the content delivery chain, or CDC) consumes various data streams through an array of source modules. The data is then routed to low-latency endpoints or to various processors which filter, enrich, or modify the information. Sink modules facilitate the transfer of data to our Splunk Platform, where clients and developers can query, visualize, and further manipulate it, or to clients directly.
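The source → processor → sink flow can be pictured as a chain of stages, each consuming the previous stage's output. The following is a conceptual Python sketch of that pattern only; it is not NTerminal's actual Spring implementation, and all module and field names are hypothetical.

```python
# Conceptual sketch of a source -> processor -> sink pipeline.
# All names and record shapes here are illustrative, not NTerminal APIs.

def source():
    """A source module yields raw records from some upstream feed."""
    yield {"type": "trade", "symbol": "BTC-USD", "price": 9500.0}
    yield {"type": "heartbeat"}
    yield {"type": "trade", "symbol": "ETH-USD", "price": 210.0}

def filter_processor(records):
    """A processor that drops records irrelevant to this chain."""
    return (r for r in records if r["type"] == "trade")

def enrich_processor(records):
    """A processor that supplements each record with extra intelligence."""
    for r in records:
        r["venue_count"] = 1  # placeholder enrichment
        yield r

def sink(records):
    """A sink module hands records to a downstream store (here: a list)."""
    return list(records)

# Stages compose by wrapping one another, like modules chained in a CDC.
delivered = sink(enrich_processor(filter_processor(source())))
```

Because each stage only consumes an iterator and produces one, stages can be added, removed, or reordered independently, which mirrors how modular pipelines allow fast iteration.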

NTerminal’s Content Delivery Chain

Data Sources

The source modules differ depending on the mechanism of consumption, the type & format of data, and individual automation requirements. The sources can be generalized into three main categories: financial, technical, and natural language data types.

Financial Data

Nakamoto Terminal currently monitors and provides market data for over 5,000 digital assets, including Bitcoin and its derivatives, Ethereum, Ethereum Classic, and the thousands of tokens based on their networks. Market data includes feeds from over 200 exchanges, OTC providers, index prices, and P2P markets. The level of granularity in the data for each digital asset depends on a number of factors, such as the number of trading venues that list the digital asset.
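One common reason to combine feeds from many venues is to derive a single reference price. As a hedged illustration (the venue names, prices, and weighting scheme below are invented for the example; this is not NTerminal's actual index methodology), a volume-weighted index price across venues might look like:

```python
# Hypothetical last-trade snapshots from several venues.
quotes = [
    {"venue": "exchange_a", "price": 9510.0, "volume": 12.0},
    {"venue": "exchange_b", "price": 9495.0, "volume": 8.0},
    {"venue": "otc_desk_c", "price": 9500.0, "volume": 20.0},
]

# Volume-weighted average price: each venue contributes in proportion
# to the volume it traded.
total_volume = sum(q["volume"] for q in quotes)
index_price = sum(q["price"] * q["volume"] for q in quotes) / total_volume
```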

Technical & Blockchain Data

NTerminal continually collects and provides blockchain data and associated metadata to our clients. NTerminal runs blockchain nodes within its own infrastructure, and the system also pulls relevant information from multiple third-party blockchain explorers. Different metadata and blockchain content is available depending on the nature of the blockchain.

Natural Language Data

The natural language data NTerminal aggregates and analyzes includes data from traditional media sources (e.g. New York Times articles), social media (e.g. Twitter and Reddit), messenger channels, tech blogs, GitHub activity, and the meeting minutes and decisions of financial regulators around the world (for example, we have every decision from the SEC since 1992).

Please refer to our documentation for data models, available methods of integration, and supported digital assets/markets.

Building on NTerminal

Yupana, Yachay, Ch’aska, and Qhatu are stand-alone projects which are integrated to supplement NTerminal. Each of these projects contains various components which interconnect both with each other and with existing NTerminal modules. Each project can leverage the existing processors, sources, and sinks to allow for fast testing, iteration, and implementation. Also, because of this architecture, projects are not at risk of compromising any existing NTerminal functionality and can easily be individually modified and maintained.

Yachay

NTerminal’s Yachay project provides Natural Language Processing (NLP) modules. This project allows for keyword analysis, context lookup, automatic translation, optical character recognition, entity tagging, and event drill down functionality.
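To make the keyword-analysis and entity-tagging ideas concrete, here is a deliberately simplified Python sketch. The entity dictionary, keyword list, and matching logic are stand-ins invented for this example; Yachay's actual NLP models are far richer.

```python
# Toy keyword matching and entity tagging on a headline.
# ENTITIES and KEYWORDS are illustrative stand-ins, not Yachay's data.
ENTITIES = {"SEC": "regulator", "Bitcoin": "digital_asset", "Ethereum": "digital_asset"}
KEYWORDS = {"hack", "fork", "halt"}

def tag(text):
    """Return recognized entities and alert keywords found in the text."""
    tokens = text.replace(",", "").split()
    entities = [(t, ENTITIES[t]) for t in tokens if t in ENTITIES]
    keywords = [t.lower() for t in tokens if t.lower() in KEYWORDS]
    return {"entities": entities, "keywords": keywords}

result = tag("SEC reviews Bitcoin ETF after exchange hack")
```

In a real pipeline, output like this would feed downstream processors (for example, event drill-down or alerting) rather than being consumed directly.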

Yupana

The Yupana project is an adapted agent-based modeling effort for understanding complex systems. By consuming data produced by and about distinct communities within a system, Yupana creates a real-time model of the system to better understand those communities’ roles and interactions.
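The core idea of agent-based modeling is that system-level behavior emerges from many simple agents interacting. The following toy Python model is offered only to illustrate that general technique; it has no connection to Yupana's actual models, and every detail (communities, opinions, update rule) is invented for the example.

```python
import random

# Toy agent-based model: each agent belongs to a community and holds an
# opinion; on each step, agents move halfway toward their community's
# mean opinion, so each community converges toward internal consensus.
random.seed(0)
agents = [{"community": i % 2, "opinion": random.random()} for i in range(10)]

def step(agents):
    for community in {a["community"] for a in agents}:
        members = [a for a in agents if a["community"] == community]
        mean = sum(a["opinion"] for a in members) / len(members)
        for a in members:
            a["opinion"] += 0.5 * (mean - a["opinion"])

for _ in range(20):
    step(agents)
```

Observing how quickly each simulated community converges, and to what value, is the kind of system-level question agent-based models are built to answer.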

You can learn more about the Yupana Project by reading the initial white paper and subsequent blogs.

Qhatu

Qhatu is an open-source client-server app for building cryptocurrency trading strategies and executing them with external services (NTerminal, CryptoWatch, etc.). The product allows users to create orders through forms in the web interface and process them against incoming data from NTerminal.
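The pattern of a form-defined strategy evaluated against incoming market data can be sketched as follows. This is a hypothetical illustration of the general idea; the field names and trigger logic are invented here and do not describe Qhatu's actual interface.

```python
# A strategy as it might be captured from a web form (illustrative fields).
strategy = {"symbol": "BTC-USD", "side": "buy", "trigger_below": 9400.0, "size": 0.5}

def on_price(strategy, tick):
    """Emit an order dict when an incoming price tick fires the trigger."""
    if tick["symbol"] == strategy["symbol"] and tick["price"] < strategy["trigger_below"]:
        return {"symbol": strategy["symbol"], "side": strategy["side"],
                "size": strategy["size"], "limit": tick["price"]}
    return None  # condition not met; no order

no_order = on_price(strategy, {"symbol": "BTC-USD", "price": 9450.0})
order = on_price(strategy, {"symbol": "BTC-USD", "price": 9350.0})
```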

Ch’aska

Ch’aska is a machine learning project within Inca Digital Securities, promoting the use of novel data processing techniques for the development of intelligent interfaces. The effort is primarily focused on creating a library of heuristic tools which can be called upon by other modules. Ch’aska works alongside Yupana to facilitate various data transformations and alongside NTerminal to produce predictive indicators; however, the tools outlined in this module can operate within multiple frameworks.

Inca Digital

Making intelligence accessible
