Introduction to Nakamoto Terminal

A flexible data aggregation/analytics system

Nicholas Gans
Jul 22 · 4 min read
Example NTerminal Dashboard in Splunk

Nakamoto Terminal (NTerminal) is a data-neutral aggregation/analytics system. It is currently used primarily for cryptofinance, but will later branch out into traditional finance and beyond. The system pulls heterogeneous data types (supplemented with additional intelligence) into one place so that they can be compared, contrasted, and combined by the user.

NTerminal has a flexible Spring-based microservice framework. NTerminal’s data pipeline (referred to as the content delivery chain, or CDC) consumes various data streams through an array of source modules. The data is then routed to low-latency endpoints or to various processors which filter, enrich, or modify the information. Sink modules facilitate the transfer of data either to our Splunk platform, where clients and developers can query, visualize, and further manipulate it, or to clients directly.
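The source → processor → sink flow described above can be sketched in a few lines. This is a minimal illustrative model only; all names are hypothetical, and the real CDC is a Spring-based microservice framework rather than in-process Python generators.

```python
import json

def source():
    """A source module yields raw events from some upstream feed."""
    raw_feed = ['{"asset": "BTC", "price": 60000}',
                '{"asset": "ETH", "price": 3000}']
    for raw in raw_feed:
        yield raw

def enrich(events):
    """A processor module parses, filters, or enriches each event."""
    for raw in events:
        event = json.loads(raw)
        event["source"] = "example-exchange"  # supplemental intelligence
        yield event

def sink(events):
    """A sink module hands enriched events to a downstream consumer
    (e.g. a Splunk indexer or a client endpoint)."""
    return list(events)

records = sink(enrich(source()))
```

Because each stage only consumes and produces events, stages can be swapped or chained independently, which mirrors why modular pipelines like the CDC are easy to extend.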

NTerminal’s Content Delivery Chain

Data Sources

The source modules differ depending on the mechanism of consumption, the type & format of data, and individual automation requirements. The sources can be generalized into three main categories: financial, technical, and natural language data types.

Financial Data

Nakamoto Terminal currently monitors and provides market data for over 5,000 digital assets, including Bitcoin and its derivatives, Ethereum, Ethereum Classic, and the thousands of tokens based on their networks. Market data includes feeds from over 200 exchanges, OTC providers, index prices, and P2P markets. The level of granularity in the data for each digital asset depends on a number of factors, such as the number of trading venues that list the digital asset.
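To make the idea of per-venue granularity concrete, here is a hypothetical sketch of a market-data point aggregated across venues. The field names and the volume-weighted average price are illustrative only and do not reflect NTerminal’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class MarketTick:
    asset: str       # e.g. "BTC"
    venue: str       # exchange, OTC desk, index, or P2P market
    price: float
    volume: float
    timestamp: int   # Unix epoch seconds

ticks = [
    MarketTick("BTC", "exchange-a", 60010.0, 0.5, 1_600_000_000),
    MarketTick("BTC", "exchange-b", 60025.0, 1.2, 1_600_000_001),
]

# One simple way to combine many listings into a single figure:
# a volume-weighted average price (VWAP) across venues.
vwap = sum(t.price * t.volume for t in ticks) / sum(t.volume for t in ticks)
```

An asset listed on many venues yields many such ticks per interval, which is why granularity grows with the number of listings.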

Technical & Blockchain Data

NTerminal continually collects and provides blockchain data and associated metadata to our clients. NTerminal runs blockchain nodes within NTerminal infrastructure. The system also pulls relevant information from multiple 3rd party blockchain explorers. Different metadata and blockchain content is available depending on the nature of the blockchain.
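Running one’s own nodes typically means querying them over JSON-RPC. The sketch below shows the request shape a collector might send to a node it operates; `getblockcount` is a real Bitcoin Core RPC method, but the helper function and the idea of using it here are illustrative assumptions, and no network call is made.

```python
import json

def make_rpc_request(method, params=None, request_id=1):
    """Build the JSON-RPC 1.0 payload a Bitcoin Core node expects."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": request_id,
        "method": method,
        "params": params or [],
    })

# e.g. POST this payload to the node's RPC endpoint with credentials
payload = make_rpc_request("getblockcount")
```

Other chains expose different RPC surfaces, which is one reason the available metadata and content vary by blockchain.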

Natural Language Data

The natural language data NTerminal aggregates and analyzes includes data from traditional media sources (e.g. New York Times articles), social media (e.g. Twitter & Reddit), messenger channels, tech blogs, GitHub activity, and the meeting minutes and decisions of financial regulators around the world (for example, we have every decision from the SEC since 1992).

Please refer to our documentation for data models, available methods of integration, and supported digital assets/markets.

Building on NTerminal

Yupana, Yachay, Ch’aska, and Qhatu are stand-alone projects which are integrated to supplement NTerminal. Each of these projects contains various components which interconnect with each other and with existing NTerminal modules. Each project can leverage the existing processors, sources, and sinks to allow for fast testing, iteration, and implementation. Also, because of this architecture, projects are not at risk of compromising any existing NTerminal functionality and can easily be individually modified and maintained.


NTerminal’s Yachay project provides Natural Language Processing (NLP) modules. This project allows for keyword analysis, context lookup, automatic translation, optical character recognition, entity tagging, and event drill down functionality.
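Of the tasks listed, keyword analysis is the simplest to illustrate. The toy frequency counter below shows the general technique only; it assumes nothing about Yachay’s internals, and the watchlist terms are made up for the example.

```python
from collections import Counter
import re

# Hypothetical watchlist of terms to flag in incoming text
WATCHLIST = {"bitcoin", "ethereum", "sec", "exchange"}

def keyword_hits(text):
    """Count watchlist terms appearing in a piece of text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {word: counts[word] for word in WATCHLIST if counts[word]}

hits = keyword_hits("The SEC commented on a Bitcoin exchange; "
                    "Bitcoin rallied afterward.")
```

Real NLP pipelines layer tokenization, entity tagging, and context lookup on top of this kind of matching, but the input/output shape is similar: text in, structured signals out.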


The Yupana project is an adapted agent-based modeling effort for understanding complex systems. By consuming data produced by and about distinct communities within a system, Yupana creates a real-time model of the system to better understand those communities’ roles and interactions.
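To show what agent-based modeling means in general terms, here is a toy model in which agents repeatedly adjust a “sentiment” state toward their neighbors’, and a system-level pattern (convergence) emerges from purely local rules. This is an illustration of the ABM technique only and does not reflect Yupana’s actual model.

```python
import random

random.seed(42)  # deterministic toy run

class Agent:
    def __init__(self):
        self.sentiment = random.uniform(-1, 1)

    def update(self, neighbors):
        # Move partway toward the neighborhood average (a local rule).
        avg = sum(n.sentiment for n in neighbors) / len(neighbors)
        self.sentiment += 0.5 * (avg - self.sentiment)

agents = [Agent() for _ in range(10)]
for _ in range(20):
    for a in agents:
        a.update(agents)  # fully connected community, for simplicity

# System-level observation: repeated interaction shrinks disagreement.
spread = max(a.sentiment for a in agents) - min(a.sentiment for a in agents)
```

In a real model the agents, interaction graph, and update rules would be fitted to data about the communities being studied rather than hard-coded.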

You can learn more about the Yupana Project by reading the initial white paper and subsequent blogs.


Qhatu is an open-source client-server app that allows users to create cryptocurrency trading strategies and execute them with external services (NTerminal, CryptoWatch, etc.). The product allows users to create orders through forms in the web interface and to process them with incoming data from NTerminal.


Ch’aska is a machine learning project within Inca Digital Securities, promoting the use of novel data processing techniques for the development of intelligent interfaces. The effort is primarily focused on creating a library of heuristic tools which can be called upon by other modules. Ch’aska works alongside Yupana to facilitate various data transformations and alongside NTerminal to produce predictive indicators; however, the tools outlined in this module can operate within multiple frameworks.

