How to Build a DeFiChain Crypto Dashboard with Streamlit and PostgreSQL — Part 2

Retrieving data from the REST API and saving it to PostgreSQL

Elmer van den Dries
6 min read · Dec 9, 2022
[Image: the mid-part of the Streamlit Dashboard you will be building, showing the evolution of token holdings against current prices.]

Introduction

In Part 1 of this tutorial series, you set up your database server, took your first steps in making API calls to the Ocean REST API, and created the database tables you will need for this next section. Let's get started!

If you want to code along as you go through the article, you can download all code on my GitHub page here.

Step 1 — Establishing a connection to the database server

Create a new file called get_data.py. Start with making a connection to the database server, accessing the environment variables you saved in the .env file in Part 1 of this series.
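Here is a minimal sketch of that connection code, assuming the .env file from Part 1 defines DB_HOST, DB_NAME, DB_USER, and DB_PASSWORD (your variable names may differ; check the repo for the exact code). The psycopg2 connection is used for the INSERT statements in Step 4, and the SQLAlchemy engine is what pandas' to_sql() will use in Step 5.

import os

import psycopg2
from dotenv import load_dotenv
from sqlalchemy import create_engine

# read the credentials from the .env file created in Part 1
load_dotenv()

DB_HOST = os.getenv("DB_HOST")
DB_NAME = os.getenv("DB_NAME")
DB_USER = os.getenv("DB_USER")
DB_PASSWORD = os.getenv("DB_PASSWORD")

# psycopg2 connection for the INSERT statements in Step 4
connection = psycopg2.connect(
    host=DB_HOST, dbname=DB_NAME, user=DB_USER, password=DB_PASSWORD
)

# SQLAlchemy engine for pandas' to_sql() in Step 5
engine = create_engine(
    f"postgresql+psycopg2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}/{DB_NAME}"
)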

Step 2 — Requesting token data from Ocean REST API

Now that the connection is established, you can start writing your first API call to the Ocean REST API to get the tokens on the blockchain. Tokens on the DeFiChain blockchain are held in wallets identified by unique addresses. For this article's purpose, we will take a random address from the blockchain: df1q9qtltnhkn3f5wnjmjddq02tw32lfk0tuu9zl8h

Then, on the official Blockchain Explorer of DeFiChain, you can check the current holdings of that wallet address: defiscan.live/address/df1q9qtltnhkn3f5wnjmjddq02tw32lfk0tuu9zl8h

In the code below, you will keep requesting the same URL until the request returns a status code of 200, meaning a successful request. At that point, you pass the response object to the (yet-to-be-defined) parse_token_data() function.
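A minimal sketch of that request loop; the address tokens endpoint used here (/v0/mainnet/address/{address}/tokens) is an assumption, so check the repo for the exact URL.

import json
import time

import requests

ADDRESS = "df1q9qtltnhkn3f5wnjmjddq02tw32lfk0tuu9zl8h"
tokens_url = f"https://ocean.defichain.com/v0/mainnet/address/{ADDRESS}/tokens"

# keep requesting until the API answers with a 200 status code
while True:
    response = requests.get(tokens_url, timeout=10)
    if response.status_code == 200:
        parse_token_data(response)
        break
    time.sleep(1)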

Step 3 — Parsing Ocean API request data

In Step 2, you passed the response object to the parse_token_data() function. Now, you will define that function! The response contains a JSON payload from the API call. Decode that payload using json.loads(response.text). This returns a dictionary whose data key holds the list of tokens held by the address on the blockchain.

To keep your code clean, you will save each token in two separate class instances: one of the TokenDetail class and one of the TokenAmount class.

The two classes are defined below. You need to define three special methods inside each class definition: the initialization method __init__(), which is invoked automatically when the class is instantiated; the __iter__() method to convert the object attributes into a tuple; and finally the __str__() method to display the class instance for debugging.
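A sketch of the two classes; the attribute names are illustrative and should mirror the columns of the tables you created in Part 1.

class TokenDetail:
    """Static details of a token (one row in the token table)."""

    def __init__(self, token_id, symbol, name, is_lps):
        self.token_id = token_id
        self.symbol = symbol
        self.name = name
        self.is_lps = is_lps

    def __iter__(self):
        # lets tuple(instance) produce the values in column order
        return iter((self.token_id, self.symbol, self.name, self.is_lps))

    def __str__(self):
        # human-readable representation for debugging
        return f"TokenDetail({self.token_id}, {self.symbol}, {self.name}, {self.is_lps})"


class TokenAmount:
    """Amount of a token held by the address."""

    def __init__(self, token_id, amount):
        self.token_id = token_id
        self.amount = amount

    def __iter__(self):
        return iter((self.token_id, self.amount))

    def __str__(self):
        return f"TokenAmount({self.token_id}, {self.amount})"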

With your classes in place, you can start to loop over the tokens in the list. You will notice in the code below that if a token's isLPS attribute is true, meaning it is a Liquidity Pool token, you will need to get more data from the Ocean API.

To get that additional data, you will make another request to the Ocean API using the following URL: https://ocean.defichain.com/v0/mainnet/poolpairs/{token['id']}

With that additional data in place, you can create instances of both classes and update all their attributes, as sketched below.
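A sketch of the parsing loop; the exact fields copied into the instances depend on your table layout, and the poolpair attribute update shown here is illustrative.

def parse_token_data(response):
    """Decode the JSON payload and save every token to the database."""
    tokens = json.loads(response.text)["data"]
    for token in tokens:
        token_detail = TokenDetail(
            token["id"], token["symbol"], token["name"], token["isLPS"]
        )
        token_amount = TokenAmount(token["id"], token["amount"])
        if token["isLPS"]:
            # liquidity pool tokens need extra data from the poolpairs endpoint
            poolpair_url = (
                f"https://ocean.defichain.com/v0/mainnet/poolpairs/{token['id']}"
            )
            while True:
                pp_response = requests.get(poolpair_url, timeout=10)
                if pp_response.status_code == 200:
                    poolpair = json.loads(pp_response.text)["data"]
                    break
                time.sleep(1)
            # update the instance attributes with the poolpair data (illustrative)
            token_detail.name = poolpair["name"]
        add_token_entry(token_detail, token_amount)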

The final step to define is saving the tokens to the database. You will do that in Step 4!

Step 4 — Saving class instance objects to the database

Start by creating the method definition with the two class instances as arguments and one additional argument, holding_save, which indicates whether you also want to save the token amounts to the database:

def add_token_entry(token, token_amount=None, holding_save=True)

In this method, you will try to save the data to the database; if unsuccessful, you will catch the exception and re-raise the error.

To save the token to postgres, you need to define an SQL statement (see the code below). Passing parameters to an SQL statement happens by using %s placeholders in the SQL statement and passing a sequence of values as the function's second argument. The first argument is the SQL statement itself.

The sequence of values passed as the second argument to the cursor.execute() function needs to be a tuple or a dictionary. Since you defined the special __iter__() method in your classes, you can quickly turn an instance object into a tuple using tuple(instance). The cursor.execute() function of the psycopg2 package takes care of converting the SQL statement into a string representation suitable for PostgreSQL.

A sketch of the complete method is below; the table and column names (tokens, token_amounts) are assumptions based on the tables from Part 1, so adjust them to match your schema.
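def add_token_entry(token, token_amount=None, holding_save=True):
    """Save a token, and optionally its holding, to postgres."""
    # table/column names below are assumptions; match them to your schema
    cursor = connection.cursor()
    try:
        # %s placeholders are filled by psycopg2 from the tuple
        sql = (
            "INSERT INTO tokens (id, symbol, name, is_lps) "
            "VALUES (%s, %s, %s, %s) ON CONFLICT DO NOTHING;"
        )
        cursor.execute(sql, tuple(token))
        if holding_save and token_amount is not None:
            sql_amount = (
                "INSERT INTO token_amounts (token_id, amount) "
                "VALUES (%s, %s);"
            )
            cursor.execute(sql_amount, tuple(token_amount))
        connection.commit()
    except psycopg2.Error as error:
        connection.rollback()
        raise error
    finally:
        cursor.close()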

Step 5 — Retrieving and saving token prices

The final step in retrieving all Ocean REST API data is retrieving all token prices.

The API endpoint you will use for that is:

https://ocean.defichain.com/v0/mainnet/prices

Start again by defining a get_token_prices() function. Create an empty list named all_pool_prices to hold the pool prices you are about to collect. Make an initial request to the above URL and save the decoded response to pool_prices.

# get pool_prices from ocean api
poolprices_url = "https://ocean.defichain.com/v0/mainnet/prices"

# paginate over all prices in ocean api
# list to store results
all_pool_prices = []
# get first page
while True:
    pool_prices_page_1 = requests.get(poolprices_url, timeout=10)
    if pool_prices_page_1.status_code == 200:
        pool_prices = json.loads(pool_prices_page_1.text)
        break
    time.sleep(1)

The first response will contain a page key in the resulting dictionary. This key holds the value needed to request the next page of token prices from the API.

First, save the previous response to the all_pool_prices list with all_pool_prices.append(pd.json_normalize(pool_prices["data"])). pd.json_normalize() will normalize semi-structured JSON data into a flat table. Next, you will check if the page key exists in pool_prices. If it does, you will construct a new URL to retrieve the next page of token prices. If it doesn't exist, you have retrieved all prices and can exit the loop.

# get rest of pages; the loop runs as long as there is a next page in the dict
_run = True
while _run:
    # append the current page of prices to the list
    all_pool_prices.append(pd.json_normalize(pool_prices["data"]))
    if "page" in pool_prices:
        while True:
            pool_prices_page_n = requests.get(
                f"{poolprices_url}?next={pool_prices['page']['next']}", timeout=10)
            if pool_prices_page_n.status_code == 200:
                pool_prices = json.loads(pool_prices_page_n.text)
                break
            time.sleep(1)
    else:
        _run = False

Now, you will have a list of pandas DataFrames. Concatenate them into one dataframe with pd.concat(all_pool_prices, ignore_index=True). You are not interested in copying the indices, since you will create your own.

Set a new index using the token as the index column with df_prices.set_index("price.token", inplace=True).

Since you won't need all columns, you can condense the data frame to only the id and price.aggregated.amount.

df_prices = df_prices[['id', 'price.aggregated.amount']]

You will be saving these prices directly to PostgreSQL, so you need to rename the columns to match the column names in the postgres table.

df_prices.rename(columns={'price.aggregated.amount': 'price', 'id': 'pair'}, inplace=True)

You will be utilizing the built-in Pandas functionality to save the data frame directly to postgres, using the following code:

df_prices.to_sql(
    'coin_prices',
    con=engine,
    if_exists='append',
    index=True,
    index_label='symbol'
)

This method takes a few arguments: the first is the table name; the second, the engine used to connect to the database server; the third tells pandas to append the rows if the table already exists; the fourth makes sure the index is saved to the table as well; and the last tells pandas which column to write the data frame index to.

The complete code to retrieve all token prices and save them to the database, assembled from the snippets above (it assumes the imports, including pandas as pd, and the engine from the earlier steps):
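def get_token_prices():
    """Retrieve all token prices from the Ocean API and save them to postgres."""
    poolprices_url = "https://ocean.defichain.com/v0/mainnet/prices"
    all_pool_prices = []

    # get the first page
    while True:
        page = requests.get(poolprices_url, timeout=10)
        if page.status_code == 200:
            pool_prices = json.loads(page.text)
            break
        time.sleep(1)

    # get the rest of the pages
    _run = True
    while _run:
        all_pool_prices.append(pd.json_normalize(pool_prices["data"]))
        if "page" in pool_prices:
            while True:
                page = requests.get(
                    f"{poolprices_url}?next={pool_prices['page']['next']}",
                    timeout=10,
                )
                if page.status_code == 200:
                    pool_prices = json.loads(page.text)
                    break
                time.sleep(1)
        else:
            _run = False

    # concatenate all pages into one flat dataframe, indexed on the token
    df_prices = pd.concat(all_pool_prices, ignore_index=True)
    df_prices.set_index("price.token", inplace=True)

    # keep only the columns the postgres table expects and rename them
    df_prices = df_prices[["id", "price.aggregated.amount"]]
    df_prices.rename(
        columns={"price.aggregated.amount": "price", "id": "pair"}, inplace=True
    )

    # save the dataframe to the coin_prices table
    df_prices.to_sql(
        "coin_prices",
        con=engine,
        if_exists="append",
        index=True,
        index_label="symbol",
    )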

With that, you have written all code to retrieve tokens and token prices from the DeFiChain Ocean API. The same thing can be done for retrieving vault data. For reference on how to get vault data, you can check the repo on GitHub!

The final action to take is to run the script:

python get_data.py

Of course, the command above will only retrieve the data once. In your Streamlit Dashboard, you will be interested in how the token amounts and prices change over time. To capture this, you will need to retrieve the data more frequently, for instance, every 10 minutes. Crontab to the rescue! Configure a cronjob (crontab -e) to retrieve the data every 10 minutes.

*/10 * * * * python3 ~/dev/elmer/get_data.py

This concludes part 2 of this series. You can finally work with the data in the third and final part by building a real-time Streamlit Dashboard.

I hope you liked reading this second part of this tutorial series and also learned something new along the way. If you’d like to support my Medium writing, please feel free to give this post a clap or follow me. If you have questions, please leave a comment, and I’ll come back to you!
