Data in Action: How to get Trading Volumes for $USDC PancakeSwap Pool using Tsunami API & Python

Tutorial: USDC PancakeSwap Trading Volumes

PARSIQ · 4 min read · Jul 19, 2023

Offering nearly instant historical and real-time data on the most popular blockchains, the Tsunami API is a powerful tool available to Web3 developers. Not only does it provide fast and reliable data, but it’s also easy to use!

In fact, we’ve specifically designed it to be simple. We aim to provide developers with the resources to easily create the platform they envision. That’s why we offer things like our ABI Decoder, JS Client, and a comprehensive NFT Data Lake. We also make the use cases plain and simple: how our endpoints are used, how our data serves DEXs, and why fast, accurate, and reliable data matters!

Today, we have our very own Ivan Ivanitskiy offering us a helpful tutorial! He demonstrates just how easy it is to use the Tsunami API to get the valuable data you’re looking for.

The tutorial below focuses on how to get USDC trading volumes on PancakeSwap using simple Python scripts. The first video gives an overview of the process, while the second features Ivan walking you through it in detail. Below the videos you’ll find the Python scripts, which you can copy, paste, and begin using yourself!

Ready to sign up?

How to get trading volumes for USDC PancakeSwap pool using PARSIQ’s Tsunami API and simple Python scripts

For a quick overview of the process, have a look at this video:

Now that we’ve seen how simple it is, here’s Ivan walking us through the steps:

Starting to use the Tsunami API is really this easy!

To see everything that the Tsunami API can do, check out our docs here. And make sure to bookmark the page and visit regularly for updates!

Python scripts

And finally, as promised, below you’ll find the Python scripts used in the video, so you can try them on your own!

Request API
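This script pages through all USDC transfers for the pool over the chosen time window and appends each page of results to a CSV file: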

import requests
import csv

# Define the API endpoint and the bearer token
# BUSD/WBNB Pool: 0x58f876857a02d6762e0101bb5c46a8c1ed44dc16, BUSD: 0xe9e7cea3dedca5984780bafc599bd69add087d56
# USDC/WBNB Pool: 0xd99c7f6c65857ac913a8f880a4cb84032ab2fc5b, USDC: 0x8ac76a51cc950d9822d68b83fe1ad97b32cd580d
api_url = 'https://api.parsiq.net/tsunami/eip155-56/v1/address/0xd99c7f6c65857ac913a8f880a4cb84032ab2fc5b/transfers?timestamp_start=1671019200&timestamp_end=1678795200&contract=0x8ac76a51cc950d9822d68b83fe1ad97b32cd580d'

# Set the headers to include the bearer token
bearer_token = 'YOUR_BEARER_TOKEN'
headers = {'Authorization': 'Bearer ' + bearer_token}

# Define the pagination and offset parameters
per_page = 1000
offset = None
has_more_data = True
header_written = False

# Make requests until all data is retrieved
while has_more_data:
    # Set the pagination parameters in the query string
    params = {'per_page': per_page, 'offset': offset}

    # Make the API request with the headers and pagination parameters
    response = requests.get(api_url, headers=headers, params=params)

    # Parse the JSON response
    json_data = response.json()

    # Set the offset to the value returned in the response
    offset = json_data['range']['next_offset']
    items = json_data['items']
    # Stop if the page came back empty
    if not items:
        break
    fieldnames = items[0].keys()
    print(fieldnames)

    # Append the page of results to the CSV file, writing the header only once
    with open('USDC_3_months.csv', 'a', newline='') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        if not header_written:
            writer.writeheader()
            header_written = True
        for row in items:
            writer.writerow(row)

    if offset is None:
        has_more_data = False
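For longer date ranges the loop can fire off many consecutive requests, so in practice you may want some basic error handling around each call. The helper below is a minimal, hypothetical sketch; the retry policy and the API’s exact error behavior are assumptions, not part of the tutorial:

import time
import requests

# Hypothetical helper: retry a GET request a few times before giving up.
# The attempt count and backoff values are arbitrary; tune them to your
# own rate limits.
def get_with_retries(url, headers, params, attempts=3, backoff=2.0):
    for attempt in range(attempts):
        response = requests.get(url, headers=headers, params=params)
        if response.status_code == 200:
            return response
        # Wait a little longer after each failed attempt
        time.sleep(backoff * (attempt + 1))
    # Raise the last HTTP error if all attempts failed
    response.raise_for_status()
    return response

You would then replace the direct requests.get(...) call in the loop with get_with_retries(api_url, headers, params).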

Plot Volume
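This script reads the raw transfers from the CSV, sums the amounts into six-hour buckets, saves the accumulated series to a second CSV, and plots it with matplotlib: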

import csv
import datetime
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

# Open the CSV file and read the data into lists
currency = 'USDC'
source_file_name = currency + '_3_months.csv'
with open(source_file_name, 'r') as csvfile:
    reader = csv.reader(csvfile)
    header = next(reader)
    timestamps = []
    amounts = []
    for row in reader:
        timestamps.append(float(row[4]))  # transfer timestamp (Unix seconds)
        amounts.append(float(row[9]))     # raw token amount

# Accumulate the amounts into fixed-length time intervals
accumulated_amounts = [0]
interval_start = timestamps[0]
accumulated_datetimes = [datetime.datetime.fromtimestamp(interval_start)]
time_step = 21600  # 6 hours
# time_step = 1200  # 20 minutes
decimals = 18  # Binance-Peg USDC uses 18 decimals
j = 0
for i in range(len(timestamps)):
    accumulated_amounts[j] += amounts[i] / pow(10, decimals)
    if (timestamps[i] - interval_start) >= time_step:
        j += 1
        interval_start = timestamps[i]
        accumulated_datetimes.append(datetime.datetime.fromtimestamp(interval_start))
        accumulated_amounts.append(0)

# Create a CSV file and write the arrays to it
result_file_name = currency + '_3_months_accumulated.csv'
with open(result_file_name, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)

    # Write the header row
    writer.writerow(['datetimes', 'accumulated_amounts'])

    # Write the data rows
    for i in range(len(accumulated_datetimes)):
        writer.writerow([accumulated_datetimes[i], accumulated_amounts[i]])

# Plot the data using matplotlib
plt.plot(accumulated_datetimes, accumulated_amounts)
# Set the y-axis tick formatter to use integers
plt.gca().yaxis.set_major_formatter(ticker.FuncFormatter(lambda value, pos: int(value)))
# Add labels and a title to the plot
plt.xlabel('Date and time')
plt.ylabel('Amount in 6 hour interval, USDC')
plt.title('Amounts of USDC/WBNB deals')
plt.grid(True)

# Display the plot
plt.show()
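As a side note, if you already use pandas, the same six-hour bucketing can be done with resample. This is just an alternative sketch, assuming the raw CSV produced by the first script, with the timestamp in column 4 and the raw amount in column 9, as above:

import pandas as pd
import matplotlib.pyplot as plt

# Load the raw transfers written by the first script
df = pd.read_csv('USDC_3_months.csv')

# Column positions follow the indices used above: 4 = timestamp, 9 = amount
df['dt'] = pd.to_datetime(df.iloc[:, 4], unit='s')
df['amount'] = df.iloc[:, 9].astype(float) / 10**18

# Sum the amounts into 6-hour buckets and plot the series
volume = df.set_index('dt')['amount'].resample('6H').sum()
volume.plot(title='Amounts of USDC/WBNB deals')
plt.xlabel('Date and time')
plt.ylabel('Amount in 6 hour interval, USDC')
plt.grid(True)
plt.show()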

About PARSIQ

PARSIQ is a full-suite data network for building the backend of all Web3 dApps & protocols. The Tsunami API provides blockchain protocols and their clients (e.g. protocol-oriented dApps) with real-time and historical data querying abilities. ‘Data Lake’ APIs allow complex data querying and filtering for any project, specifically designed and tailored for our customers’ blockchain data needs.

Supported chains: Ethereum, Polygon, BNB Chain, Avalanche, Arbitrum, and Metis.

Website | Blog | Twitter | Telegram | Discord | Reddit | YouTube | Link3.to
