A Guide to Fetching World Policy Interest Rates from BIS in R

Dima Diachkov · Published in Data And Beyond · 7 min read · Sep 10, 2023

In the realm of financial analysis and economic research, accurate and up-to-date data on world interest rates is crucial. These rates influence everything from investment decisions to policy-making.

You may think that, in order to get timely key policy interest rate data, you need to pay S&P, Refinitiv, Bloomberg, or some other data provider. But you would be wrong.

In this tutorial, we’ll walk through the process of obtaining world interest rate data from the Bank for International Settlements (BIS), a reputable source of financial and economic data, using R, our favorite tool.

Credits: Unsplash | Aleksandr Popov

What is BIS?

The Bank for International Settlements (or BIS) is an international financial institution that serves as a bank for central banks. That is why it has a lot of data regarding central banking, banking, financial instruments, and so on. It facilitates cooperation among central banks and financial authorities, provides banking services to international organizations, and conducts research on monetary and financial stability.

BIS provides a wide range of economic and financial data, including interest rates, exchange rates, and more. To access their data, visit the BIS website and navigate to the “Statistics” section. For policy rates, just jump to https://www.bis.org/statistics/cbpol.htm

BIS webpage for parsing interest rates

Just hover your mouse over the XLSX or CSV file to copy its link, e.g. https://www.bis.org/statistics/cbpol/cbpol.xlsx or https://www.bis.org/statistics/full_cbpol_d_csv_col.zip.

The file names are static, so you only need to tune your code once and can then reuse it any time without further fine-tuning.

Of course, you could save these files and clean them up manually in Excel, but that is not how we work here. We automate, as we always do.
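For instance, since the links are stable, you could pull the CSV version straight from R. Below is a minimal sketch, not part of the main flow, assuming the zip archive contains the CSV as its first file (judging by the file name, it holds daily rather than monthly data):

# Sketch: download the CSV archive and read the first file it contains
csv_url <- "https://www.bis.org/statistics/full_cbpol_d_csv_col.zip"
zip_file <- tempfile(fileext = ".zip")
download.file(csv_url, zip_file, mode = "wb") # binary mode matters on Windows

csv_name <- unzip(zip_file, list = TRUE)$Name[1] # first file in the archive
daily_rates <- read.csv(unz(zip_file, csv_name), stringsAsFactors = FALSE)
head(daily_rates)

In the rest of this tutorial we stick to the XLSX file, which already comes with a convenient monthly sheet.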

First, if you open the file, you will see three sheets, one of which holds monthly policy rate data by country (the sheet is called “Monthly Series”).

Structure of the Monthly time series with rates

Now we know what we want to get and from where. Let’s write our code. All we need to do is open the Excel file from the URL above and take the data from the sheet “Monthly Series”.

Parsing

We need the readxl package to open the Excel file, and we will use the httr package to save the file locally, which makes reading it easier.

# Load the readxl and httr package
library(readxl)
library(httr)

Let’s start by feeding in the path to the right file on the internet (1). Then we specify which sheet we need to read (2). And finally, when all is set, we write the file into a temporary file (3).

# 1. URL to the Excel file
excel_url <- "https://www.bis.org/statistics/cbpol/cbpol.xlsx"

# 2. Specify the sheet name or index you want to read (e.g., "Data" or 1 for the first sheet)
sheet_name <- "Monthly Series"

# 3. Create temp file for parsed Excel file
GET(excel_url, write_disk(tf <- tempfile(fileext = ".xlsx")))
Output for the code above
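As a side note, you can make the download step a bit more defensive. This is optional and not part of the original flow: httr can raise an error if the request failed, and base R’s download.file works as a fallback if you prefer to skip httr entirely.

# Optional: stop early if the request failed (e.g. network or proxy issues)
resp <- GET(excel_url, write_disk(tf <- tempfile(fileext = ".xlsx"), overwrite = TRUE))
stop_for_status(resp)

# Base R fallback without httr; mode = "wb" keeps the binary .xlsx intact
# download.file(excel_url, tf, mode = "wb")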

Alright, now we can read the particular sheet (4) and have a first look at the data (5). We can see below that country-level data is available: the leftmost column holds the time period, and each of the remaining columns carries the values for one country.

# 4. Read the Excel file
raw_data <- read_excel(tf, sheet = sheet_name)

# 5. View the first few rows of the imported data
head(raw_data)
Output for the code above
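Before cleaning anything, it can also help to check how big the sheet is and what the headers look like at this point; a quick sketch:

# Quick sanity check: dimensions and the first few column names
dim(raw_data)
head(names(raw_data))

This makes it easier to see why the first few rows get in the way, which we fix next.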

Cleaning

Let’s perform necessary data cleaning steps, such as handling missing values or formatting inconsistencies.

It seems that the first 3 rows mess up our plan. We need to skip them during import (6) and then check out our data again (7).

# 6. Read the Excel file without 3 first rows
raw_data <- read_excel(tf, sheet = sheet_name, skip = 3)

# 7. View the first few rows of the imported data
head(raw_data)
Output for the code above

Great, even the formatting issue is no longer a problem. The data is here and we can do our analysis. Each country has its own column, whose name starts with “M:” followed by the country code (“M:” means “monthly”).
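One optional extra step, not strictly needed for what follows: readxl typically imports the Period column as a datetime, so you may want to coerce it to a plain Date and see how complete each country’s series is. A minimal sketch in base R:

# Optional: coerce Period to a plain Date (readxl usually imports it as a datetime)
raw_data$Period <- as.Date(raw_data$Period)

# Count missing values per country column to spot shorter series
colSums(is.na(raw_data[-1]))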

Analysis

Let’s have a look at something, e.g. US vs EU vs UK policy rates. You can choose any column, depending on your interest.

# 8. Select only 3 columns with dplyr
library(dplyr)
selected_data <- raw_data %>%
  select(Period, `M:US`, `M:XM`, `M:GB`)

# If you explore the dataset, you will easily see that XM is the euro area code, while the others are intuitive

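If the “M:XX” codes feel clumsy to type, you could rename the selected columns to something friendlier; this is just a sketch, and the rest of the tutorial keeps the original BIS codes:

# Optional: friendlier column names (the plots below keep the original BIS codes)
renamed_data <- selected_data %>%
  rename(US = `M:US`, Eurozone = `M:XM`, UK = `M:GB`)
head(renamed_data)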
Now let’s use our knowledge of ggplot2 and plot the time series. My previous article about the basics of time series plotting with ggplot2 in R is available here.

Now, considering that we have the dataset, we can plot it. For this data structure, I prefer to map each area to a color with the scale_color_manual function.

# 9. Create the time series plot
library(ggplot2)

ggplot(selected_data, aes(x = Period)) +
  theme_bw() +
  geom_line(aes(y = `M:US`, color = "United States")) +
  geom_line(aes(y = `M:XM`, color = "Eurozone")) +
  geom_line(aes(y = `M:GB`, color = "UK")) +
  scale_color_manual(values = c("United States" = "blue", "Eurozone" = "green", "UK" = "red")) +
  labs(
    x = "Period",
    y = "Value",
    title = "Policy Rates in Major Economies",
    color = "Area"
  )
Output for the code above
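As an aside, the same chart can be produced by reshaping the three series into long format and letting ggplot2 map the color to the series name; a sketch (the legend then shows the raw BIS codes rather than the manual labels used above):

# Alternative: long format, a single geom_line, color mapped to the series name
library(tidyr)
selected_long <- selected_data %>%
  pivot_longer(cols = -Period, names_to = "area", values_to = "rate")

ggplot(selected_long, aes(x = Period, y = rate, color = area)) +
  geom_line() +
  theme_bw() +
  labs(x = "Period", y = "Value", title = "Policy Rates in Major Economies", color = "Area")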

Let’s turn our data around and look at a scatter plot view. Let’s compare how policy rates have changed from the end of 2022 to the most recent data point available, 2023-08-31. To do that, we will filter the dataset first and then simply transpose it.

# 10. Filter the dataframe to include only the desired periods
library(tidyr)
filtered_data <- raw_data %>%
  filter(as.Date(Period) == "2022-12-31" | as.Date(Period) == "2023-08-31")

# Transpose the dataframe
transposed_data <- filtered_data %>%
  pivot_longer(cols = -c(Period)) %>%
  pivot_wider(names_from = Period, values_from = value)

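Before plotting, we can already answer the “who tightened the most” question numerically. A small sketch computing the change between the two dates (this assumes the pivoted columns are named exactly "2022-12-31" and "2023-08-31", which depends on how the Period values are printed):

# Sketch: change in policy rate between the two dates, biggest movers first
transposed_data %>%
  mutate(change = `2023-08-31` - `2022-12-31`) %>%
  arrange(desc(change)) %>%
  head(10)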
After this, let’s look at the data.

# 11. Create a scatter plot
ggplot(transposed_data, aes(x = `2022-12-31`, y = `2023-08-31`)) +
  geom_point() +
  labs(
    x = "Value on 2022-12-31",
    y = "Value on 2023-08-31",
    title = "Scatter Plot of Values on Different Dates"
  )

Wow. We seem to have an outlier. Let’s add country labels.

# 12. Create a scatter plot with labels
ggplot(transposed_data, aes(x = `2022-12-31`, y = `2023-08-31`, label = name)) +
  geom_point() +
  geom_text(vjust = -0.5, hjust = 0.5) + # Adjust text position
  labs(
    x = "Value on 2022-12-31",
    y = "Value on 2023-08-31",
    title = "Scatter Plot of Values on Different Dates"
  )
Output for the code above

Well, it looks like Argentina has pretty high rates. Let’s filter it out to see how all other countries are distributed.

# 13. Create a scatter plot with labels
scatter_plot <- ggplot(transposed_data %>% filter(name != "M:AR"),
                       aes(x = `2022-12-31`, y = `2023-08-31`, label = name)) +
  geom_text(size = 5, vjust = -0.5, hjust = 0.5) +
  labs(
    x = "Value on 2022-12-31",
    y = "Value on 2023-08-31",
    title = "Scatter Plot of All Columns"
  )
scatter_plot
Output for the code above

Now, let’s highlight the three areas we looked at before, namely the US, the Eurozone, and the UK, and add a 45-degree line to show the migration over 2023.

# 14.
# Create a new column to specify colors based on the condition
transposed_data <- transposed_data %>%
  filter(name != "M:AR") %>%
  mutate(color = ifelse(name %in% c("M:US", "M:XM", "M:GB"),
                        name,
                        "Other"))

# Create a scatter plot with labels and custom colors
ggplot(transposed_data, aes(x = `2022-12-31`, y = `2023-08-31`, label = name, color = color)) +
  geom_text(size = 2.5, vjust = -0.5, hjust = 0.5) +
  scale_color_manual(
    values = c("M:US" = "blue", "M:XM" = "green", "M:GB" = "red", "Other" = "black"),
    guide = "none" # Remove the color legend
  ) +
  geom_abline(intercept = 0, slope = 1, size = 0.5, linetype = "dashed") +
  labs(
    x = "Value on 2022-12-31",
    y = "Value on 2023-08-31",
    title = "Scatter Plot of Selected Columns"
  )
Output for the code above

Wrap-up

That should conclude our process. We successfully obtained BIS data directly from their website, read and comprehended it, and even performed some preliminary analyses. You now have access to this dataset for your research whenever you need it.

Please clap 👏 and subscribe if you want to support me. Thanks!❤️‍🔥
