With the recent COVID-19 pandemic, the quarantine and social-distancing measures implemented by various governments created ripple effects in cities all around the world. The key driver was the halting of economic activity, which brought about a sharp crunch in the global economy. In the early phase of the pandemic, the media reported that carbon emissions in China were reduced drastically in mid-February 2020. …

The Thompson Sampling algorithm utilises a Bayesian probabilistic approach to modelling the reward distribution of the various arms. As a short summary, Bayes' rule is simply formulated as the following:

P(θ|D) = P(D|θ) · P(θ) / P(D)

where `D` represents the data observed, `P(θ|D)` is our posterior, `P(D|θ)` is the likelihood of observing the data given θ, and `P(θ)` is the prior belief on the distribution of θ.

In our previous analysis (Epsilon-Greedy, Softmax, Upper Confidence Bound), we assumed that each arm can be modelled as a Bernoulli distribution, with θ_arm representing the probability of a successful reward on each trial. …
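To make the Beta-Bernoulli mechanics concrete, here is a minimal Python sketch of Thompson Sampling on two Bernoulli arms. The function name, arm counts, and true reward rates are all illustrative, not taken from the article's own code; the Beta(1, 1) prior is the usual uniform starting point.

```python
import random

def thompson_sample(successes, failures):
    """Pick an arm by sampling from each arm's Beta posterior.

    successes[i] / failures[i] count observed rewards for arm i;
    under a uniform Beta(1, 1) prior the posterior is Beta(1 + s, 1 + f).
    """
    draws = [random.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return draws.index(max(draws))

# Simulate two Bernoulli arms with (hypothetical) reward rates 0.1 and 0.9.
random.seed(42)
true_rates = [0.1, 0.9]
successes, failures = [0, 0], [0, 0]
for _ in range(1000):
    arm = thompson_sample(successes, failures)
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

# After enough rounds the better arm should dominate the pulls.
pulls = [s + f for s, f in zip(successes, failures)]
print(pulls)
```

Because each round samples from the posteriors rather than comparing point estimates, exploration fades naturally as the posteriors sharpen.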

The Upper Confidence Bound (UCB) algorithm is often summarised as “optimism in the face of uncertainty”. To understand why, consider that at any given round, each arm’s expected reward can be treated as a point estimate based on the average reward observed so far. Drawing intuition from confidence intervals, we can wrap some form of uncertainty boundary around each point estimate. In that sense, we have both a lower boundary and an upper boundary for each arm.

The UCB algorithm is aptly named because we are only concerned with the upper bound, given that we are…
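The optimistic selection rule can be sketched in a few lines of Python. This is the classic UCB1 bonus term; the function name and the example counts are illustrative, not the article's own code.

```python
import math

def ucb1_select(counts, values, t):
    """Pick the arm with the highest upper confidence bound.

    counts[i]: times arm i has been pulled; values[i]: its average reward;
    t: total pulls so far. Untried arms are pulled first.
    """
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    # UCB1: point estimate + exploration bonus that shrinks with pulls
    bounds = [v + math.sqrt(2 * math.log(t) / n)
              for v, n in zip(values, counts)]
    return bounds.index(max(bounds))

# An arm with few pulls gets a wide bonus, so it can beat a
# better-looking but well-explored arm.
counts, values = [100, 5], [0.6, 0.5]
print(ucb1_select(counts, values, t=105))  # → 1 (the under-explored arm wins)
```

The under-explored arm's bound (about 1.86) exceeds the well-explored arm's (about 0.91), which is exactly the optimism the name describes.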

Moving beyond the Epsilon Greedy algorithm, the Softmax algorithm offers a further refinement: it biases exploration towards arms with higher observed rewards, enhancing the chance of rewards while exploring.

To get a better intuition, consider the following two cases of 2 Bernoulli arms:

- The first arm has 0.1 reward average while the other has 0.9 reward average.
- The first arm has 0.1 reward average while the other has 0.11 reward average.

Using Epsilon Greedy in both cases, for a specified epsilon percentage of exploration trials, the algorithm chooses randomly between the two arms with equal probability in both situations, regardless of how different their average rewards are.
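Softmax addresses this by making the selection probabilities proportional to the (exponentiated) average rewards. A minimal sketch in Python, using the two cases above; the function names and the temperature value are illustrative assumptions:

```python
import math
import random

def softmax_probs(avg_rewards, temperature=0.1):
    """Selection probabilities proportional to exp(average reward / temperature)."""
    prefs = [math.exp(r / temperature) for r in avg_rewards]
    total = sum(prefs)
    return [p / total for p in prefs]

def softmax_select(avg_rewards, temperature=0.1):
    """Draw an arm according to the softmax probabilities."""
    probs = softmax_probs(avg_rewards, temperature)
    return random.choices(range(len(probs)), weights=probs)[0]

# Case 1: 0.1 vs 0.9 -- the better arm gets nearly all exploration.
print(softmax_probs([0.1, 0.9]))    # second probability is near 1
# Case 2: 0.1 vs 0.11 -- the split stays close to 50/50.
print(softmax_probs([0.1, 0.11]))
```

The temperature parameter controls how sharply the probabilities concentrate on the best arm: high temperatures approach uniform random choice, low temperatures approach pure greed.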

…

The Epsilon Greedy algorithm is one of the key algorithms in decision science, and embodies the balance of exploration versus exploitation. The dilemma between exploration and exploitation can be defined simply as:

- Exploitation: based on what you know of the circumstances, choose the option/action that has the best average return.
- Exploration: recognise that what you know of the different options may be limited, and choose to engage in options that may potentially reveal themselves to be of high return.

By convention, “epsilon” represents the percentage of time/trials dedicated to exploration, and it is also typical to do random exploration…
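The explore/exploit split can be sketched in a few lines of Python. The function name and the reward averages are illustrative, not the article's own code:

```python
import random

def epsilon_greedy_select(avg_rewards, epsilon=0.1):
    """With probability epsilon, explore a uniformly random arm;
    otherwise exploit the arm with the best observed average reward."""
    if random.random() < epsilon:
        return random.randrange(len(avg_rewards))   # explore
    return avg_rewards.index(max(avg_rewards))      # exploit

# With epsilon = 0.1, roughly 10% of pulls are random exploration,
# so the better arm receives about 95% of pulls here.
random.seed(0)
pulls = [epsilon_greedy_select([0.1, 0.9]) for _ in range(1000)]
print(pulls.count(1) / 1000)
```

Note that during the exploration fraction the arms are chosen uniformly, which is exactly the behaviour the Softmax algorithm later refines.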

If you have been following the NBA 2019/20 season, you may have heard about Andre Drummond’s recent trade and the lack of interest in him from other teams during the phase of trade rumours. A center like Andre Drummond has strong post-up offensive skills and rebounding capabilities, but that profile may not fit the modern NBA game, which demands 3-point shooting and floor spacing. A motivation for this analysis is to examine how the big-man positions have evolved to fit current systems with respect to their 3-point shooting abilities.

With the previous article…

With the data from the ESPN NBA website for the 2018–2019 regular NBA season, I will apply Bayesian hierarchical modelling to estimate the 3-point percentages across the different basketball positions.

The ESPN dataset comprises player stats with position assignments as listed in the following:

- PG: Point Guard
- SG: Shooting Guard
- G: Guard (probably for players who can play both PG and SG)
- SF: Small Forward
- PF: Power Forward
- F: Forward (probably for players who can play both SF and PF)
- C: Center

Each act of 3 point shooting is a dichotomous action: for…
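Since each attempt is Bernoulli, a position's make/attempt totals pair naturally with a Beta prior. As a minimal, non-hierarchical sketch of that conjugate update (the full model in the article shares information across positions; the counts below are made up for illustration, not ESPN data):

```python
# Illustrative (made-up) 3PT makes/attempts per position -- not the ESPN data.
data = {"PG": (120, 330), "C": (10, 45)}

def beta_posterior(makes, attempts, a=1.0, b=1.0):
    """Conjugate update: a Beta(a, b) prior with a Binomial likelihood
    yields a Beta(a + makes, b + misses) posterior on the 3PT percentage."""
    return a + makes, b + (attempts - makes)

for pos, (makes, attempts) in data.items():
    a, b = beta_posterior(makes, attempts)
    print(pos, a / (a + b))   # posterior mean 3PT percentage
```

A hierarchical model goes one step further by placing a shared prior over the position-level parameters, so positions with few attempts (e.g. centers) are shrunk towards the league-wide rate.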

A seasonal autoregressive integrated moving average (SARIMA) model is one step beyond an ARIMA model: it incorporates seasonal trends. In many time series, seasonal effects frequently come into play. Take, for example, the average temperature measured in a location with four seasons. There will be a seasonal effect on a yearly basis, and the temperature in a particular season will typically correlate strongly with the temperature measured in the same season the previous year.

```r
library(astsa)
plot(tempr)  # LA temperature measured from 1970 to 1980
```

The plot above shows the yearly cyclical rise and fall…

In some time series, the variance can grow markedly over the course of the period observed. Transformations can be applied to the time series, in particular the log transformation, to stabilise the variance and coerce some form of stationarity.

One key example is the **Paleoclimatic Glacial Varve** time series available in the **astsa** package. It measures the amount of sedimentary deposit from melting glaciers that ends up in a particular location; these measurements are greater in warmer years (due to increased melting of the glaciers).

```r
library(astsa)
plot(varve)
```

Notice how in some particular…

The I in ARIMA stands for “integrated”, and it refers to differencing the time series. Differencing is often used to eliminate trends in a time series and make it stationary, and is best illustrated with some examples of moving trends.

Recall that for a time series to be stationary, it needs to have a mean value function that is constant and thus *time independent.*

Assume a time series *x_t* is composed of:

- a non-stationary linear trend component *u_t* = β₀ + β₁·t, and
- a zero-mean stationary time series *y_t*,

so that *x_t* = *u_t* + *y_t*. Applying the concept of differencing would result in the following:

∇x_t = x_t − x_{t−1} = β₁ + y_t − y_{t−1}

which has a constant mean β₁ and is therefore free of the trend.
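A quick numerical sketch of why first differencing removes a linear trend. The slope and noise below are synthetic values chosen for illustration:

```python
import random

random.seed(1)
# x_t = linear trend (slope 0.5) + zero-mean stationary noise y_t
n = 500
y = [random.gauss(0, 1) for _ in range(n)]
x = [0.5 * t + y[t] for t in range(n)]

# First difference: x_t - x_{t-1} = 0.5 + y_t - y_{t-1}
diff = [x[t] - x[t - 1] for t in range(1, n)]

# The mean of the differenced series sits near the slope 0.5,
# i.e. the growing trend has been reduced to a constant level.
mean_diff = sum(diff) / len(diff)
print(mean_diff)
```

The differenced series has a constant mean, so it is a far better candidate for stationary modelling than the raw trending series.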