Fourier Transforms Improve The Performance Of A Machine Learning Classifier

Aswin Vijayakumar.
Published in Nerd For Tech
3 min read · Feb 5, 2019


Introduction

Recently, I have been working on developing a dataset for promoting learning. What I would like to highlight from my findings on classifying the neighbourhood of a building layout drawn from a distribution is that the feature trawling and feature identification processes are drastically improved by using Fourier transforms. The synthetic data I use relies on Gaussian and uniform distributions to generate the dimensions in each data row.

In the example, I used a LinearSVR regressor to fit the domain knowledge of all the generated bounded areas into a single regression model.

I have transformed the labels into symbolic values that originate from a Fourier transform.

The algorithm seems to improve its training accuracy, provided its initial implementation works off the shelf.

A Fourier transform is generated using simple code, given as:

import numpy as np

# classes ranging from 0 to 9
np.fft.fft(np.arange(0, 10, 1))

The Fourier transform is helpful for deciding how to derive the training label from the originally assigned labels. In my case this was simple, because the neighbourhood classification value originated from formulated values. My idea was to use the dimensions of the windows, doors, and transits (the open passages) and correlate them with two separate metrics that capture our knowledge of the neighbourhood.
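
As a rough illustration of the pipeline described above, here is a minimal sketch using scikit-learn's LinearSVR and the Fourier transform of the class ids. The dimension columns, the rule that formulates the ten classes, and the use of the real part of the FFT values are illustrative assumptions, not the original dataset or method.

import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)

# Synthetic rows: window, door and transit (open passage) dimensions drawn
# from Gaussian and uniform distributions, as stand-ins for the real dataset.
n_samples = 1000
X = np.column_stack([
    rng.normal(1.2, 0.3, n_samples),   # window width
    rng.normal(2.0, 0.2, n_samples),   # door height
    rng.uniform(0.8, 3.0, n_samples),  # transit width
])

# Made-up rule that formulates ten neighbourhood classes (0..9) from the
# dimensions, only so the sketch has learnable structure.
score = X[:, 0] + 0.5 * X[:, 1] + 0.25 * X[:, 2]
y_classes = np.digitize(score, np.quantile(score, np.linspace(0.1, 0.9, 9)))

# Symbolic regression targets derived from a Fourier transform of the class ids.
fft_values = np.fft.fft(np.arange(0, 10, 1))
y_fourier = fft_values.real[y_classes]

reg = LinearSVR(max_iter=10000)
reg.fit(X, y_fourier)
print("training R^2:", reg.score(X, y_fourier))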

The reason for such a drastic improvement in the accuracy score is the Jacobian matrix. The Jacobian collects the first-order partial derivatives of a function, and its determinant measures how much the function changes volume locally. In the gradient descent example, the derivative of the cost function can be represented as the product of the Jacobian matrix and the function in parametric form.

My algorithm improved from an error rate of 2.x% to 1.x%.

More Explanation

The Jacobian matrix is expressed as:

Jacobian is the first derivative of a differentiable function

Here, the Jacobian is the derivative of the function being differentiated, in this case the cost function, with respect to the input features (x).
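
The equation image from the original post is not reproduced here; as a standard reconstruction consistent with the caption, the Jacobian of a differentiable function f(x) with components f_i collects its first partial derivatives with respect to the input features x_j:

J_{ij} = \frac{\partial f_i(x)}{\partial x_j}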

Eigenvalues of Hessian

The Hessian is the second derivative of the function, which implies that for an eigenvector d of the Hessian H, the second derivative along the direction d can be written as:

Second derivative along the eigenvector d of the Hessian
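
The corresponding equation image is also omitted; in standard notation (my reconstruction, assuming d is a unit eigenvector of H with eigenvalue lambda), the second derivative of the cost function along d is:

d^{\top} H \, d = \lambda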

Hence, taking the Taylor series expansion of the cost function, it can be written as:

Taylor series expansion of the cost function
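
A standard reconstruction of that expansion, assuming x^{(0)} is the current point and g and H are the gradient and Hessian of the cost function f evaluated there, is:

f(x) \approx f(x^{(0)}) + (x - x^{(0)})^{\top} g + \frac{1}{2} (x - x^{(0)})^{\top} H \, (x - x^{(0)})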

Hence our learning rate will become:

Learning rate of gradient descent involving the Jacobian and Hessian
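
Substituting a gradient step x = x^{(0)} - \varepsilon g into the expansion above and minimising over \varepsilon gives the usual optimal step size (again my reconstruction, not the original image):

\varepsilon^{*} = \frac{g^{\top} g}{g^{\top} H g}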

Demonstration

In this example the classes have been carefully selected: features relating to opposite values are signed negative and positive while keeping the magnitude the same. After taking the fast Fourier transform of the half of the values that have a negative counterpart, the final transformation gives a performance improvement.
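
A minimal sketch of that transformation follows; the magnitudes and the sign-mirroring rule are illustrative assumptions, not the original class values.

import numpy as np

# Hypothetical class values: each positive magnitude has a negative
# counterpart of the same magnitude.
magnitudes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
classes = np.concatenate([magnitudes, -magnitudes])

# FFT over the half of the values that have a negative counterpart...
half_transform = np.fft.fft(magnitudes)

# ...then mirror the sign to get the final regression target for every class.
targets = np.concatenate([half_transform.real, -half_transform.real])
print(dict(zip(classes.tolist(), np.round(targets, 3).tolist())))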

It is clear that when we change our regression target values we do get a performance improvement, but the improvement is most notable with Fourier-transformed values.



Project, technical details and standards for Computer Vision and Data Science. Contact: aswinkvj@klinterai.com.