Radar Data — Architectures and Ensembling

Shaul Solomon · Gradient Ascent · Nov 25, 2020

This is the 6th article in our MAFAT competition series, where we give an in-depth look at the different aspects of the challenge and our approach to it. Take a look at the posts covering the introduction, the dataset, augmentations, visualizing signal data, and streaming pipelines.

You’ve made it this far — Congratulations!

With the data preprocessed, augmented, and pipelined, we are ready to feed it into our Neural Network. But which architecture should we use?

Applying the Fourier transform to the IQ matrix and inserting the doppler burst matrix gave us a very useful spectrogram (the logic is identical for the scalograms). If we take the spectrogram/scalogram at face value as an image that reflects the movement of the unknown object, a CNN is the first reasonable architecture "family" to try. (The go-to strategy in Data Science projects is to start with the simpler models, the "low-hanging fruit", and work your way up from there.)

For all of the subsequent models we use:

- Loss: Binary Cross-Entropy
- Optimizer: Adam
- Metric: AUC

The Base Model

The first basic model was actually provided by the creators of the competition themselves: a relatively simple CNN with two convolutional layers followed by three fully-connected layers. (Written below in Keras/TF)

# Taken from the code given to us
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def create_model(input_shape, init):
    """
    CNN model.

    Arguments:
    input_shape -- the shape of our input
    init -- the weight initialization

    Returns:
    CNN model
    """
    model = Sequential()
    model.add(Conv2D(16, kernel_size=(3, 3), activation='relu',
                     kernel_initializer=init, bias_regularizer='l2',
                     input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                     kernel_initializer=init, bias_regularizer='l2'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(128, kernel_regularizer='l2', activation='relu', kernel_initializer=init))
    model.add(Dense(32, kernel_regularizer='l2', activation='relu', kernel_initializer=init))
    model.add(Dense(1, activation='sigmoid', kernel_initializer=init))
    return model
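For completeness, here is a minimal sketch of how this model would be compiled with the shared settings above. The input shape, initializer, and learning rate are illustrative assumptions, not necessarily the competition's exact values:

import tensorflow as tf

# Compile with the shared setup: BCE loss, Adam optimizer, AUC metric.
model = create_model(input_shape=(126, 32, 1), init='he_normal')
model.compile(
    loss='binary_crossentropy',
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    metrics=[tf.keras.metrics.AUC(name='auc')],
)
model.summary()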

While the model performed extremely well (.94 AUC) on the validation set, it didn't do very well on the test data (~.74 AUC), which seemed to imply that it wasn't learning the more generalized features. In such a case the two main approaches are to either increase the data size (we had already exhausted the straightforward solutions to that issue) or to build a more complex model.

Small AlexNet

While there are many pretrained models for image classification, we didn't want to use a pretrained model (our style of image is quite different from the classic CIFAR-10-type datasets), and we didn't want to jump straight to the largest models (ResNet/SENet), both because of the principle mentioned above about slowly building up complexity and because our limited dataset could not support training too large an architecture.

Our first choice was the smallest established image-classification model, AlexNet, but even then we wanted a smaller version. So we took the exact same architecture structure and simply halved the number of neurons in each layer. (Written in PyTorch)

The only real change that needed to be made was to swap the final layer from classifying 1,000 classes to producing a single output. As we are also using BCE, the final activation is the sigmoid function, squashing the score into [0, 1].
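For reference, a minimal sketch of such a halved AlexNet; the channel widths and the classifier head follow the summary below, while the exact strides and paddings are assumptions:

import torch
import torch.nn as nn

# A "half-width" AlexNet sketch: the same conv/pool structure with
# roughly half the channels, a single-channel spectrogram input, and a
# single sigmoid output for BCE. Strides/paddings are illustrative.
class SmallAlexNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(32, 128, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(256, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(), nn.Linear(128 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 1),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        return torch.sigmoid(self.classifier(x))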

----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 32, 62, 15] 1,600
ReLU-2 [-1, 32, 62, 15] 0
MaxPool2d-3 [-1, 32, 31, 8] 0
Conv2d-4 [-1, 128, 31, 8] 102,528
ReLU-5 [-1, 128, 31, 8] 0
MaxPool2d-6 [-1, 128, 16, 4] 0
Conv2d-7 [-1, 256, 9, 3] 295,168
ReLU-8 [-1, 256, 9, 3] 0
MaxPool2d-9 [-1, 256, 5, 2] 0
Conv2d-10 [-1, 128, 4, 2] 295,040
ReLU-11 [-1, 128, 4, 2] 0
MaxPool2d-12 [-1, 128, 2, 1] 0
Conv2d-13 [-1, 128, 2, 2] 147,584
ReLU-14 [-1, 128, 2, 2] 0
MaxPool2d-15 [-1, 128, 1, 1] 0
AdaptiveAvgPool2d-16 [-1, 128, 6, 6] 0
Dropout-17 [-1, 4608] 0
Linear-18 [-1, 4096] 18,878,464
ReLU-19 [-1, 4096] 0
Dropout-20 [-1, 4096] 0
Linear-21 [-1, 4096] 16,781,312
ReLU-22 [-1, 4096] 0
Linear-23 [-1, 1] 4,097
================================================================
Total params: 36,505,793
Trainable params: 36,505,793
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 1.44
Params size (MB): 139.26
Estimated Total Size (MB): 140.71
----------------------------------------------------------------

While the val score was lower, the final test AUC increased to ~ 0.769 — SUCCESS.

AlexNet

Seeing the improvement, we decided to test the data on the regular AlexNet:

import torch
import torch.nn as nn
from torchvision.models import alexnet

class alex_mdf_model(nn.Module):
    def __init__(self):
        super(alex_mdf_model, self).__init__()
        self.arch = alexnet(pretrained=False)
        # Replace the 1000-class head with a single logit.
        self.arch.classifier[-1] = nn.Linear(4096, 1)
        # Pad the max-pool layers so our small spectrograms don't shrink away.
        for i, layer in enumerate(self.arch.features.children()):
            if "MaxPool" in str(layer):
                self.arch.features[i] = nn.MaxPool2d(kernel_size=3, stride=2,
                                                     padding=1, dilation=1,
                                                     ceil_mode=False)

    def forward(self, x):
        # Repeat the single channel three times to match AlexNet's RGB input.
        x = x.permute(3, 1, 2, 0)
        x = x.repeat(3, 1, 1, 1)
        x = x.permute(3, 0, 1, 2)
        x = self.arch(x)
        return torch.sigmoid(x)
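As a quick sanity check, a spectrogram-shaped dummy batch can be pushed through the wrapper (the batch size here is arbitrary):

model = alex_mdf_model()
x = torch.randn(8, 126, 32, 1)  # (batch, doppler bins, time bins, channel)
print(model(x).shape)           # torch.Size([8, 1])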
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 30, 7] 23,296
ReLU-2 [-1, 64, 30, 7] 0
MaxPool2d-3 [-1, 64, 15, 4] 0
Conv2d-4 [-1, 192, 15, 4] 307,392
ReLU-5 [-1, 192, 15, 4] 0
MaxPool2d-6 [-1, 192, 8, 2] 0
Conv2d-7 [-1, 384, 8, 2] 663,936
ReLU-8 [-1, 384, 8, 2] 0
Conv2d-9 [-1, 256, 8, 2] 884,992
ReLU-10 [-1, 256, 8, 2] 0
Conv2d-11 [-1, 256, 8, 2] 590,080
ReLU-12 [-1, 256, 8, 2] 0
MaxPool2d-13 [-1, 256, 4, 1] 0
AdaptiveAvgPool2d-14 [-1, 256, 6, 6] 0
Dropout-15 [-1, 9216] 0
Linear-16 [-1, 4096] 37,752,832
ReLU-17 [-1, 4096] 0
Dropout-18 [-1, 4096] 0
Linear-19 [-1, 4096] 16,781,312
ReLU-20 [-1, 4096] 0
Linear-21 [-1, 1] 4,097
AlexNet-22 [-1, 1] 0
================================================================
Total params: 57,007,937
Trainable params: 57,007,937
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 0.96
Params size (MB): 217.47
Estimated Total Size (MB): 218.44
----------------------------------------------------------------

Running it gave us 0.799 AUC.

While we were making progress with the CNNs, they had an inherent issue that we hoped a TCN model would address.

Temporal Convolutional Networks

Before we get into what Temporal Convolutional Networks are, it is important to stress why there was a need to evolve beyond the classic CNN architecture.

Very simply put, while treating our spectrogram as an image was a good approximation for extracting crucial information, our spectrogram is also a reflection of time-series data, a structure that plain CNNs are not able to capture.

Classically, for time-series data we would use an RNN. But since this is both a form of image problem (the information is spatially temporal) and a time series, we would need to chain two models: a CNN feeding its output into an RNN.

We wanted to avoid heading there (at least initially) because RNNs alone are much harder to train than CNNs and require a lot more data, and in our case we would also need to build that combined CNN-to-RNN model.

So instead, we wanted to mimic the kind of results we would get from the CNN + RNN combination with a TCN.

TCN Architecture

Lea et al. 2016

From the highest-level view, a TCN is a CNN model with a causal convolution layer (a 1-D conv layer) appended to the end, intended to mimic an RNN by computing the value of each neuron based only on the previous neurons' values.

In order to better capture the time-sensitive information, the TCN incorporates two techniques:

  1. The convolutions are causal, meaning they are only able to look back in time (no "leakage"). As can be seen in the image above, each convolution can only take in information from "older" previous states.
  2. They use dilated convolutional layers. Dilation is a heuristic for capturing the necessary information with fewer parameters. In the TCN, we create a series of blocks with an increasing dilation factor. Just as stacking convolutions increases the receptive field with fewer parameters (two stacked 3x3 filters use 18 parameters yet cover the same 5x5 field as one 5x5 filter's 25), increasing the dilation lets us cover a much wider temporal field; see the toy sketch after the figure credit below.
Multi-Scale Context Aggregation by Dilated Convolutions 2016
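To make the causality concrete, here is a toy causal, dilated 1-D convolution in PyTorch: we pad by (k − 1) · d and then trim that amount off the right, so output position t only ever sees inputs at or before t. This is exactly the trick our Chomp2d layer implements further down; the sizes here are arbitrary:

import torch
import torch.nn as nn

k, d = 3, 2
pad = (k - 1) * d
conv = nn.Conv1d(1, 1, kernel_size=k, padding=pad, dilation=d)

x = torch.randn(1, 1, 32)   # (batch, channels, time)
y = conv(x)[:, :, :-pad]    # "chomp" the look-ahead, back to length 32
print(y.shape)              # torch.Size([1, 1, 32])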

However, we could not just take the out-of-the-box TCN model for a few main reasons:

  1. The original model was created for video segmentation, which means it output the same number of neurons as it was given as input.

To resolve this, we applied a final 1-D conv layer on the final output to give us a single output neuron.

  2. While dilations are a good heuristic in general, in our specific case we wanted to increase dilation only along the time axis. This would allow us to explore larger time-scales using fewer parameters while ensuring that at each time-step the model had access to all of the frequency data, so it would pick up any potentially important information.

Our code was rewritten based on the code from the locuslab GitHub repo (found here):

import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# Because we want the model to reflect time-series data, we mask
# any information past the current time-stamp.
class Chomp2d(nn.Module):
    def __init__(self, chomp_size):
        super(Chomp2d, self).__init__()
        self.chomp_size = chomp_size

    def forward(self, x):
        return x[:, :, :, :-self.chomp_size].contiguous()

# The TCN is made of several temporal blocks. Each block implements two
# convolutional layers with dilation and incorporates residual learning.
class TemporalBlock(nn.Module):
    def __init__(self, n_inputs, n_outputs, kernel_size, stride, dilation, padding=-1, dropout=0.2):
        super(TemporalBlock, self).__init__()
        padding = (kernel_size - 1) * dilation if padding == -1 else padding
        n_outputs = n_outputs[0] if type(n_outputs) == list else n_outputs
        # Dilation is applied only along the time axis: dilation=(1, dilation).
        self.conv1 = weight_norm(nn.Conv2d(n_inputs, n_outputs, kernel_size,
                                           stride=stride, padding=(kernel_size // 2, padding),
                                           dilation=(1, dilation)))
        self.chomp1 = Chomp2d(padding)
        self.relu1 = nn.ReLU()
        self.dropout1 = nn.Dropout(dropout)
        self.conv2 = weight_norm(nn.Conv2d(n_outputs, n_outputs, kernel_size,
                                           stride=stride, padding=(kernel_size // 2, padding),
                                           dilation=(1, dilation)))
        self.chomp2 = Chomp2d(padding)
        self.relu2 = nn.ReLU()
        self.dropout2 = nn.Dropout(dropout)
        self.net = nn.Sequential(self.conv1, self.chomp1, self.relu1, self.dropout1,
                                 self.conv2, self.chomp2, self.relu2, self.dropout2)
        self.downsample = nn.Conv2d(n_inputs, n_outputs, 1) if n_inputs != n_outputs else None
        self.relu = nn.ReLU()
        self.init_weights()

    def init_weights(self):
        self.conv1.weight.data.normal_(0, 0.01)
        self.conv2.weight.data.normal_(0, 0.01)
        if self.downsample is not None:
            self.downsample.weight.data.normal_(0, 0.01)

    def forward(self, x):
        out = self.net(x)
        res = x if self.downsample is None else self.downsample(x)
        return self.relu(out + res)

# Combines several temporal blocks, doubling the dilation each time (2^i).
# The output is then put through a final convolutional layer and a
# fully-connected layer, giving us one logit.
class TemporalConvNet(nn.Module):
    def __init__(self, num_inputs, num_channels, kernel_size=2, dropout=0.2):
        super(TemporalConvNet, self).__init__()
        layers = []
        num_levels = len(num_channels)
        for i in range(num_levels):
            dilation_size = 2 ** i
            in_channels = num_inputs if i == 0 else num_channels[i - 1]
            out_channels = num_channels[i]
            layers += [TemporalBlock(in_channels, out_channels, kernel_size, stride=1,
                                     dilation=dilation_size,
                                     padding=(kernel_size - 1) * dilation_size,
                                     dropout=dropout)]
        self.network = nn.Sequential(*layers)
        self.singlechannel = nn.Conv2d(num_channels[-1], 1, 1)
        self.dropout = nn.Dropout(dropout)
        self.decoder = nn.Linear(126 * 32, 1)

    def forward(self, x):
        x = x.permute(0, 3, 1, 2)
        x = self.network(x)
        x = self.dropout(self.singlechannel(x)).flatten(start_dim=1)
        return torch.sigmoid(self.decoder(x))
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 16, 126, 36] 416
Conv2d-2 [-1, 16, 126, 36] 416
Chomp2d-3 [-1, 16, 126, 32] 0
Chomp2d-4 [-1, 16, 126, 32] 0
ReLU-5 [-1, 16, 126, 32] 0
ReLU-6 [-1, 16, 126, 32] 0
Dropout-7 [-1, 16, 126, 32] 0
Dropout-8 [-1, 16, 126, 32] 0
Conv2d-9 [-1, 16, 126, 36] 6,416
Conv2d-10 [-1, 16, 126, 36] 6,416
Chomp2d-11 [-1, 16, 126, 32] 0
Chomp2d-12 [-1, 16, 126, 32] 0
ReLU-13 [-1, 16, 126, 32] 0
ReLU-14 [-1, 16, 126, 32] 0
Dropout-15 [-1, 16, 126, 32] 0
Dropout-16 [-1, 16, 126, 32] 0
Conv2d-17 [-1, 16, 126, 32] 32
ReLU-18 [-1, 16, 126, 32] 0
TemporalBlock-19 [-1, 16, 126, 32] 0
Conv2d-20 [-1, 32, 126, 40] 12,832
Conv2d-21 [-1, 32, 126, 40] 12,832
Chomp2d-22 [-1, 32, 126, 32] 0
Chomp2d-23 [-1, 32, 126, 32] 0
ReLU-24 [-1, 32, 126, 32] 0
ReLU-25 [-1, 32, 126, 32] 0
Dropout-26 [-1, 32, 126, 32] 0
Dropout-27 [-1, 32, 126, 32] 0
Conv2d-28 [-1, 32, 126, 40] 25,632
Conv2d-29 [-1, 32, 126, 40] 25,632
Chomp2d-30 [-1, 32, 126, 32] 0
Chomp2d-31 [-1, 32, 126, 32] 0
ReLU-32 [-1, 32, 126, 32] 0
ReLU-33 [-1, 32, 126, 32] 0
Dropout-34 [-1, 32, 126, 32] 0
Dropout-35 [-1, 32, 126, 32] 0
Conv2d-36 [-1, 32, 126, 32] 544
ReLU-37 [-1, 32, 126, 32] 0
TemporalBlock-38 [-1, 32, 126, 32] 0
Conv2d-39 [-1, 32, 126, 48] 25,632
Conv2d-40 [-1, 32, 126, 48] 25,632
Chomp2d-41 [-1, 32, 126, 32] 0
Chomp2d-42 [-1, 32, 126, 32] 0
ReLU-43 [-1, 32, 126, 32] 0
ReLU-44 [-1, 32, 126, 32] 0
Dropout-45 [-1, 32, 126, 32] 0
Dropout-46 [-1, 32, 126, 32] 0
Conv2d-47 [-1, 32, 126, 48] 25,632
Conv2d-48 [-1, 32, 126, 48] 25,632
Chomp2d-49 [-1, 32, 126, 32] 0
Chomp2d-50 [-1, 32, 126, 32] 0
ReLU-51 [-1, 32, 126, 32] 0
ReLU-52 [-1, 32, 126, 32] 0
Dropout-53 [-1, 32, 126, 32] 0
Dropout-54 [-1, 32, 126, 32] 0
ReLU-55 [-1, 32, 126, 32] 0
TemporalBlock-56 [-1, 32, 126, 32] 0
Conv2d-57 [-1, 64, 126, 64] 51,264
Conv2d-58 [-1, 64, 126, 64] 51,264
Chomp2d-59 [-1, 64, 126, 32] 0
Chomp2d-60 [-1, 64, 126, 32] 0
ReLU-61 [-1, 64, 126, 32] 0
ReLU-62 [-1, 64, 126, 32] 0
Dropout-63 [-1, 64, 126, 32] 0
Dropout-64 [-1, 64, 126, 32] 0
Conv2d-65 [-1, 64, 126, 64] 102,464
Conv2d-66 [-1, 64, 126, 64] 102,464
Chomp2d-67 [-1, 64, 126, 32] 0
Chomp2d-68 [-1, 64, 126, 32] 0
ReLU-69 [-1, 64, 126, 32] 0
ReLU-70 [-1, 64, 126, 32] 0
Dropout-71 [-1, 64, 126, 32] 0
Dropout-72 [-1, 64, 126, 32] 0
Conv2d-73 [-1, 64, 126, 32] 2,112
ReLU-74 [-1, 64, 126, 32] 0
TemporalBlock-75 [-1, 64, 126, 32] 0
Conv2d-76 [-1, 1, 126, 32] 65
Dropout-77 [-1, 1, 126, 32] 0
Linear-78 [-1, 1] 4,033
================================================================
Total params: 507,362
Trainable params: 507,362
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 94.32
Params size (MB): 1.94
Estimated Total Size (MB): 96.27
----------------------------------------------------------------

While all neural network models have hyper-parameters that need to be tuned, two are crucial for the TCN: the kernel size and the number of layers.

For a TCN with n residual blocks of kernel size k and dilations 1, 2, …, 2^(n−1), the receptive field along the time axis is:

RF = 1 + (k − 1) · (2^n − 1)

So to cover the 32 time bins of our spectrograms, it would take five layers with kernel_size 3 (RF = 63), or four layers at kernel_size 5 (RF = 61).

We chose four layers [16,32,32,64] with kernel_size 5.
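A quick sanity check of that formula (a throwaway helper, not part of our training code):

def receptive_field(kernel_size: int, n_blocks: int) -> int:
    """Receptive field along the time axis for dilations 1, 2, ..., 2**(n_blocks - 1)."""
    return 1 + (kernel_size - 1) * (2 ** n_blocks - 1)

print(receptive_field(3, 5))  # 63 -- covers all 32 time bins
print(receptive_field(5, 4))  # 61 -- covers all 32 time bins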

While it took much longer to train than the CNN models (it is much deeper), it did exceedingly well on the Train/Val data, with .9912 and .9566 AUC scores respectively.

However, on the Test data it got a score of .7874 — similar to the CNN model.

It seemed that for the time being, the model was complex enough and that further improvements to the score would have to come from other directions.

However, as discussed in the Ensembling section below, we ultimately used both the CNN and TCN together to improve our final score.

To read more about Temporal Convolutional Networks: https://medium.com/@raushan2807/temporal-convolutional-networks-bfea16e6d7d2 https://medium.com/the-artificial-impostor/notes-understanding-tensorflow-part-3-7f6633fcc7c7

Ensembling methods

Close your eyes for a moment and imagine the scenario:
The competition deadline is only one day away, and we have four somewhat equally successful models to choose from. How do we pick which one to use?

Well we don’t — we want to take all of them.

One of the more classic techniques that winners of Kaggle / data science competitions routinely use is an ensembling method to combine the predictive powers of the various models.

There are actually many different ways to combine models, divided into two main categories:

Taken from https://howtolearnmachinelearning.com/articles/boosting-in-machine-learning/

Bagging

The basic intuition is that different models will be accurate in different areas of the data, so getting a collective of opinions and then weighing them will help you get the best of all the options.

However, unlike a democracy, we don't want all predictions to have the same significance (weight); we would like to give more weight to the more accurate models. While the methodology for choosing how much weight to assign can vary, all the variants perform the same basic task: combining the outputs of several models together.

Boosting

Boosting takes the basic intuition and raises it one level. Instead of training each model in parallel, what if we train Model B to focus on correctly labeling the examples where Model A was unsuccessful? As such, each new model is sequentially trained not just on the data but on the needs of the previous models.
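For intuition, this is what classical boosting looks like in scikit-learn, with shallow trees re-weighted sequentially. This is illustrative only, not something we used in the competition:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Each stump is trained with higher weight on the samples the previous
# stumps got wrong.
X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)
clf.fit(X, y)
print(clf.score(X, y))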

For a more lengthy explanation on the various ensemble methods, check out this great article: https://towardsdatascience.com/ensemble-methods-bagging-boosting-and-stacking-c9214a10a205

The Final Stretch

While boosting had been used in combination with Neural Networks (Schwenk, Bengio), we already had the trained models and the clock was ticking.

We decided to try three types of bagging methods.

We first took the public test data as our validation, and wanted to see how the various methods would perform.


1. Individual Metrics

We wanted to run each of the models independently to see what score they would get on the validation dataset as a baseline.
The models all hovered around .77 AUC Score on the public test set.

2. Arithmetic Mean

We took the arithmetic mean of each of the y_predictions. As each of the models predicted between [0,1], our arithmetic mean was also between [0,1].

The Arithmetic Mean brought our val score to .80 AUC, a great improvement for so little work.

3. Weighted Mean

Taking the weighted mean based on their individual scores would give more weight to the better-performing models. In our case, each of the models scored similarly, so the weighted-mean accuracy was almost identical to the arithmetic-mean accuracy.
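A minimal sketch of both averaging schemes; the prediction vectors here are random stand-ins for our models' outputs on the validation set:

import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative stand-ins: replace with the real model outputs.
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=200)
preds = np.stack([np.clip(y_val * 0.6 + rng.random(200) * 0.4, 0, 1)
                  for _ in range(3)])      # (n_models, n_samples)

mean_pred = preds.mean(axis=0)             # 2. arithmetic mean

aucs = np.array([roc_auc_score(y_val, p) for p in preds])
weights = aucs / aucs.sum()                # 3. weight each model by its AUC
weighted_pred = weights @ preds

print(roc_auc_score(y_val, mean_pred), roc_auc_score(y_val, weighted_pred))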

4. Logistic Regression

We wanted to see if there was a better linear relationship between the model scores, not just one based on their individual scores, so we trained a Logistic Regression model (we took our val data and split it into two: val_train and val_test). Conveniently, Logistic Regression's output is already bounded between [0,1].

Using Logistic Regression we raised our score to .83 AUC Score on the public test set!

(Usually when trying to compare methods against each other, you need to run them on the exact same dataset, but for the Logistic Regression model we had to split the data into its own train/val, so it was not perfectly consistent with the other models.

However, because we were able to submit two final predictions, we took the output of the Logistic Regression and the weighted mean independently as our two submissions.)
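A sketch of that stacking step, reusing preds and y_val from the previous snippet; the 50/50 split ratio is an assumption:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Each model's prediction becomes a feature; fit the blender on one half
# of the validation data and score it on the other half.
X = preds.T                                # (n_samples, n_models)
X_tr, X_te, y_tr, y_te = train_test_split(X, y_val, test_size=0.5,
                                          random_state=42)
stacker = LogisticRegression().fit(X_tr, y_tr)
blend = stacker.predict_proba(X_te)[:, 1]  # already bounded in [0, 1]
print(roc_auc_score(y_te, blend))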

Summary

  1. Always start with a simple model and slowly increase complexity.
  2. General heuristics are a good place to start (e.g., looking at the spectrogram/scalogram as an image).
  3. Follow the wisdom of the crowds! Even combining simpler models with more complex ones will very likely produce a much better result than any model alone.
  4. Often enough, model complexity isn't everything. Due to the time constraints we weren't able to spend more time dealing with label imbalance or the low-SNR data, but if we had more time, that is where we would invest our effort.

A hearty Mazel Tov for getting through all six articles!
You are ready to get out there and experiment with your own Radar data.

We hope you found these articles helpful and if in your own exploration you found something nifty, please share.
