The way we teach and learn convolution is convoluted :)
Remember how we learnt it? If x[n] is the input to an LTI system with impulse response h[n], the output y[n] is given by the convolution operation:
y[n] = ∑ₖ h[k] x[n-k]
Gee, nice equation, but what does it mean, we asked. And the answer we got was the visual one: flip, slide, and multiply-accumulate!
Say I give you two sequences:
x1 = [1, 2, 1, 0, 0, 2, 1, 2]
x2 = [3, 0, 0, 3, 0, 0, 1]
And I ask you for the convolution output. What would you do? Especially if you don’t have Matlab or Octave or Python. Would you evaluate the y[n] = ∑ₖ h[k] x[n-k] equation for the above two sequences? …
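If you did have Python handy, the sum above can be evaluated directly. Here is a minimal sketch in plain Python (no NumPy needed), just looping n over all output samples and k over the taps of the shorter sequence:

```python
def convolve(x, h):
    # Direct evaluation of y[n] = sum_k h[k] * x[n-k].
    # The full output has len(x) + len(h) - 1 samples.
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):   # skip terms where x[n-k] is out of range
                y[n] += h[k] * x[n - k]
    return y

x1 = [1, 2, 1, 0, 0, 2, 1, 2]
x2 = [3, 0, 0, 3, 0, 0, 1]
print(convolve(x1, x2))
# -> [3, 6, 3, 3, 6, 9, 4, 8, 7, 3, 6, 2, 1, 2]
```

Note that because x2 has only three nonzero taps (at k = 0, 3, 6), the output is just three scaled, shifted copies of x1 added together, which you can verify by hand.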
Experimental machine learning is turning out to be so much fun! After my investigations into replacing some signal processing algorithms with deep neural networks, documented for the interested reader in the article “Machine Learning and Signal Processing”, I got around to trying the other two famous neural network architectures: the LSTM and the CNN.
Before we get into the details of my comparison, here is an introduction to, or rather, my understanding of, the other neural network architectures. We all understand deep neural networks, which are simply a set of neurons per layer, interconnected sequentially with the set of neurons in the next layer, and so on. …
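That layer-to-layer wiring can be sketched in a few lines of plain Python. This is a hypothetical toy, not the article's actual model: each output neuron takes a weighted sum of every input neuron plus a bias, optionally followed by a ReLU, and layers are simply chained:

```python
def dense(x, W, b, relu=True):
    # One fully connected layer: every neuron in this layer takes a
    # weighted sum over ALL neurons of the previous layer, plus a bias.
    out = [sum(w * xj for w, xj in zip(row, x)) + bi
           for row, bi in zip(W, b)]
    # ReLU activation: clamp negative sums to zero.
    return [max(0.0, v) for v in out] if relu else out

# Toy 2 -> 3 -> 1 network with hand-picked weights (illustrative only).
x  = [1.0, -2.0]
W1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]; b1 = [0.0, 0.0, 0.0]
W2 = [[1.0, 1.0, 1.0]];                    b2 = [0.5]

h = dense(x, W1, b1)              # hidden layer: [1.0, 0.0, 0.0]
y = dense(h, W2, b2, relu=False)  # output layer: [1.5]
print(h, y)
```

Real frameworks do the same thing with matrix multiplies and learned weights; the point here is only the "every neuron connects to every neuron in the next layer" structure.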
The cellular story started in the early 90s when the Global System for Mobile Communications (GSM) became the standard for wireless communication. …