Congratulations…ML in a nutshell!
— Xavier Leroy

The weights and biases are not the feature maps. Rather, they sit between layers (columns AF-AH in the spreadsheet): in this example, the inputs are multiplied by the weights, and a bias is added, to produce the feature maps. Think of it this way: the weights and biases are like Sherlock's magnifying glass, and the feature maps are the clues it reveals. At the start of training, the weights and biases are set to random values (weight initialization); through training with gradient descent, they are gradually updated to improve image-classification accuracy.
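The inputs-times-weights-plus-bias step can be sketched in code. This is a minimal illustration, not the spreadsheet's actual formulas: the image values, kernel size, and function name are all hypothetical, and the kernel and bias start as random values exactly as the weight-initialization step above describes.

```python
import numpy as np

# Hypothetical 5x5 grayscale input patch (values in [0, 1]).
rng = np.random.default_rng(0)
image = rng.random((5, 5))

# Weight initialization: the 3x3 kernel (weights) and the bias
# start as random values; gradient descent would update them.
kernel = rng.standard_normal((3, 3))
bias = rng.standard_normal()

def feature_map(img, k, b):
    """Slide the kernel over the image (valid convolution):
    each output cell is sum(inputs * weights) + bias."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k) + b
    return out

fmap = feature_map(image, kernel, bias)
print(fmap.shape)  # (3, 3)
```

Note that the kernel (the "magnifying glass") is reused at every position, while the feature map (the "clues") is a fresh output computed from each input.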