Step by Step Train Model using Tensorflow (CNN)

ade sueb · Published in Analytics Vidhya · Sep 9, 2020 · 4 min read

Step by Step to Train your own Image Dataset for Deep Learning Using Tensorflow

Actually, there is an easier way to train on your own images: you can use Firebase Machine Learning. You only have to upload your images and define the labels. But if you still want to train a model by hand, keep reading this blog.

Anyway, you can find the full source code and the datasets here.

Prepare the Data Set

Prepare as many sample images as possible. Put them into folders, one folder per classification/label.

For this I will use my own dataset from this story. You can also use a cats-and-dogs or MNIST dataset.

Load the Data Set

Create feature (X) and label (Y) variables

Create the variables X_TRAIN and Y_TRAIN, both of them arrays.

Create an array variable called LABELS that contains the names of the labels or classifications of your model.

Save the index of each label into Y_TRAIN. This implicitly encodes the labels as numbers, so that we can pass them to the model.

Load the image folders. Iterate over the files one by one, appending each image to X_TRAIN and the index of its label name to Y_TRAIN.

DATADIR = "dataset"
TESTDIR = "test"
LABELS = ["indosiar", "indosiar_iklan", "sctv", "sctv_iklan"]
X_TRAIN = []
Y_TRAIN = []

Augment the data (optional)

This is needed when you don't have enough data. You can flip or resize the images so that you add multiple samples per original image, as in the sketch below.
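A minimal augmentation sketch (my own addition, assuming the loading loop below has already filled X_TRAIN and Y_TRAIN): horizontally flipping every image doubles the dataset.

X_AUG, Y_AUG = [], []
for image, label in zip(X_TRAIN, Y_TRAIN):
    X_AUG.append(cv.flip(image, 1))  # flipCode=1 mirrors the image horizontally
    Y_AUG.append(label)              # the flipped copy keeps the same label
X_TRAIN.extend(X_AUG)
Y_TRAIN.extend(Y_AUG)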

Make sure the input shape is correct

You have to know the shape of your X_TRAIN data, because that shape must match the input layer.

for label in LABELS:
    path = os.path.join(DATADIR, label)
    class_num = LABELS.index(label)  # numeric encoding of the label
    for img in os.listdir(path):
        try:
            img_array = cv.imread(os.path.join(path, img))
            new_array = cv.resize(img_array, (IMG_SIZE, IMG_SIZE))
            X_TRAIN.append(new_array)
            Y_TRAIN.append(class_num)
        except Exception as e:
            pass  # skip files that cannot be read as images

We also need to reshape X_TRAIN.

X_TRAIN = np.array(X_TRAIN).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
Y_TRAIN = np.array(Y_TRAIN)  # fit() expects arrays, not plain Python lists

The first parameter determines how many samples there are; we put -1 so that NumPy infers the number from the data. The last parameter is for whether the dataset is grayscale or RGB: 1 for grayscale and 3 for RGB.
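A quick sanity check before building the model (my own addition):

print(X_TRAIN.shape)  # e.g. (N, IMG_SIZE, IMG_SIZE, 3), where N is the number of images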

Build the Model (Sequential)

Input Layer

We can use whichever layer we want first, as long as we specify the input shape in its parameters. We pass the shape of a single X_TRAIN sample (X_TRAIN.shape[1:]) as the input_shape parameter.
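The snippets in this section assume the Keras classes have been imported and a Sequential model created first; a minimal setup sketch, since the original post does not show it:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

model = Sequential()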

model.add(Conv2D(32, (5,5), input_shape = X_TRAIN.shape[1:]))

Hidden Layer

Because we want to train a CNN, we use Conv2D layers here.

How many hidden layers do we need? It depends on the training results. We don't train just once; we can train many times until we find the right composition (number of hidden layers and the parameters we pass to each layer) for the best result.

model.add(Conv2D(32, (5,5)))              # 32 filters with a 5x5 kernel
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))  # halve the spatial dimensions
model.add(Conv2D(32, (5,5)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

Output or final Layer

For the output we use a Dense layer. Because we need to convert the multi-dimensional feature maps into a 1D array, we put a Flatten layer before the Dense layer.

model.add(Flatten())  # collapse the feature maps into a 1D vector
model.add(Dense(4))   # one output unit per label

Also choose the right activation

Since we have 4 labels, it's best to use Softmax as the activation function.

model.add(Activation("softmax"))

Compile Model

Loss Function

Because our labels are categorical (more than 2 classes) and our final activation function is Softmax, the right loss function is categorical cross-entropy. Since Y_TRAIN holds integer indices rather than one-hot vectors, the compile call below uses the sparse variant, sparse_categorical_crossentropy.
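If you prefer plain categorical_crossentropy instead, you would first one-hot encode the labels; a sketch using Keras's to_categorical (not part of the original code):

from tensorflow.keras.utils import to_categorical
Y_ONE_HOT = to_categorical(Y_TRAIN, num_classes=len(LABELS))  # shape (N, 4)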

Optimizer

For the optimizer we use adam instead of SGD. adam converges faster than SGD, even though a well-tuned SGD can end up more accurate than adam.
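If you want to try SGD, swapping the optimizer is a one-line change (a sketch; the learning rate is an assumed example value):

from tensorflow.keras.optimizers import SGD
model.compile(optimizer=SGD(learning_rate=0.01), loss='sparse_categorical_crossentropy', metrics=['accuracy'])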

Metrics

This is just for logging; we choose accuracy.

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

Train Model

Passing X and Y train

Just pass X_TRAIN and Y_TRAIN to model.fit as the first and second arguments.

Batches

For the batch size we can use 32, 10, or whatever you want. Batching makes the training process faster, especially if you use a GPU. The batch size is the number of samples processed together before the model updates its weights, and it affects how the accuracy and loss are computed; see the example below.
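If you don't pass it, Keras defaults to a batch size of 32; you can set it explicitly with the batch_size argument (my own addition, otherwise the same call as shown further down):

model.fit(X_TRAIN, Y_TRAIN, batch_size=32, epochs=10, validation_split=0.1)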

Epochs

Each epoch is one full training pass over the data. After each pass, the loss is computed and the optimizer updates the weights, so the model normally gets better with each epoch. But too many epochs can cause overfitting; one common guard against this is shown below.
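One way to guard against overfitting (my own addition, not in the original post) is Keras's EarlyStopping callback, which stops training once the validation loss stops improving; the epoch count and patience here are assumed example values:

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model.fit(X_TRAIN, Y_TRAIN, epochs=50, validation_split=0.1, callbacks=[early_stop])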

Validation Data

You don't have to have a separate validation dataset. You can just split it off from the training dataset by specifying the validation_split parameter.

model.fit(X_TRAIN,Y_TRAIN, epochs=10, validation_split=0.1)

Test Model

Use images that are not contained in the training dataset, and use the predict function for this.

import matplotlib.pyplot as plt

for img in os.listdir(TESTDIR):
    try:
        img_array = cv.imread(os.path.join(TESTDIR, img))  # read from TESTDIR, not the training path
        new_img = cv.resize(img_array, (IMG_SIZE, IMG_SIZE))
        new_shape = new_img.reshape(-1, IMG_SIZE, IMG_SIZE, 3)
        predictions = model.predict(new_shape)
        plt.imshow(new_img)
        print(predictions)
        print(LABELS[np.argmax(predictions)])  # label with the highest probability
    except Exception as e:
        pass  # skip files that cannot be read as images

Full Source Code
