How to deploy your Neural Network Model using Ktrain

Anurag Bhatt · Published in Analytics Vidhya · Jan 6, 2020

Photo by Mauro Sbicego on Unsplash

Note: I assume you already know how to train a model in ktrain, but there is a basic notebook that also shows how to use ktrain; I ran that notebook with the library versions listed below. You will find the article's source code and references at the end of the article.

ktrain==0.26.4
tensorflow==2.5.0

This article was updated on 9 July 2021.

While training deep learning models for sentiment analysis in Google Colab, I came across an interesting article, Explainable AI in Practice, and was struck by how convenient the approach was: you can train a deep learning model in four lines of code, and ktrain bundles various text-classification approaches such as BERT (a state-of-the-art model), fastText, and several other techniques. ktrain is a lightweight wrapper around Keras for training neural networks; it gave me surprisingly good results and can also be customized as needed. So, let's do the deployment.
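For context, the training side that produces the learner and preproc used below typically looks something like the following sketch, based on the ktrain tutorials; the DataFrame df, the column names, and the hyperparameters here are placeholder assumptions, not code from this article.

import ktrain
from ktrain import text

# load and preprocess the text data (df and the column names are placeholders)
(x_train, y_train), (x_test, y_test), preproc = text.texts_from_df(
    df, 'Message', label_columns=['Category'],
    maxlen=100, preprocess_mode='standard')

# build a classifier, wrap it in a learner, and train
model = text.text_classifier('fasttext', train_data=(x_train, y_train), preproc=preproc)
learner = ktrain.get_learner(model, train_data=(x_train, y_train),
                             val_data=(x_test, y_test), batch_size=32)
learner.autofit(1e-2)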

Step 1: Train your model with a ktrain learner (for example, learner.autofit()) together with the preproc object, which handles text preprocessing. Then call ktrain's get_predictor function and save the resulting predictor with its save method.

# bundle the trained model with its preprocessor and save both to disk
predictor = ktrain.get_predictor(learner.model, preproc)
predictor.save('spam_text_message')
print('MODEL SAVED')
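If you want a quick sanity check at this point, the saved predictor can be reloaded with ktrain itself; ktrain.load_predictor and the predictor's predict method are part of ktrain's API, although the steps below load the saved files directly with pickle and Keras instead.

import ktrain

# reload the saved predictor and classify a sample message
reloaded_predictor = ktrain.load_predictor('spam_text_message')
print(reloaded_predictor.predict('hey i am spam'))  # e.g. 'ham' or 'spam'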

Step 2: Saving creates a folder named spam_text_message containing two files, tf_model.h5 and tf_model.preproc. We can now load both files to make predictions.

import pickle
import numpy as np
from tensorflow.keras.models import load_model

# load the preprocessor and the Keras model saved by the predictor
features = pickle.load(open('spam_text_message/tf_model.preproc', 'rb'))
new_model = load_model('spam_text_message/tf_model.h5')
labels = ['ham', 'spam']

The code above loads the tf_model.preproc file into the features variable and tf_model.h5 into the new_model variable; labels holds the class names in the same order as the model's one-hot-encoded output.

Step 3: Now let's make the prediction, but before that we have to preprocess the text, so we will use the features variable.

text = 'hey i am spam'
preproc_text = features.preprocess([text])

Here I call the preprocess function on the features object and store the preprocessed text in the preproc_text variable.

Step 4: Now, we can do the prediction.

result = new_model.predict(preproc_text)
# OUTPUT => array([[9.9999797e-01, 2.0015173e-06]], dtype=float32)

The prediction returns an ndarray of class probabilities. We will convert the array to one of our labels, and you can also use the array values as confidence scores if you want.

Step 5: Map the prediction back to our list of labels (the one-hot-encoded class names).

# pick the label with the highest probability and report its score as a percentage
label = labels[result[0].argmax(axis=0)]
score = '{:.2f}'.format(round(np.max(result[0]), 2) * 100)
print('LABEL :', label, 'SCORE :', score)
# OUTPUT => LABEL : ham SCORE : 100.00
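Putting steps 3-5 together, the whole inference path fits in a small helper function; this is a sketch that reuses the features, new_model, and labels variables loaded above, and the function name is my own.

def classify(text):
    # preprocess the raw string the same way as during training
    x = features.preprocess([text])
    # run the Keras model and return the most probable label with its score
    probs = new_model.predict(x)[0]
    return labels[probs.argmax()], float(probs.max())

print(classify('hey i am spam'))  # e.g. ('ham', 0.99999797)

This is the function you would call from whatever serving layer you deploy behind, whether a script, a notebook, or a web endpoint.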

If you have any doubts or suggestions, feel free to comment; I will try to address them as soon as possible.

Thanks and have a good day.

References: For more information on ktrain, see the ktrain tutorial notebooks.

Source code : ktrain-deployment-text-classification.ipynb
