Logistic Regression with Keras

Wei Lu
2 min read · Jun 1, 2017


Logistic Regression (LR) is a simple yet quite effective method for carrying out binary classification tasks. There are many open-source machine learning libraries you can use to build LR models.

Keras (with TensorFlow as the back-end) is a powerful tool for quickly coding up your machine learning modeling efforts. Its main use case is building and deploying deep neural networks. An LR model can be viewed as a special case of a neural network (i.e., a single-layer model with no hidden layers), so Keras is naturally well-suited to the task.

Now let us see it in action. As with any machine learning project, remember to clean and pre-process your data first, and then compute intuitive features for the model to learn from.

Sample Python code for doing logistic regression with Keras (version 2.0+). MIT license applies.
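The author's original embedded snippet is not reproduced here; the following is a minimal sketch of what such Keras 2.0+ code might look like, matching the description in the text (a single densely connected sigmoid unit with L2 regularization). The toy data, optimizer, and epoch/batch settings are illustrative assumptions, not the article's values.

```python
# Sketch: logistic regression as a one-layer Keras model (assumed code,
# not the author's original snippet). Data here is synthetic.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

# Toy binary-classification data; note that Keras expects numpy arrays.
rng = np.random.RandomState(0)
X = rng.randn(200, 4).astype("float32")
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype("float32")

# Logistic regression = a single densely connected output unit with a
# sigmoid activation and no hidden layers.
model = Sequential()
model.add(Dense(1, activation="sigmoid", input_dim=4,
                kernel_regularizer=l2(0.1)))

# Binary cross-entropy is exactly the logistic-regression loss.
model.compile(optimizer="sgd", loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=50, batch_size=16, verbose=0)

probs = model.predict(X)  # predicted probabilities, each in [0, 1]
```

Calling `model.predict` then yields a column of class-1 probabilities, which you can threshold at 0.5 (or another cutoff) to obtain hard labels.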

The above code builds a single-layer densely connected network. It uses L2 regularization with a coefficient of 0.1 (but you should determine the best value via cross-validation on your own data). Similarly, you may want to train for more or fewer epochs depending on your situation. Such parameter tuning should be done on a case-by-case basis.
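Picking the regularization coefficient by cross-validation, as suggested above, could be sketched as follows. The candidate grid, fold count, and synthetic data are illustrative assumptions; scikit-learn's `KFold` is used here for the splits, though any splitting scheme works.

```python
# Sketch: choosing the L2 coefficient by k-fold cross-validation.
# Grid values, folds, and data are illustrative, not prescriptions.
import numpy as np
from sklearn.model_selection import KFold
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

def build_model(l2_coef, n_features):
    # One sigmoid unit = logistic regression, with tunable L2 penalty.
    model = Sequential()
    model.add(Dense(1, activation="sigmoid", input_dim=n_features,
                    kernel_regularizer=l2(l2_coef)))
    model.compile(optimizer="sgd", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

rng = np.random.RandomState(0)
X = rng.randn(200, 4).astype("float32")
y = (X[:, 0] > 0).astype("float32")

scores = {}
for coef in [0.01, 0.1, 1.0]:
    fold_accs = []
    kf = KFold(n_splits=3, shuffle=True, random_state=0)
    for train_idx, val_idx in kf.split(X):
        model = build_model(coef, X.shape[1])
        model.fit(X[train_idx], y[train_idx],
                  epochs=20, batch_size=16, verbose=0)
        _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
        fold_accs.append(acc)
    scores[coef] = float(np.mean(fold_accs))

# Keep the coefficient with the best mean validation accuracy.
best_coef = max(scores, key=scores.get)
```

A fresh model is built inside each fold so that no weights leak between splits; only the mean validation accuracy is compared across coefficients.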

References

(1). The author of Keras has provided several examples. These use the Keras API prior to version 2.0, so you may see deprecation warnings if you have 2.0+ installed.

(2). This Medium blog post showed an example with the MNIST dataset. However, I was initially unable to get that code to work as is. I am not sure why, but in any case a simple fix, putting all input data into numpy arrays, worked (as shown in the code snippet above).



Wei Lu

Interests: Machine Learning, Data Mining, AI, Big Data Systems. Views my own.