How to predict your wife's moods by implementing a single-neuron network with NumPy

It’s so difficult to understand the human brain (especially the female brain), but all human brains are very good at learning from mistakes (except male brains, in some cases). Artificial Neural Networks are amazing at this process and can help men get smarter at predicting women’s moods. Let’s find out whether emulating an Artificial Intelligence system can help men understand women’s moods better.
So let’s say that we’ve taken some actions in the past that have influenced our wife’s mood. Those past mistakes and successes can now be used to predict a happiness rate when other combinations of actions are tried. This information can serve as training data for a simple Artificial Neural Network. AI neurons, like human neurons, learn through iterations of trial and error, and with a few lines of code I will show you how the same process can be emulated in Python.
Our training data will be records of past experiments: the wife’s mood after certain actions were tried out.

The network to implement here will have five inputs (the combination of actions tried in the past), stored as one flag per action: n = NO, y = YES. And one output that tells us whether the wife was happy with the actions or not.
Can Artificial Intelligence predict women’s moods?
Maybe we need a lot more data to answer this question, or maybe not, but the scope here is only to build a single-layer neural network and see if it can learn to predict the happiness of a wife when certain combinations of actions are made. It’s just to see if AI can be smarter at learning from mistakes than husbands are.
So, to make this learning process happen, I will convert YES and NO to binary 1s and 0s and let the neural network learn through iterations from the inputs and the errors made, finally giving us an output that rates how happy the wife would be in the end. All we need for this is Python with the NumPy library, and if you haven’t already, start by installing it: pip install numpy.
Let’s start the code by importing some functions.

The idea is to build a really simple single-neuron network, and to do this all we need is a few NumPy functions:
- exp, to calculate e^z for each value of z, used in the logistic activation function.
- array, to manage arrays.
- random, to generate random weights for a 5 x 1 matrix.
- dot, to easily compute products of arrays.
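Assuming nothing beyond a standard NumPy install, those four names can be pulled in with a single line:

```python
# the only NumPy pieces this single-neuron sketch relies on
from numpy import exp, array, random, dot
```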

At the beginning, the network will predict the wife’s mood incorrectly because it knows nothing about her, and this is OK. That’s why it needs a “learning” phase, where “backpropagation” of mistakes trains it so it can predict the wife’s mood better after every trial.
Emulating a single-neuron network in Python with a learning perspective
A neural network learns by comparing the correct answer (from the training table) with its own output; the error (the difference between what the network said and the correct answer we have stored) is sent backward through the network to adjust each input weight, so that the neuron gets closer to the correct answer at the next trial.

The “learning” iterations are repeated over and over until the neural network output (a value ranging, in this case, from 0 to 1) is really close to the real answer (is the wife happy? Yes or no, 1 or 0).
This network will have five weights and one output, and I’ll start by initializing random weights so that the neuron has some values to start with. Those random weights will then be reworked through the training trials and adjusted using the errors made by the predictions.
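A minimal sketch of that initialization (the fixed seed is my own addition, so every run starts from the same weights):

```python
from numpy import random

random.seed(1)  # fixed seed for reproducible runs
# five inputs feed one output neuron: a 5 x 1 weight matrix,
# initialized with random values in the range [-1, 1)
synaptic_weights = 2 * random.random((5, 1)) - 1
```

random.random returns values in [0, 1); scaling by 2 and shifting by -1 centers the starting weights around zero.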
I will define the activation function that sits between the input feed and the neuron’s output. This function takes the weighted inputs and normalizes the values to [0, 1] (using the Sigmoid function). Then, to evaluate the confidence of the existing trained weights, I will use the derivative of the activation function.
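A sketch of that activation function and its derivative (the function names here are mine):

```python
from numpy import exp

def sigmoid(x):
    # squash the weighted sum into the interval (0, 1)
    return 1 / (1 + exp(-x))

def sigmoid_derivative(x):
    # gradient of the Sigmoid, evaluated on an already-activated value:
    # largest near 0.5 (low confidence), near zero close to 0 or 1
    return x * (1 - x)
```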
Now it’s time to train the model. The training procedure is a loop: at every iteration, the error for the predicted output (the difference between the desired output and the feed-forward output) is calculated and fed back to adjust the weights. Multiplying the error by the inputs and by the gradient of the Sigmoid curve means the “less confident” weights are adjusted more for the next trial, and it assures that inputs equal to zero do not affect the weights at all.
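The original loop isn’t shown here, so treat this as one plausible shape for the procedure just described, written as a standalone function:

```python
from numpy import exp, dot

def train(inputs, outputs, weights, iterations):
    """One possible training loop for a single sigmoid neuron."""
    for _ in range(iterations):
        # feed forward: weighted sum squashed through the Sigmoid
        prediction = 1 / (1 + exp(-dot(inputs, weights)))
        error = outputs - prediction
        # adjust by error * input * Sigmoid gradient: zero inputs leave
        # their weights alone, and confident outputs change little
        weights += dot(inputs.T, error * prediction * (1 - prediction))
    return weights
```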

Starting, Feeding and Testing the Neural Network
Now that the core coding of the neuron is done and it’s programmed to work with certain inputs and outputs, we can start adding training data and initializing the learning iterations.

The training data inputs will be inserted as an array, where I’ve converted y and n to 1 and 0. The output results are converted the same way into another array.

Now let’s train the network for 10,000 iterations, adjusting the weights to reduce the prediction errors by comparing the inputs and the output results.
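Putting the pieces together, a full 10,000-iteration run might look like the following. The training table here is invented for illustration, since the post’s real rows aren’t shown, with a hypothetical action order of [flowers, gift, love message, date, help at home]:

```python
from numpy import exp, array, random, dot

def sigmoid(x):
    return 1 / (1 + exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

# hypothetical training table: each column is one of the five actions
# (1 = action taken, 0 = not taken), placeholder values for this sketch
training_inputs = array([[1, 0, 0, 1, 0],
                         [0, 1, 0, 0, 1],
                         [1, 0, 1, 0, 0],
                         [0, 0, 0, 0, 1],
                         [0, 1, 1, 0, 0],
                         [0, 0, 1, 1, 0]])
# one label per row: was she happy? (y = 1, n = 0), shaped 6 x 1
training_outputs = array([[1, 1, 0, 1, 0, 1]]).T

random.seed(1)
weights = 2 * random.random((5, 1)) - 1

# 10,000 rounds of feed-forward plus error backpropagation
for _ in range(10000):
    prediction = sigmoid(dot(training_inputs, weights))
    error = training_outputs - prediction
    weights += dot(training_inputs.T, error * sigmoid_derivative(prediction))
```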

The neural network is trained, our artificial intelligence has (hopefully) learned how the wife’s moods react, and we can test it using some data that we left out of the training set.
Test data
Let’s use the weights of the trained network to predict inputs that were not used to train the network:
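The trained weight values aren’t shown in the post, so this sketch plugs in placeholder weights just to illustrate the prediction step; the action order is the same hypothetical one as before:

```python
from numpy import exp, array, dot

def sigmoid(x):
    return 1 / (1 + exp(-x))

# placeholder values standing in for the trained 5 x 1 weight matrix
# (the real numbers depend on the training run)
weights = array([[-4.2], [-3.7], [2.1], [8.3], [7.6]])

# hypothetical action order: [flowers, gift, love message, date, help at home]
new_situation = array([1, 1, 0, 0, 0])  # flowers plus a gift
happiness = sigmoid(dot(new_situation, weights))[0]
print("Happiness rate:", round(happiness, 4))
print("Happy wife?", "Yes" if happiness > 0.5 else "No")
```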


[-> output:
Predicting wife happiness from the following actions: 'bring her a bouquet of flowers' and 'buy her a gift'
Happy wife? No
Expected output: No

The first prediction is correct!

[-> output:
Predicting wife happiness from the following actions: 'send a love message', 'buy her a gift' and 'offer help at home'
Happy wife? Yes
Expected output: Yes

The results are all correct. It seems the neuron works great, and now we can make some future predictions for other combinations of actions.


[-> output:
Predicting wife happiness rate from the following actions: 'send a love message' and 'offer help at home'
[0.44302398]

The AI system’s output tells us there’s a 44% chance the wife will be happy with that. It is not very high; it is a bit risky, closer to a “happy wife” NO than to a “happy wife” YES.
Let’s try with something else…
Will the following actions make her happy?


[-> output:
Predicting wife happiness rate from the following actions: 'send a love message', 'date together', and 'offer help at home'
[0.99994244]

99.99% sure she will be happy!
Conclusion
It appears that a single artificial neuron is enough to learn how a wife’s mood will react in some situations.
In this post, I‘ve done a little Python coding to emulate a simple neural network that solves a decision problem: which actions to take to make a wife happier. We briefly looked at how to create artificial-intelligence predictions based on past events, used to train an artificial neural network through trial and error, in only a few lines of code. And we tested and analyzed the outputs to show that it all worked and gave us some interesting predictions. This doesn’t mean that all of the wife’s moods have been magically revealed, but it could be a good starting point.
