How recent neuroscience research points the way towards defeating adversarial examples and achieving a more resilient, consistent and flexible form of artificial intelligence

Painting by the author, Javier Ideami

In the year 2031 there is a new Alexa in town, Lucy, and artificial general intelligence is one of her features. What is Lucy made of?

Video by the author, Javier Ideami

Lucy anticipates your needs and concerns throughout the day, and is there to share the good times with you and comfort you through the hard ones. She was born from the next revolution that happened after the deep learning one. And what is Lucy made of?

Let’s imagine that we are in that year, 2031 (a symbolic number, as Lucy may not be feasible until many years after that date), and let’s entertain a variety of hypotheses about what kind of substrate Lucy may have. Let’s do it!

Image by the author, Javier Ideami

Extrapolation and analogies

AGI, artificial general…

Dive into the salty ocean of the brain and get closer to the entities that inspire our AI systems and make your thoughts possible. Explore how understanding the difference between artificial and biological neurons may give us clues about how to move towards a more flexible kind of artificial intelligence.

“Sounds of a million souls”, a 2 minute artistic tribute to our cortical columns and the billions of neurons in the neocortex (8K quality). Set the YouTube settings to 4K or 8K resolution + full screen for the best experience. “And as we approach the magnificent column, the mysterious pattern calling us from afar with the sounds of a million souls… I sense that the brightest sun is compressed in those tiny specks of wonder… reduced to a tapestry of dreams that resonate in our consciousness… And I hear you laugh… I hear you fall… I hear your tears devastate the…

Create a convolutional layer from scratch in Python, hack its weights with custom kernels, and verify that its results match what PyTorch produces

Photo by Meriç Dağlı on Unsplash

If you are starting to work with convolutional layers in deep learning, you may at times be confused by the mix of parameters, computations and channels involved. From stride to padding, input and output channels, kernels and learnable parameters, there is a lot going on. In this article, we are going to dig down to the very bottom of what happens inside these conv layers. We will:

  • Code a convolutional layer from scratch in Python to understand, bit by bit, what is going on when we pass data through one of these layers.
  • Hack the parameters of the convnet…
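The article verifies the from-scratch layer against PyTorch; as a minimal standalone sketch of the core idea, here is a naive NumPy version (the function name, the toy input and the hand-crafted edge-detection kernel are illustrative choices, not from the article):

```python
import numpy as np

def conv2d(x, kernels, stride=1, padding=0):
    """Naive 2D convolution (cross-correlation, as in deep learning).

    x:       input of shape (in_channels, H, W)
    kernels: weights of shape (out_channels, in_channels, kH, kW)
    """
    out_c, in_c, kh, kw = kernels.shape
    x = np.pad(x, ((0, 0), (padding, padding), (padding, padding)))
    _, h, w = x.shape
    oh = (h - kh) // stride + 1          # output height
    ow = (w - kw) // stride + 1          # output width
    out = np.zeros((out_c, oh, ow))
    for o in range(out_c):
        for i in range(oh):
            for j in range(ow):
                # slide the kernel over the input, one patch per output pixel
                patch = x[:, i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[o, i, j] = np.sum(patch * kernels[o])
    return out

# Hand-crafted vertical-edge kernel on a 1-channel 5x5 input
img = np.zeros((1, 5, 5))
img[:, :, 2] = 1.0                       # a bright vertical line in column 2
edge = np.array([[-1., 0., 1.],
                 [-1., 0., 1.],
                 [-1., 0., 1.]]).reshape(1, 1, 3, 3)
out = conv2d(img, edge)                  # shape (1, 3, 3)
```

The output responds positively just left of the line and negatively just right of it, which is exactly the behavior you can then compare against a PyTorch `Conv2d` whose weights you overwrite with the same kernel.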

Dive into transformers training & inference computations through a single visual

A link to a higher-resolution version of the full infographic is available at the end of this article. Visualization by Javier Ideami

The transformer architecture has produced a revolution in the NLP field and in deep learning. A multitude of applications are benefiting from the capacity of these models to process sequences in parallel while achieving a deeper understanding of their context through the attention mechanisms they implement. And GPT-3 is a hot topic right now in the deep learning community.

Understanding how the transformer processes sequences can be a bit challenging at first. When tackling a complex model, many people like to study how the computations of the model change the shapes of the tensors that travel through it.
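Tracking those tensor shapes can be sketched in a few lines of NumPy. Here is a toy multi-head attention pass with the shape of each intermediate tensor noted in comments (all dimensions are arbitrary illustrative values, not from the infographic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration
batch, seq_len, d_model, n_heads = 2, 5, 8, 2
d_head = d_model // n_heads                     # 4

x = rng.normal(size=(batch, seq_len, d_model))  # (2, 5, 8)

# Learned projection matrices for queries, keys and values
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

def split_heads(t):
    # (batch, seq, d_model) -> (batch, heads, seq, d_head)
    return t.reshape(batch, seq_len, n_heads, d_head).transpose(0, 2, 1, 3)

q, k, v = split_heads(x @ Wq), split_heads(x @ Wk), split_heads(x @ Wv)

# Scaled dot-product attention: every position attends to every position
scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(d_head)   # (2, 2, 5, 5)
scores -= scores.max(-1, keepdims=True)                  # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
out = weights @ v                                        # (2, 2, 5, 4)

# Merge the heads back: (batch, heads, seq, d_head) -> (batch, seq, d_model)
out = out.transpose(0, 2, 1, 3).reshape(batch, seq_len, d_model)
```

Note how the attention weights form a (seq_len, seq_len) matrix per head, which is what lets every token look at every other token in parallel.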

To that…

Explore visualizations of deep learning loss landscapes, the blessing of dimensionality and other visuals including GANs & Geometric DL

Loss landscape visualizations by Javier Ideami

In this article, you will:

  • Understand how the loss landscapes of neural networks can be visualized.
  • Explore high-resolution visualizations (both static and animated) of loss landscapes using real data captured from a variety of networks, from convolutional nets to GANs.
  • Explore ways to analyze these visualizations with examples.
  • Explore other visualizations that go into Geometric deep learning and Bayesian deep learning.
  • Explore the “blessing of dimensionality” concept.
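The basic recipe behind such visualizations is to pick two direction vectors in parameter space and evaluate the loss on a grid around the trained weights. A toy sketch of that idea (the quadratic loss and all names here are stand-ins; real visualizations of deep nets also normalize the directions filter-wise):

```python
import numpy as np

# Toy "network": a parameter vector theta and a loss over a tiny dataset
rng = np.random.default_rng(0)
theta_star = rng.normal(size=10)          # the trained parameters (toy)
data = rng.normal(size=(50, 10))

def loss(theta):
    # a simple convex surrogate loss, just to have a surface to plot
    return np.mean((data @ (theta - theta_star)) ** 2)

# Two random directions spanning the 2D slice we visualize
d1, d2 = rng.normal(size=10), rng.normal(size=10)

# Evaluate the loss on a grid: L(a, b) = loss(theta* + a*d1 + b*d2)
alphas = betas = np.linspace(-1, 1, 21)
surface = np.array([[loss(theta_star + a * d1 + b * d2)
                     for b in betas] for a in alphas])
# `surface` is now a 21x21 grid ready for a contour or 3D plot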
Visualizing the loss landscape. Infographic by Javier Ideami

Optimization and life. It’s all about movement

“Life requires Movement” — (Aristotle, 4th century BC)

“Life is movement. The more life there is, the more flexibility there is. The more…

Explore the concept of a GAN network by travelling back in time to the renaissance, as Leonardo faces one of his most surprising challenges

Leonardo slammed his fist on the table.
— Outrageous! Porca Miseria!

The courtiers of the king and other royalty contemplated the scene from the sides of the great room. Clara, his cousin and confidant, tried to comfort him.
— What happened Maestro? What’s the matter?

Leonardo hesitated for a moment. Then sighed and raised his eyes towards Clara. Suddenly, his index finger made a massive leap that took Clara’s gaze all the way to the other end of the sumptuous room.

— Antonio? What’s the matter with your apprentice? I thought you were so very delighted, and…

Everyone whispered hurriedly…

In the final part of this series, we predict malignancy in breast cancer tumors using the network we coded from scratch.

In part 1 of this series, we understood in depth the architecture of our neural network. In part 2, we built it using Python. We also understood in depth back-propagation and the gradient descent optimization algorithm.

In the final part 3, we will use the Wisconsin Cancer dataset. We will learn to prepare our data, run it through our network and analyze the results.
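The preparation step typically amounts to scaling the features and splitting the rows into train and test sets. A hedged sketch of that step (the random array below is only a stand-in for the real dataset, and the 80/20 split ratio is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the data: rows = samples, 9 feature columns + 1 label column
raw = rng.normal(size=(683, 10))
X, y = raw[:, :9], (raw[:, 9] > 0).astype(float)

# 1) scale every feature to [0, 1] so no single feature dominates the gradients
X = (X - X.min(0)) / (X.max(0) - X.min(0))

# 2) shuffle, then split into train / test sets (80 / 20)
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

Shuffling before the split matters: if the source file groups benign and malignant rows together, an unshuffled split would train and test on different class mixes.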

It’s time to explore the loss landscape of our network.

Navigating the Loss Landscape within deep learning training processes. Variations include: Std SGD, LR annealing, large LR or SGD+momentum. Loss values modified & scaled to facilitate…

In the second part of this series, we code a neural network from scratch and use it to predict malignant breast cancer tumors

In part 1 of this article, we understood the architecture of our 2-layer neural network. Now it’s time to build it! In parallel, we will explore and understand in depth the foundations of deep learning, back-propagation and the gradient descent optimization algorithm.
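The whole forward / backward / update cycle fits in a short NumPy loop. This is a minimal sketch of a 2-layer sigmoid network trained with gradient descent, not the article's exact code: the toy data, hidden size, learning rate and iteration count are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in data: 9 features (as in the Wisconsin set), binary labels
X = rng.normal(size=(100, 9))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

# 2-layer network: 9 -> 5 -> 1 (the hidden size 5 is an arbitrary choice)
W1, b1 = rng.normal(size=(9, 5)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)) * 0.1, np.zeros(1)

lr = 0.5
for _ in range(2000):
    # forward pass
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)                 # predictions in (0, 1)

    # backward pass: binary cross-entropy + sigmoid gives a simple delta
    d2 = (a2 - y) / len(X)                     # error at the output layer
    d1 = (d2 @ W2.T) * a1 * (1.0 - a1)         # back-propagated to layer 1

    # gradient descent update
    W2 -= lr * a1.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)

accuracy = np.mean((a2 > 0.5) == y)
```

The key line is the one computing `d1`: the output error is pushed backwards through `W2` and scaled by the sigmoid derivative `a1 * (1 - a1)`, which is back-propagation in its entirety for a network this small.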

Navigating the Loss Landscape within deep learning training processes. Variations include: Std SGD, LR annealing, large LR or SGD+momentum. Loss values modified & scaled to facilitate visual contrast. Visuals by Javier

First things first

First things first, if you want to try your own coding alongside this article, one option is to use Jupyter notebooks. They facilitate enormously working…

Predict malignancy in cancer tumors with a neural network. Build it from scratch in Python.

In this 3-part article you are going to:

  • Create a neural network from scratch in Python. Train it using the gradient descent algorithm.
  • Apply that basic network to the Wisconsin Cancer dataset. Predict if a tumor is benign or malignant, based on 9 different features.
  • Explore deeply how back-propagation and gradient descent work.
  • Review the basics and explore advanced concepts. In part 1 we explore the architecture of our network. In part 2, we code it in Python and go deep into back-prop & gradient descent. In part 3, we apply it to the Wisconsin Cancer dataset.

Let’s go…

Javier Ideami

A multidisciplinary engineer, researcher, creative director, artist and entrepreneur, working across augmented reality, deep learning, filmmaking, 3D and beyond.
