An Easy Introduction to LightOnML

High-quality libraries providing a simple API to run computations on optical accelerators let researchers iterate faster and let their creativity run wild. Picture of an optical system in a research lab by David Stillman.

In this post I am going to show you that you just need to know a little NumPy to use a LightOn OPU. I will introduce LightOnML, the Python framework to perform fast large-scale random projections with light. By the end, you will have the knowledge needed to take advantage of LightOn OPUs right away when you access LightOn Cloud.

Some years ago, I thought you needed to be some kind of wizard to run computations on graphics cards (GPUs). That was before I discovered Theano (RIP 😢), TensorFlow, Torch (RIP 😢) and PyTorch, and quickly realized using GPUs could be easy.

In the same way, you may think that doing Optical Computing is complicated, and that you need a degree in Physics to really understand what is going on. Maybe you associate it with Quantum Computing and weird names like the Bloch sphere.

Without further ado, let me show you how it works 👇

Fig. 1: Three thousand 10⁶×10⁶ random projections with five lines of code. Actually, three if you don’t count the imports. OK, two if you don’t count data generation. Could it get any easier than this?

The code in Figure 1 shows the essence of LightOnML. What is happening there? We performed y = |Rx|², where x is a 3,000×1,000,000 matrix and R is a 1,000,000×1,000,000 complex Gaussian matrix. The matrix R would be about 8 TB in single precision!
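To see where that 8 TB figure comes from: each entry of R is a single-precision complex number, i.e. two 4-byte floats. A quick back-of-the-envelope check:

```python
# Why R would be ~8 TB: a 10^6 x 10^6 matrix of single-precision
# complex entries (4-byte real part + 4-byte imaginary part = 8 bytes each).
n = 10**6
bytes_per_entry = 8
size_tb = n * n * bytes_per_entry / 1e12
print(size_tb)  # 8.0
```

The OPU never materializes R in memory: the random matrix is realized physically by light scattering, which is what makes this scale possible.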

  1. Import numpy to create our input data, and the OPUMap class from lightonml.projections.sklearn.
  2. Create some “dummy” data for demonstration purposes. The input of current LightOn OPUs must be binary and uint8.
  3. Create an OPUMap object that is responsible for talking to the hardware accelerator, and select the output size of the projection, n_components.
  4. Perform the optical transformation using the opu.transform method. The output is an array of uint8, with elements in [0, 255].
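The four steps above can be sketched as follows. This is a sketch, not the exact snippet from Figure 1: the OPUMap constructor arguments follow the lightonml documentation but may differ across versions, and when no OPU is reachable the code falls back to a small NumPy simulation of the same y = |Rx|² transform (with toy dimensions, a stand-in for the 3,000×1,000,000 input in the post):

```python
import numpy as np

# Step 2: "dummy" binary uint8 input (toy size: 3000 samples, 100 features).
X = np.random.randint(0, 2, size=(3000, 100), dtype=np.uint8)

try:
    # Steps 1 and 3: on LightOn Cloud, OPUMap talks to the optical hardware.
    from lightonml.projections.sklearn import OPUMap
    opu = OPUMap(n_components=80)  # output dimension of the projection
    # Step 4: the OPU computes y = |Rx|^2 optically.
    y = opu.transform(X)           # uint8 output, values in [0, 255]
except Exception:
    # No OPU available: simulate the transform with an explicit complex
    # Gaussian matrix R (only feasible at small sizes like this one).
    rng = np.random.default_rng(0)
    R = rng.normal(size=(80, 100)) + 1j * rng.normal(size=(80, 100))
    y = np.abs(X @ R.T) ** 2

print(y.shape)  # (3000, 80)
```

The simulated output is a float array rather than the OPU's uint8, but the shapes and the nonlinearity are the same, which is enough to prototype a pipeline before moving it to the hardware.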

It all boils down to sending and retrieving numpy.ndarrays. Actually, that works with torch.Tensors too! Just as you can run GPU computations without any idea of warps, lanes, registers, and threads, you can run computations on LightOn OPUs without any idea of Maxwell's equations.

Fig. 2: A convolutional autoencoder can be quickly trained to perform data-adaptive binarization for some tasks (picture courtesy of A. Cappelli).

Nice! But what is that thing about a binary input? Indeed, the input of an OPU needs to be binary. We provide both fixed and data-adaptive encoders in lightonml.encoding.base and lightonml.encoding.models. Alternatively, you can devise your own encoding scheme.
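To make "encoding" concrete, here is a minimal threshold encoder written from scratch in NumPy. It illustrates the kind of fixed transformation the encoders in lightonml.encoding.base perform (map arbitrary data to binary uint8); it is not the library's own implementation, and the function name and default are my own:

```python
import numpy as np

def threshold_encode(X, threshold=None):
    """Binarize X: 1 where X > threshold, else 0, as uint8.

    If no threshold is given, use the median of X as a simple
    data-driven default.
    """
    if threshold is None:
        threshold = np.median(X)
    return (X > threshold).astype(np.uint8)

X = np.random.randn(4, 8).astype(np.float32)
X_bin = threshold_encode(X)
# X_bin is now a binary uint8 array, ready to be sent to the OPU.
```

The data-adaptive encoders (like the convolutional autoencoder in Figure 2) learn the binarization from data instead of fixing it in advance.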

Some other nice features of the library:

  • The OPUMap scikit-learn wrapper can be placed in a Pipeline and it will play just fine with cross-validation tools.
  • The low-level code automatically optimizes the output signal-to-noise ratio (SNR).
  • It is possible to input bit-packed data.
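The Pipeline integration looks like standard scikit-learn. The sketch below uses a hypothetical MockOPUMap stand-in (random features y = |Rx|² computed on CPU, mimicking the optical transform) so it runs anywhere; on LightOn Cloud you would drop the real OPUMap into the same slot:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

class MockOPUMap(BaseEstimator, TransformerMixin):
    """CPU stand-in for lightonml's OPUMap (hypothetical, for illustration):
    random features y = |Rx|^2 with an explicit complex Gaussian R."""

    def __init__(self, n_components=100, seed=0):
        self.n_components = n_components
        self.seed = seed

    def fit(self, X, y=None):
        rng = np.random.default_rng(self.seed)
        d = X.shape[1]
        self.R_ = (rng.normal(size=(self.n_components, d))
                   + 1j * rng.normal(size=(self.n_components, d)))
        return self

    def transform(self, X):
        return np.abs(X @ self.R_.T) ** 2

# Toy binary data, like what an encoder would produce.
X = np.random.randint(0, 2, size=(200, 30), dtype=np.uint8)
y = np.random.randint(0, 2, size=200)

pipe = Pipeline([("opu", MockOPUMap(n_components=50)),
                 ("clf", LogisticRegression(max_iter=1000))])
pipe.fit(X, y)
score = pipe.score(X, y)
```

Because the wrapper follows the scikit-learn estimator API, cross_val_score, GridSearchCV, and friends work on the whole pipeline with no extra glue.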

There are other interesting nuggets in the library, but we don’t want to take away all the fun from you, do we?

Sparked your curiosity and want to know more about LightOnML features? Browse the library documentation.

Want to discover real applications of this library and research work by LightOn? Check our blog posts on Reinforcement Learning, Transfer Learning, the Double Descent Curve and more... and their associated lightonai GitHub repositories.

Want to use this library yourself? Register to the LightOn Cloud or apply to the LightOn Cloud for Research Program.

About Us

LightOn is a hardware company that develops new optical processors that considerably speed up Machine Learning computation. LightOn’s processors open new horizons in computing and engineering fields that are facing computational limits. Interested in speeding your computations up? Try out our solution on LightOn Cloud! 🌈

Follow us on Twitter at @LightOnIO, subscribe to our newsletter and/or register to our workshop series. We live stream, so you can join from anywhere. 🌍

The author

Iacopo Poli, Lead Machine Learning Engineer at LightOn AI Research.


Thanks to Victoire Louis for reviewing this blog post.



