Here, there and everywhere: the promises of optical computing
In our previous episode, we highlighted the need for new hardware paradigms that can scale to high-dimensional data on a tight energy budget. Among the candidates able to fulfill these demanding requirements, optical (photonic) computing stands out.
By leveraging fundamental laws of nature, such as Fermat's principle and interference, it is possible to encode complex operations in optical systems. These systems exhibit coveted properties:
- Massive parallelism: light beams can be superimposed without interacting with one another. Thus, parallelization is almost a free lunch.
- XXL bandwidths: light-based systems have theoretical bandwidth limits in the hundreds of THz, while classical transistors are limited to the GHz range.
- Trouble-free interconnects: standard electronics require information-carrying lines to be charged. Optical lines, however, are passive, thereby simplifying engineering constraints.
More importantly, the physics of these optical systems can be matched to specific applications. For instance, a lens naturally performs a Fourier transform, a mainstay of data science, at the speed of light. Data throughput is then only limited by how fast the light field can be shaped and recorded.
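For a point of reference, the operation a lens performs for free can be sketched digitally. The snippet below (plain NumPy, with an illustrative square aperture) computes the far-field diffraction pattern that a lens would form in its focal plane, which is precisely the 2D Fourier transform of the input field:

```python
import numpy as np

# A thin lens maps the field in its front focal plane to that field's 2D
# Fourier transform in its back focal plane. Digitally, the same operation
# costs O(N^2 log N) for an N x N image; optically, it happens as light
# propagates, regardless of N.
N = 256
aperture = np.zeros((N, N))
aperture[96:160, 96:160] = 1.0  # a square aperture as the "input image"

# What a camera in the focal plane would record: the intensity of the
# Fourier transform, i.e. the familiar sinc^2 diffraction pattern.
far_field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(far_field) ** 2
```

Here the digital cost grows with the image size, while the optical version does not: that asymmetry is the whole appeal.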
In the 1960s and 1970s, pushed by these promises, photonics entered a golden age. At the time, embedded electronic systems struggled to perform large computations. Accordingly, optical computing prevailed for demanding real-time applications: from processing synthetic-aperture radar on remote sensing satellites [1] to optical correlators for advanced pattern recognition applications (Figure 1).
This multitude of possibilities led researchers to pursue photonics as the basis for a general-purpose computing stack. In stark contrast to this prospect, photonics is currently confined to a few domain-specific applications, such as communications. Where did we go wrong?
As it turned out, building a programmable digital optical computer was — and remains — a daunting task. To understand why, we must explore what optically implementing logic rules entails.
One Logic to bring them all and in the darkness bind them
For a new computing paradigm to go mainstream, it needs to scale to a large variety of tasks. The proven way to do so is to implement basic logic operations. Indeed, all modern chipsets are but an intricate ensemble of NAND gates, allowing for the emulation of any logic rule.
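To make NAND universality concrete, here is a minimal Python sketch (purely illustrative, with bits modeled as the integers 0 and 1) building NOT, AND, OR, and XOR out of NAND alone:

```python
# Every Boolean function can be built from NAND gates alone.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    # The textbook four-NAND construction of XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

An optical transistor implementing `nand` reliably would, in principle, unlock everything else; as the rest of this section argues, that single step is where photonics stumbles.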
However, basic logic functionality is only the tip of the iceberg. As billions of transistors have to work in unison, additional demanding requirements have to be fulfilled:
- Cascadability: the output of one processing stage (e.g. an optical transistor implementing a NAND gate) must be able to drive the input of subsequent stages;
- Resilience: degradations in signal quality should not propagate through the system, allowing for a logical state to be recovered at all times;
- Consistency: individual components of the system should not require precise tuning.
While these requirements are trivial in electronics, no existing light-based technology can meet all of them. In particular, finding a simple, reproducible optical non-linearity — the basis of any logic gate — remains a challenge [2].
Meanwhile, as photonics plateaued, the field of electronics went through 60 years of iterative improvements. Future transistors are slated to change state with as little as tens of attojoules [3]. To be competitive with current electronics, an optical transistor would need to perform logical operations with only a few hundred photons, a difficult feat.
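A back-of-the-envelope calculation makes that photon budget explicit. Assuming telecom-wavelength (1550 nm) photons and an illustrative switching budget of 50 attojoules:

```python
# Illustrative numbers: a 50 aJ switching budget, 1550 nm photons.
h = 6.626e-34         # Planck constant, J*s
c = 3.0e8             # speed of light, m/s
wavelength = 1.55e-6  # telecom wavelength, m

photon_energy = h * c / wavelength  # ~1.3e-19 J per photon
budget = 50e-18                     # 50 attojoules
n_photons = budget / photon_energy
print(round(n_photons))             # a few hundred photons
```

With only a few hundred photons per operation, shot noise alone becomes a serious adversary, on top of the non-linearity problem above.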
Thus, entrapped by the lure of transistor-based computing, and unable to match the performance brought forth by the VLSI era, photonics eventually entered a lasting winter (Figure 2).
Silicon photonics: the promised savior of optical computing?
Silicon photonics and machine learning: the square peg in a round hole
Silicon photonics has been hailed as the new coming of optical processing. Leveraging the manufacturing infrastructure used for integrated circuits, photonic circuitry can seamlessly interact with its electronic counterpart (Figure 3). Communication and sensing applications have emerged: from 100 Gbit/s individual fibers for data centers [4], to silicon-photonics-powered LiDARs [5]. These renewed promises have motivated the big players to finance extensive research in silicon photonics.
However, silicon photonics has not yet yielded a silver bullet for optical computing. Perhaps this is because foreseeing the demands of the computing market is difficult, making it a challenge to develop fitting technologies in advance. As illustrated by the — now unfitting — introduction of Optical Computing Hardware, published in 1994:
High-performance computing and high-throughput photonic switching […] require the rise of fast logic device arrays, for which semi-conductor-based devices are currently available. Devices for other areas of optical computing such as neural networks, […] do not require high-speed operations.
This statement does not reflect modern machine learning workflows. Following trends in data availability — computer vision has gone from stamp-sized greyscale images to rich megapixel pictures — neural networks have grown massive. Not only do they require high-speed operations, but they require these operations to work in large dimensions. Few envisioned this prospect, and much of today’s silicon photonics is unfit for high-dimensional problems.
What’s more, even now that large machine learning computations are a clear key market, the underlying physics of silicon photonics is still ruthless and demanding. Small manufacturing imprecisions can have dramatic effects [6]. For instance, the transistor-sized Mach-Zehnder interferometers used for matrix multiplication still require individual tuning, a Sisyphean task.
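To see why each interferometer needs tuning, consider an idealized 2x2 Mach-Zehnder model: two perfect 50:50 couplers around a single internal phase shift (the phase values below are illustrative, not taken from any particular device):

```python
import numpy as np

# Idealized MZI: two 50:50 couplers around an internal phase shift theta.
# The bar-port power transmission is sin^2(theta / 2), so every weight in
# an MZI mesh depends directly on its phase, and a small fabrication
# error on theta shifts the matrix the mesh implements.
def mzi(theta):
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 coupler
    phase = np.diag([np.exp(1j * theta), 1])        # internal phase shift
    return bs @ phase @ bs

target = mzi(np.pi / 2)          # nominally a 50/50 power split
drifted = mzi(np.pi / 2 + 0.05)  # ~3 degrees of phase error

# How far the implemented weights drift from the target:
print(np.abs(drifted - target).max())
```

In a mesh of thousands of such devices, every phase error compounds across the matrix, which is why per-device calibration cannot be avoided.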
Rebooting optical computing: matching a market, not just building a technology
Like the IEEE’s Rebooting Computing initiative, we believe that specialized hardware is still central to meeting the demands of modern computing workloads.
Fifteen years ago, GPUs were little more than a gamers’ niche. They are now ubiquitous in machine learning workflows and have taken over datacenters worldwide (Figure 4). And yet, as data is growing bigger and richer, machine learning is still compute-strapped. Even GPUs are struggling to meet these ever-increasing demands.
Hardware truly specialized in high-dimensional computations is needed, and transistor-based approaches will not do. Instead, we need a carefully designed, machine-learning-oriented form of optical computing.
At LightOn, we have built our Optical Processing Unit (OPU) to be this new form of optical computing. Our technology has been designed around three founding principles:
- High-dimensional by nature: machine learning applications are hungry for bigger data. Rather than constraining our technology to a plane — like in silicon photonics — we should leverage all three dimensions, by shaping 2D information directly into beams of light.
- Unique in its simplicity: our technology should rely on off-the-shelf components. It should be robust and allow for rapid iteration. While silicon photonics still struggles with manufacturing, LightOn’s accelerators are already available on the cloud.
- Linear algebra oriented: modern science and engineering are built on linear algebra. Rather than implementing logic rules, our technology should implement specialized building blocks for linear algebra.
In the next installment, we will present the linear algebra building block currently implemented by our OPU: random projections. From their mathematical definition to their use in compressed sensing, and the possibilities they open in machine learning, we will outline why random projections are so important to the future of computing.
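As a teaser, the core idea can be previewed digitally. Below is a toy random projection in NumPy (dimensions and seed chosen arbitrarily): pairwise distances between points in a 10,000-dimensional space are approximately preserved after projecting down to 512 dimensions, in the spirit of the Johnson-Lindenstrauss lemma:

```python
import numpy as np

# A random projection compresses high-dimensional data while roughly
# preserving pairwise distances. The OPU performs an operation of this
# flavor optically; this is a digital analogue at toy scale.
rng = np.random.default_rng(0)
d, k, n = 10_000, 512, 20  # ambient dim, target dim, number of points

X = rng.normal(size=(n, d))               # high-dimensional data
R = rng.normal(size=(d, k)) / np.sqrt(k)  # random projection matrix
Y = X @ R                                 # compressed representation

# Distances before and after projection are close:
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))
```

Note that the matrix `R` requires no training and no calibration, which is precisely what makes the operation a good fit for analog optical hardware.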
Our Summer Series exploring the story behind LightOn:
- 1 — Faith No Moore: Silicon Will Not Scale Indefinitely
- 2 — Optical Computing: a New Hope (this post)
- 3 — How I Learned to Stop Worrying and Love Random Projections
- 4 — Random Projections at the Speed of Light: Full Ahead Mr. Sulu, Maximum Warp
Stay updated on our advancements by subscribing to our newsletter. Liked what you read and eager for more? You can check out our website, as well as our publications. Seeing is believing: you can request an invitation to LightOn Cloud, and take one of our Optical Processing Units for a spin. Want to be part of the photonics revolution? We are hiring!
[1] Samuel Weaver et al. Nonlinear Techniques in Optical Synthetic Aperture Radar Image Generation and Target Recognition. Applied Optics, 1995.
[2] David Miller. Are Optical Transistors the Logical Next Step? Nature Photonics, 2010.
[3] David Miller. Attojoule Optoelectronics for Low-Energy Information Processing and Communications. Journal of Lightwave Technology, 2017.
[4] Christopher Doerr et al. Single-Chip Silicon Photonics 100-Gb/s Coherent Transceiver. Optical Fiber Communication Conference, 2014.
[5] Christopher V. Poulton et al. Coherent Solid-State LiDAR with Silicon Photonic Optical Phased Arrays. Optics Letters, 2017.
[6] Michael Fang et al. Design of Optical Neural Networks with Component Imprecisions. Optics Express, 2019.
Julien Launay, Machine Learning R&D engineer at LightOn AI Research.