New light-based switches could dramatically improve internet speeds
By Ahad Rauf, BS Electrical Engineering & Computer Sciences ‘20
The world is becoming connected, not by electricity, but by light.
90% of the data in world history has been generated in the past three years alone, and we’re currently producing far more data than we can reasonably consume: enough data for every person in the world to read 66 Bible-length books per day. Much of this data gets stored in data centers, giant warehouses packed with servers managing billions of data files, and with current techniques we’ve optimized them so that someone can request any particular file and hear back, in many cases, within a fifth of a second.
To achieve these speeds, electricity-based cables like the ones used to connect your computer to the Internet simply aren’t fast enough. Just as the communications industry has replaced electrical cables with light-based fiber optic cables over the past two decades, recently many data centers have replaced electrical cables with optical cables, which offer higher speeds at longer distances and lower energy consumption.
However, Ming Wu, a professor in the Electrical Engineering and Computer Science department at the University of California at Berkeley, believes this setup is still not fast enough. New research by Wu into optical switches, which link multiple fibers together to enable inter-communication, suggests that a new technology called silicon photonics can be used to dramatically improve speed and performance over existing methods.
Fundamentally, optical switches take in an array of cables, anywhere from eight to upwards of three hundred, and allow light to travel from any input cable to any desired output cable. In data centers, they’re used extensively to route light between different clusters of servers across the data center, and the average user request will go through 3–4 different optical switches before it reaches a server that’s free to handle it. The enormous size and complexity of modern data centers make these switches a very important yet time-consuming part of the process, and improvements in optical switch efficiency hold the potential to significantly improve the data center workflow.
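The core behavior described above, any input cable can be routed to any free output cable, is what engineers call a crossbar. The following is a toy sketch of that routing logic, purely for illustration; the class and port numbers are invented here and say nothing about how Wu's silicon photonics hardware is actually built.

```python
# Toy model of an N-by-N optical crossbar switch: any input port can be
# connected to any output port, as long as no port is used twice.
# Illustrative only -- real optical switches do this with light, not code.

class CrossbarSwitch:
    def __init__(self, n_ports):
        self.n_ports = n_ports
        self.routes = {}  # maps each active input port to its output port

    def connect(self, inp, out):
        """Route light from input port `inp` to output port `out`."""
        if not (0 <= inp < self.n_ports and 0 <= out < self.n_ports):
            raise ValueError("port out of range")
        if inp in self.routes or out in self.routes.values():
            raise ValueError("port already in use")
        self.routes[inp] = out

    def disconnect(self, inp):
        """Free an input port (and its output) for a new connection."""
        self.routes.pop(inp, None)

# A 240-port switch, the scale of the array Wu's group published:
switch = CrossbarSwitch(240)
switch.connect(0, 17)
switch.connect(5, 3)
print(switch.routes)  # {0: 17, 5: 3}
```

The one-to-one constraint in `connect` captures why scaling is hard: a switch with N inputs and N outputs must be able to realize any of the N-factorial possible pairings on demand.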
A user request travels through multiple levels of optical switches before reaching a server that can handle it. Source: adapted from Bergman, Rumley et al., OECC 2016.
Wu uses a technology called silicon photonics for his next-generation optical switches. He describes silicon photonics as “a hot emerging topic, and it takes advantage of all the advances in CMOS,” the technology used to run almost all modern electronic devices, from cell phones and laptops to the largest cars and airplanes. The only difference is that, while most electronic devices use circuit boards that move electrons around to create current, silicon photonics creates circuit boards that can move light around instead.
Silicon photonics allows Wu’s group to produce extremely cheap and reliable switches at high volumes. It also removes many of the traditional problems with small-scale light communication, like low power efficiency and light scattering if two beams cross in midair. As an added bonus, the technology also allows him to add multiple communication pathways stacked on top of each other, letting him pack more information in a smaller volume.
Silicon photonics’ ability to add multiple layers allows Wu to stack communication pathways and switching elements right on top of each other. Source: adapted from Han, Seok et al., Journal of Lightwave Technology 2018.
Currently, optical switches in data centers use 3D arrays of mirrors moving in a vacuum. These are highly scalable, even with hundreds of input and output cables to
route between, and can operate with extremely low power loss. However, they’re often extremely expensive, costing hundreds of dollars for every additional input or output cable, and they can also be very complex to control. Competing technologies, meanwhile, often get limited by scalability — more cables mean more connections, and few alternative designs make it past a 64 input-to-64 output cable array.
With his multi-layer silicon photonics model, however, Wu published a working 240-to-240 cable array this year, and he’s currently testing a 320-to-320 cable array. Moreover, he’s shown that his design costs a tenth as much as traditional 3D mirror arrays while switching up to 10,000 times faster. That means Wu’s technology holds the potential to dramatically improve data center speed and efficiency, which translates into faster and cheaper internet service for us all.
The next steps for Wu’s group are further scaling and testing of their silicon photonics design. The current data center standard calls for 310-to-310 cable arrays or larger, and as Wu scales his switches to that size, he also plans to further optimize the chips to integrate smoothly into the data center workflow.
“For data centers, everything has to be cheap, [the] quantity is large, and the switching time demand is more stringent,” Wu says. “Different workflows and different problems need different optimum connectivity.” Wu envisions applying his optical switch design to infrastructures optimized for artificial intelligence and machine learning. These fields are used extensively in companies working on robotics, smart devices, and large software products like Google and Facebook, but they chronically suffer from long program run times and intensive computation demands. As Wu puts it, “if you can make it go from 20 minutes to 10 minutes, then it’s worthwhile to spend a few microseconds doing the switching.”
These advancements and future visions hold great potential for the future of communication and information storage. Global internet traffic is expected to expand fivefold over the next five years, and will have increased 127-fold between 2005 and 2021. Today’s 66 Bibles a day will turn into 330 Bibles a day, and we can only begin to anticipate the advancements we will make with so much information at our disposal.
To process all this information, moreover, Wu envisions one day removing the electrical connections entirely and having computers talk to other computers exclusively through light, a goal he describes as the “holy grail” for his group. Professor Ming Wu and his
group are optimistic about the future of integrating silicon photonics into existing data center infrastructures as the first step towards this dream, and hope that one day their designs will be integrated into Google’s data centers to improve the speed and cost efficiency of their services across the planet.
The work engineers do shapes the world around us. But given the technical nature of that work, non-engineers may not always realize the impact and reach of engineering research. In E185: The Art of STEM Communication, students learn about and practice written and verbal communication skills that can bring the world of engineering to a broader audience. They spend the semester researching projects within the College of Engineering, interviewing professors and graduate students, and ultimately writing about and presenting that work for a general audience. This piece is one of the outcomes of the E185 course.
Connect with Ahad Rauf.