A physical layer comparison of cellular and WiFi evolution over the last three decades.
Cellular and WiFi — It Takes Two to Tango
There are a lot of similarities between the cellular and WiFi evolution, especially for the physical layer. Today, both are ubiquitous and indispensable: a rare case of two competing technologies carving a niche for themselves and co-existing. Let us take a look at their interesting history!
The cellular story started in the early 90s when the Global System for Mobile Communications (GSM) became the standard for wireless communication. Before GSM, there were a handful of different analogue communication technologies in use across the world, but each was regional and limited to a select set of countries. In the 1980s, Europe brought its countries and industries together to develop a common standard for mobile communications. The required infrastructure had been in development for several years before the first GSM call was made in 1991. From there, the technology quickly took off, and by the end of the 90s there were hundreds of millions of people using GSM.
When I grew up in the 80s in India, communication between people happened mostly through handwritten letters, and telegrams were used to quickly inform people of an emergency. Towards the end of the 80s, I saw telegrams replaced by “wired” telephones with “subscriber trunk dialing”, where people would call the telephone exchange and ask the operator to connect them to the number they wanted to speak to. The telephones were expensive and were still used only for emergencies. We had one telephone per street, and usually someone would come running to our house to inform us that we had a call. By the late 90s, most middle class households had a telephone.
Technology usually takes time to percolate down to everyone. But wireless communication reached people faster than, say, computers. Personal computers were commercially available by the mid 1970s, but the first computer I saw was in 1995, when my school purchased one. That is two full decades, and it took even more time for them to become a common sight in households. Compare that to everyone I knew in India having a mobile phone by the mid 2000s, just a little over a decade into commercial mobile communications and half the time it took for computers. Part of the reason is the opening up of the Indian economy in 1991, but it is not hard to believe that communication between people matters to the populace far more than personal computers. But enough of me reminiscing and opining. You came here for a comparison between cellular and WiFi evolution, so let us dive right into that.
Here is the trailer!
The Beginning: GSM and 802.11b
GSM was the first commercially successful digital mobile communication technology, where we generate “data bits” and pass them through a channel encoder to generate “coded bits”, which are then mapped to “symbols” digitally before being converted to analog using a digital-to-analog converter (DAC), upconverted to radio frequency (900 or 1800 MHz) and transmitted over the air. GSM is a single carrier system with a channel bandwidth of 200 kHz (and a symbol rate of about 270 ksymbols/s), using time division multiple access (TDMA) to share each frequency channel across users; downlink (transmission from base station to user equipment) and uplink (transmission from user equipment to base station) use a pair of frequency channels, with transmit and receive time slots offset so that a device never transmits and receives at the same instant. It opted for convolutional encoding as the forward error correction scheme and Gaussian minimum shift keying (GMSK) for digital modulation.
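To make the “coded bits” step concrete, here is a minimal sketch of a rate-1/2 convolutional encoder in Python, using the generator polynomials commonly cited for the GSM full-rate traffic channel (G0 = 1 + D^3 + D^4 and G1 = 1 + D + D^3 + D^4). Treat the zero-initialized register and the missing tail bits as simplifications, not spec-exact behavior.

```python
import numpy as np

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 5:
    two coded bits out for every data bit in."""
    d = [0, 0, 0, 0]                          # delay line: b[n-1] .. b[n-4]
    coded = []
    for b in bits:
        coded.append(b ^ d[2] ^ d[3])         # G0 = 1 + D^3 + D^4
        coded.append(b ^ d[0] ^ d[2] ^ d[3])  # G1 = 1 + D + D^3 + D^4
        d = [b] + d[:3]
    return np.array(coded)

print(conv_encode([1, 0, 1, 1, 0, 0, 1, 0]))  # 16 coded bits from 8 data bits
```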
GMSK can be viewed as a form of “frequency shift keying”, where bit 0 is mapped to one frequency and bit 1 is mapped to another. But GMSK can also be implemented using a quadrature modulator (with both sine and cosine mixers for converting the baseband signal to RF) because of its connection to offset-QPSK, where the I and Q streams are offset in time so that at any given instant there is a phase transition in only one of them.
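Here is a toy GMSK modulator in Python following the frequency shift keying view: NRZ bits are smoothed by a Gaussian filter and the result is integrated into phase. It ignores GSM specifics like differential encoding and burst structure, so read it as a sketch of the waveform rather than of the standard.

```python
import numpy as np

def gmsk_modulate(bits, sps=8, bt=0.3):
    """NRZ bits -> Gaussian pulse shaping -> phase integration -> complex baseband."""
    nrz = np.repeat(2 * np.asarray(bits) - 1, sps).astype(float)
    t = np.arange(-2 * sps, 2 * sps + 1) / sps      # filter spans 4 bit periods
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)   # set by the BT product (0.3 in GSM)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    g /= g.sum()
    freq = np.convolve(nrz, g, mode="same")         # smoothed frequency pulses
    phase = 0.5 * np.pi * np.cumsum(freq) / sps     # +/- pi/2 phase change per bit
    return np.exp(1j * phase)                       # note the constant envelope

iq = gmsk_modulate(np.random.randint(0, 2, 100))
```

The constant envelope in that last line is the whole point: it lets the handset use an efficient, non-linear power amplifier.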
This being a single carrier system, the receiver required a time domain equalizer to compensate for multipath channel impairments. Advanced receivers used maximum likelihood sequence estimation (MLSE) based on the Viterbi algorithm.
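To show the idea behind MLSE, here is a tiny Viterbi equalizer for BPSK over a known two-tap channel; the modulation, the channel length and the known starting symbol are all simplifying assumptions (a real GSM receiver handles GMSK and longer delay spreads).

```python
import numpy as np

def mlse_bpsk(rx, h0, h1):
    """Viterbi MLSE for y[n] = h0*s[n] + h1*s[n-1] + noise, s in {-1,+1}.
    Trellis state = previous symbol; branch metric = squared error."""
    sym = np.array([-1.0, +1.0])
    cost = np.array([np.inf, 0.0])   # assume s[-1] = +1 (a known training symbol)
    surv = [[], []]                  # survivor path per state
    for r in rx:
        new_cost = np.full(2, np.inf)
        new_surv = [None, None]
        for cur in (0, 1):           # hypothesized current symbol -> next state
            for prev in (0, 1):
                m = cost[prev] + abs(r - h0 * sym[cur] - h1 * sym[prev]) ** 2
                if m < new_cost[cur]:
                    new_cost[cur], new_surv[cur] = m, surv[prev] + [cur]
        cost, surv = new_cost, new_surv
    return np.array(surv[int(np.argmin(cost))])   # symbol indices double as bits

bits = np.random.randint(0, 2, 50)
s = 2.0 * bits - 1
rx = 0.9 * s + 0.5 * np.r_[1.0, s[:-1]] + 0.05 * np.random.randn(50)
print((mlse_bpsk(rx, 0.9, 0.5) == bits).all())    # True: the ISI is resolved
```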
The initial usage was for voice only, with a circuit switched network — which means a connection between two devices needs to be established before information is exchanged between them, and this connection is held until the call is disconnected. This requires blocking a part of the switch capacity for each connection. By the late 90s, GPRS (General Packet Radio Service) was added for data transfer and kick-started the evolution of the core network towards being “packet” based. I must stop short of discussing the “evolved packet core” and restrict myself to my area of expertise, so let us get back to the physical layer.
By the late 90s, GSM was well established across the world for mobile voice communication, but the hunger for data was growing. The internet was becoming popular, and access to that enormous collection of information was starting to change the world. No wonder people wanted wireless access to it, so as not to be tied to a DSL (digital subscriber line) modem with a wired connection. Laptops were entering the market around this time too, offering mobility compared to desktop computers, and if computers could be carried around, why not the internet? The time was ripe for wireless access to the internet — enter the IEEE 802.11 standard, the Wireless Local Area Network (WLAN).
The first of the 802.11 set of standards, now grouped under 802.11b, jumped the gun on GSM and adopted spread spectrum as the communication technology, allowing it to use a massive 22 MHz of bandwidth to deliver data rates up to 2 Mbps. Compare this to the measly 230 kbps offered by GPRS, and it is nearly a 10-fold increase!
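Direct sequence spreading fits in a few lines of Python. The 11-chip Barker code below is the one 802.11 uses; the sketch skips the differential encoding of the actual DBPSK mode. Each bit becomes 11 chips, so 1 Mbps of data turns into 11 Mchip/s occupying roughly 22 MHz, and correlating at the receiver buys back an 11x processing gain.

```python
import numpy as np

BARKER = np.array([+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1])  # 11-chip Barker code

def spread(bits):
    """Multiply each (antipodal) bit by the full chip sequence."""
    return np.concatenate([b * BARKER for b in 2 * np.asarray(bits) - 1])

def despread(chips):
    """Correlate each 11-chip block with the code and take the sign."""
    return (chips.reshape(-1, 11) @ BARKER > 0).astype(int)

bits = np.random.randint(0, 2, 20)
noisy = spread(bits) + 0.8 * np.random.randn(20 * 11)  # heavy chip-level noise
print((despread(noisy) == bits).all())                 # processing gain shrugs it off
```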
Like GSM, 802.11b uses single carrier modulation, and the receiver uses time domain equalization for symbol recovery. In 1999, the 802.11b amendment introduced enhancements (complementary code keying) that increased the data rate to 11 Mbps!
The Buildup: WCDMA and 802.11a/g
It takes two to Tango, and now that WLAN has led the way by adopting spread spectrum, cellular must follow! Andrew J. Viterbi, the inventor of the Viterbi algorithm and co-founder of Qualcomm, made significant contributions in bringing Code Division Multiple Access (CDMA), based on direct sequence spread spectrum, to reality in cellular. There were competing implementations along the way — IS-95 and its successor CDMA2000 were adopted in the USA as the answer to Europe's GSM, but it was Wideband CDMA (WCDMA) that came to dominate in the early 2000s. WCDMA is popularly known as 3G, and the core network too evolved at this point to support packet based architectures for handling IP (internet protocol) traffic.
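The core CDMA idea also fits in a few lines: users share the same carrier at the same time and are separated by orthogonal spreading codes. The length-8 Walsh codes below are purely illustrative (WCDMA uses OVSF channelization codes plus scrambling), but the orthogonality trick is the same.

```python
import numpy as np

walsh_a = np.array([+1, +1, +1, +1, -1, -1, -1, -1])  # two orthogonal Walsh codes
walsh_b = np.array([+1, -1, +1, -1, +1, -1, +1, -1])

bits_a = 2 * np.random.randint(0, 2, 10) - 1          # user A's antipodal bits
bits_b = 2 * np.random.randint(0, 2, 10) - 1          # user B's antipodal bits
on_air = np.concatenate([a * walsh_a + b * walsh_b for a, b in zip(bits_a, bits_b)])

# Receiver A correlates with its own code; user B's signal integrates to zero
rx_a = on_air.reshape(-1, 8) @ walsh_a / 8
print((np.sign(rx_a) == bits_a).all())                # True, thanks to orthogonality
```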
WCDMA, the physical layer technology adopted by UMTS (Universal Mobile Telecommunications System), included support for Frequency Division Duplex (FDD) access, which means a pair of channels, one for downlink and one for uplink, can simultaneously transmit and receive signals over the air. Until now, WLAN had been TDD and GSM devices, despite their paired bands, were effectively half-duplex because of the offset time slots; full-duplex FDD in cellular removed the inconvenience of devices having to wait for reception while delaying the data to be transmitted. And this is not the only one-upmanship: WCDMA also introduced the near-capacity-achieving Turbo codes as the error correction scheme — one of the best inventions channel coding research had on offer at the time. Turbo codes are based on parallel concatenated convolutional codes, and a soft-output Viterbi algorithm is used as the component decoder in iterative decoding to get very close to the Shannon capacity limit. The game is afoot!
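A turbo encoder is surprisingly little code. Below is a sketch of the rate-1/3 parallel concatenation with UMTS-style component polynomials; the random permutation stands in for the standardized interleaver, and trellis termination is ignored. The magic is really in the decoder, where two soft-output component decoders iterate, exchanging information about the same bits seen in two different orders.

```python
import numpy as np

def rsc_parity(bits):
    """Recursive systematic convolutional component encoder:
    feedback 1 + D^2 + D^3, feedforward 1 + D + D^3."""
    s = [0, 0, 0]                     # register holding a[k-1], a[k-2], a[k-3]
    parity = []
    for u in bits:
        a = u ^ s[1] ^ s[2]           # recursive feedback
        parity.append(a ^ s[0] ^ s[2])
        s = [a, s[0], s[1]]
    return np.array(parity)

def turbo_encode(bits, interleaver):
    """Rate-1/3 output: systematic bits, parity 1, parity 2 (on interleaved bits)."""
    bits = np.asarray(bits)
    return bits, rsc_parity(bits), rsc_parity(bits[interleaver])

k = 40
perm = np.random.permutation(k)       # stand-in for the real UMTS interleaver
sys_bits, p1, p2 = turbo_encode(np.random.randint(0, 2, k), perm)
```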
But oh my, WLAN changed the game again! It introduced multi-carrier modulation with OFDM (Orthogonal Frequency Division Multiplexing) in the early 2000s and supported peak data rates of 54 Mbps! It specified operation in both the 2.4 GHz (802.11g) and 5 GHz (802.11a) unlicensed bands.
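The OFDM transmitter itself is elegantly simple: place QAM values on subcarriers, take an IFFT, and prepend a cyclic prefix so that the multipath channel turns into a single complex gain per subcarrier, with no time domain equalizer needed. A minimal 802.11a/g-style symbol, with QPSK standing in on all 52 used subcarriers:

```python
import numpy as np

def ofdm_symbol(qam, nfft=64, ncp=16):
    """One 802.11a/g-style OFDM symbol: 52 used subcarriers (48 data + 4 pilots)
    on bins -26..-1 and 1..26. At 20 Msps: 64 + 16 samples = a 4 us symbol."""
    assert len(qam) == 52
    spectrum = np.zeros(nfft, dtype=complex)
    spectrum[1:27] = qam[:26]             # positive-frequency subcarriers
    spectrum[-26:] = qam[26:]             # negative-frequency subcarriers (DC unused)
    time = np.fft.ifft(spectrum)
    return np.concatenate([time[-ncp:], time])   # cyclic prefix + useful part

qpsk = ((2 * np.random.randint(0, 2, 52) - 1) +
        1j * (2 * np.random.randint(0, 2, 52) - 1)) / np.sqrt(2)
tx = ofdm_symbol(qpsk)
```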
WCDMA, which was based on a 5 MHz bandwidth, introduced HSPA (High Speed Packet Access) with higher order modulation (up to 64-QAM) and multiple antennas (up to 4x4 MIMO in later releases) to reach peak data rates of 42.2 Mbps. Still a bit shy of the 54 Mbps that WLAN can provide, but the difference is mobility. Cellular is designed for outdoor connectivity, focusing on maintaining the connection over very long distances and at high speeds, whereas WLAN is meant for indoor connectivity and low mobility environments.
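That 42.2 Mbps falls out of simple arithmetic. My back-of-the-envelope version, assuming the single-carrier 64-QAM plus 2x2 MIMO configuration:

```python
chip_rate = 3.84e6               # WCDMA chip rate, chips per second
codes, sf = 15, 16               # 15 parallel channelization codes, spreading factor 16
bits_per_symbol, streams = 6, 2  # 64-QAM, 2x2 MIMO
raw = chip_rate / sf * codes * bits_per_symbol * streams
print(raw / 1e6)                 # ~43.2 Mbps raw; overhead leaves the quoted 42.2 Mbps
```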
Not satisfied, cellular tried playing the multi-carrier game with WCDMA, introducing carrier aggregation, first with DC-HSPA (dual carrier) and later MC-HSPA (multi carrier), where two or more carriers/channels are aggregated into one data pipe, offering a way to increase bandwidth and thus the data rate.
3GPP (the 3rd Generation Partnership Project), a global consortium of industries and the standardizing body for cellular technology, had by now realized that OFDM is a more elegant way of doing multi-carrier than complex carrier aggregation on top of WCDMA. It is a Tango after all, and when one partner leads, the other must follow!
But hold on, carrier aggregation is not going away and will make a surprise entry again.
The Evolution: LTE and 802.11n/ac/ax
Move over WCDMA, the next gen is here! Enter LTE (Long Term Evolution) in 2008, adopting OFDM as the physical layer multiple access technology. Now cellular can go toe to toe with WLAN.
But wait, cellular has led with MIMO and Turbo codes. It is time for WLAN to catch up with a follow! WLAN introduced 802.11n in 2009, with up to 4x4 MIMO and the capacity-approaching Low-Density Parity-Check (LDPC) codes for error correction. WLAN also expanded the bandwidth options beyond the 20 MHz of 802.11a/g to include 40 MHz, resulting in a whopping peak data rate of 600 Mbps. Beat that, cellular!
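Where does 600 Mbps come from? Tone-counting arithmetic, assuming the best-case configuration (40 MHz, 64-QAM at rate 5/6, short guard interval, 4 streams):

```python
data_subcarriers = 108          # data tones in a 40 MHz 802.11n channel
bits_per_tone = 6 * 5 / 6       # 64-QAM with rate-5/6 coding
symbol_time = 3.6e-6            # 3.2 us + 0.4 us short guard interval
per_stream = data_subcarriers * bits_per_tone / symbol_time
print(per_stream / 1e6, 4 * per_stream / 1e6)   # 150 Mbps per stream, 600 Mbps with 4
```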
In turn, or in return, LTE is going to make not just one, but a whole sequence of moves!
The first LTE release, with bandwidth options of 1.4, 3, 5, 10, 15 and 20 MHz and 2x2 MIMO, provided a peak data rate of 150 Mbps in the downlink and 50 Mbps in the uplink. Then came support for 4x4 MIMO, before upping the game with LTE-Advanced (LTE-A).
Carrier aggregation made a comeback in LTE-A as the option to increase bandwidth, allowing up to 5 component carriers to be aggregated. Coupled with support for up to 8-layer spatial multiplexing (8x8 MIMO), this allows theoretical peak data rates of 3 Gbps in the downlink.
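The same napkin math works for LTE and LTE-A (64-QAM assumed, control and reference signal overhead hand-waved):

```python
re_per_sec = 100 * 12 * 14 * 1000  # 20 MHz: 100 RBs x 12 subcarriers x 14 symbols/ms
raw_2x2 = re_per_sec * 6 * 2       # 64-QAM, 2 layers: ~201.6 Mbps raw
print(raw_2x2 / 1e6)               # overhead and coding bring this to the quoted 150
print(150e6 * 5 * (8 / 2) / 1e9)   # LTE-A: 5 carriers, 8 layers instead of 2 -> ~3 Gbps
```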
Between 2011 and 2017, across different releases, LTE-A also introduced an array of options covering an umbrella of use cases:
- Inter Cell Interference Coordination (ICIC) to mitigate interference from neighboring base stations. The base stations talk to each other and decide the schedule for Almost Blank Subframes (ABS), so that when one base station is transmitting, the interfering base station transmits an ABS to minimize the interference. Further enhancements also provide an option for the base station to avoid transmitting its control and data channels on the subcarriers that the neighboring base station uses for CRS (Cell-specific Reference Signal). Advanced user equipment can also estimate the channel from the neighboring base station’s CRS to perform interference cancellation on those subcarriers.
- Coordinated Multi Point (CoMP) transmission to improve throughput at the cell edge, either via dynamic point selection (one of the two base stations transmits to the UE, depending on the channel quality measurements from the UE) or via joint transmission using appropriate beamforming weights.
- Discontinuous Reception (DRX) to schedule the user equipment (UE) to go to sleep and save power. Transmissions to the UE are sent only during the periods the UE is awake.
- Narrow Band Internet of Things (NB-IoT) provides for IoT use cases where the device requires very low throughput. This is achieved by using a bandwidth of one resource block, that is 180 kHz (12 subcarriers of 15 kHz each), either in the guard band or within the LTE bandwidth.
- Device to Device (D2D) communications allows two UE devices to communicate directly by introducing “sidelink” channels with their own primary and secondary synchronization signals. This is easy to do in WLAN, where there is no difference between downlink and uplink transmissions, but LTE uses OFDMA in the downlink and SC-FDMA (single carrier frequency division multiple access, which is DFT-precoded OFDM; see the sketch after this list) in the uplink. Since UE receivers are designed to receive only downlink transmissions, entirely new channels were introduced for D2D support.
- V2X (vehicle to everything) communication to support high speed mobility use cases, such as communication between autonomous cars.
Not all of these options are implemented, deployed and commercially available, but the support is present in the standard.
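On that SC-FDMA remark above: the LTE uplink waveform is literally OFDM with one extra DFT in front, which restores a single-carrier-like peak-to-average power ratio and saves UE battery. A sketch (the FFT size and subcarrier offset are arbitrary choices):

```python
import numpy as np

def scfdma_symbol(qam, nfft=512, start=0):
    """DFT-precoded OFDM: the DFT of the user's QAM block is placed on its
    allocated subcarriers before the usual IFFT (cyclic prefix omitted)."""
    m = len(qam)
    precoded = np.fft.fft(qam) / np.sqrt(m)   # the extra DFT precoding step
    spectrum = np.zeros(nfft, dtype=complex)
    spectrum[start:start + m] = precoded      # contiguous subcarrier allocation
    return np.fft.ifft(spectrum)

qpsk = np.exp(1j * np.pi / 4 * (2 * np.random.randint(0, 4, 120) + 1))
tx = scfdma_symbol(qpsk)    # lower peak-to-average power than plain OFDM
```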
To catch up during all these moves, WLAN introduced 802.11ac with support for up to 8x8 MIMO and bandwidth options of 20, 40, 80 and 160 MHz, operating in the 5 GHz band. With 160 MHz bandwidth and 4 spatial streams, peak data rates can theoretically reach about 3.5 Gbps (roughly double that with all 8 streams). The physical layer catches up with LTE-A, but the WLAN MAC is still encumbered by CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), meaning WLAN misses out on all the scheduling benefits of OFDMA. Realizing that later in the game, IEEE standardized 802.11ax in 2019, bringing the goodness of OFDMA to AP (access point) scheduling, and introducing TWT (Target Wake Time) to provide a power saving option similar to DRX. The subcarrier spacing was also reduced (from 312.5 kHz to 78.125 kHz) in 802.11ax to allow for finer resource allocation across users, and together with 1024-QAM this enabled peak data rates up to 9.6 Gbps when using 8x8 MIMO and 160 MHz bandwidth. This is no longer orders of magnitude higher than cellular as it once was, but still a reasonable 3x larger.
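And the 9.6 Gbps is once more tone-counting arithmetic (1024-QAM at rate 5/6 across the 160 MHz tone plan, 0.8 us guard interval):

```python
data_subcarriers = 1960         # 2 x 980 data tones at 78.125 kHz spacing
bits_per_tone = 10 * 5 / 6      # 1024-QAM with rate-5/6 LDPC
symbol_time = 13.6e-6           # 12.8 us + 0.8 us guard interval
print(8 * data_subcarriers * bits_per_tone / symbol_time / 1e9)   # ~9.6 Gbps
```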
But by the time IEEE standardized 802.11ax, 3GPP had already begun work on the next step in the evolution — the 5G New Radio (5GNR). The dance is reaching a crescendo!
The Future: 5GNR and 802.11be
Cellular continues to lead, but must also copy some of the moves from WLAN to stay in sync. After all, it is still a Tango!
The new radio (NR) adopts LDPC codes as the error correction scheme for data, taking a leaf out of the WLAN dance book. But it also leads by adopting the state-of-the-art Polar codes as the error correction for control channels, finally moving away from convolutional codes completely.
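Polar encoding is a pleasingly small recursion: freeze the unreliable input positions to 0, put data on the reliable ones, and apply Arikan's transform. The (8,4) toy code below uses an illustrative reliable set; 5GNR specifies the actual reliability sequence.

```python
def polar_transform(u):
    """x = u * F^(tensor n) over GF(2), computed recursively as
    T(u) = [T(u_left XOR u_right), T(u_right)]."""
    if len(u) == 1:
        return u
    h = len(u) // 2
    return polar_transform([a ^ b for a, b in zip(u[:h], u[h:])]) + polar_transform(u[h:])

reliable = [3, 5, 6, 7]          # illustrative choice of 'good' positions for N=8
u = [0] * 8                      # every other position is frozen to 0
for pos, bit in zip(reliable, [1, 0, 1, 1]):
    u[pos] = bit
print(polar_transform(u))        # the 8-bit codeword
```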
But that is not all: 5GNR also introduces a scalable numerology by allowing several subcarrier spacing options, the 15 kHz used in LTE and multiples of it: 30, 60 and 120 kHz. This scalability enables NR to be used on any frequency band from sub-1 GHz to 100 GHz. The lower frequencies (sub-6 GHz) will continue to use 15 kHz subcarrier spacing to provide for long range, but the higher frequency bands (26 GHz and 39 GHz) can use the larger subcarrier spacings to increase the bandwidth up to 400 MHz per carrier, while at the same time shortening the TTI (Transmission Time Interval) thanks to the shorter OFDM symbol duration. NR also ups the game by introducing flexible scheduling of time-frequency resources, allowing a transmission to start at any symbol instead of at subframe boundaries as in LTE.
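The scaling that makes this work is just the inverse relationship between subcarrier spacing and symbol duration: double the spacing and the OFDM symbol (and hence the slot) halves. A few lines to see it (cyclic prefix ignored):

```python
for mu, scs_khz in enumerate([15, 30, 60, 120]):
    symbol_us = 1e3 / scs_khz    # useful OFDM symbol duration in microseconds
    slots_per_ms = 2 ** mu       # 14-symbol slots per 1 ms subframe
    print(f"{scs_khz:4d} kHz: {symbol_us:5.1f} us symbol, {slots_per_ms} slot(s)/ms")
```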
But the biggest move of all is massive MIMO! WLAN always had the edge in beamforming, since the channel feedback in WLAN sends the full precoding matrix computed by the receiver, and this precoding matrix can be used by the transmitter to realize the maximum beamforming gain. The LTE beamforming gain, however, was limited by the finite codebook of precoding matrices from which the receiver must select the best one. With massive MIMO, NR can use hybrid beamforming techniques to form a pencil beam in the direction of the UE. Beam scanning followed by UE feedback is used to find that direction. These are still early days for NR, and the technology is still evolving. But there is enormous promise here!
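The gist of beam scanning in a few lines: sweep a codebook of beams and keep the one the UE reports as strongest. The 64-element array, the dense codebook and the single-path channel are all toy assumptions here; a real NR system sweeps SSB beams on a hybrid analog/digital array.

```python
import numpy as np

def steering(n, theta):
    """Array response of an n-element, half-wavelength-spaced linear array."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

n = 64
codebook = [steering(n, th) for th in np.linspace(-np.pi / 2, np.pi / 2, 256)]
h = steering(n, np.deg2rad(17))               # toy channel: one path at 17 degrees
gains = [abs(np.vdot(w, h)) ** 2 for w in codebook]
best = int(np.argmax(gains))
print(best, gains[best])                      # gain close to 1.0 = well-aligned pencil beam
```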
Similarly, WLAN is continuing to work on the future, christened 802.11be (Extremely High Throughput), and some of the initial directions seem to take inspiration from LTE: CoMP-like multi-link operation and soft combining with HARQ (Hybrid Automatic Repeat Request). We will have to wait and see what the future holds, but the dance goes on!
So who is the winner? It is the communications engineers! Delighted to see academic research come into practice, and happy to build the systems required for it. During my two decades of association with wireless communications, there were a few algorithms/techniques that I got excited about but that never made it to practice: algebraic geometry codes with list decoding, iterative modulation and decoding, and interference cancellation to achieve single channel full duplex operation, to name a few. But technology adoption is speeding up, and academic research is no longer that far ahead of the curve. Polar codes, invented in 2008, have been adopted in 5GNR: that is research coming to practice within a decade. With this trend speeding up, I can get my hands on recent advances in communications theory without being a researcher in academia, but as an engineer building products and seeing the theory in action. What can be more awesome than that!