Connectivity as a Commodity?
Earlier this year I was tempted. By a lower cost. Not much lower, but low enough to convince me to give it a try. After years of being a loyal customer of my mobile carrier, I defected to their competition. What could go wrong? Cellular coverage has evolved for decades, and the carrier I came from provided decent coverage in even the most remote locations my family and I like to visit in the north-western part of Jutland. Why should that be any different for the other carriers?
I thought I had done my due diligence and compared the coverage maps publicly available on the operators' websites. It was difficult to see any difference. That made sense, because here in the twenty-first century, wireless connectivity has become a commodity as essential as the oxygen we breathe, right? I was convinced. My customer needs would be covered regardless of the carrier.
I could not have been more wrong!
When we went to those relatively remote areas of Jutland during the summer holiday, my phone rarely had a data connection decent enough for any web browsing or social media, and I could forget all about streaming the latest episode of Paw Patrol (requested by the toddler in the back seat). All of which had worked flawlessly with my previous carrier.
So what is my point with all this, besides having an outlet for complaining?
When we talk about connectivity, whether for our handheld devices or for IoT devices, there is a plethora of technologies and network operators, and it is easy to be convinced that it is just a matter of subscribing. Then you are connected and can exchange data. They have you covered, so to speak. But if that were the case, how could I have had the experience I did? In the following sections, I will touch upon some of the aspects that affect the experienced coverage and wrap up with recommendations on what to consider before choosing a connectivity provider for your wireless application.
What defines coverage?
There is no easy answer. In the case of my cellular carrier, it came down to data rate. I was connected to a network at all times, and if I had just needed to make calls, I would not have had any issues. But the experienced coverage did not provide a sufficient data rate for the more data-hungry use cases, like streaming. In the end, the data rate is simply the result of the utilised coding and modulation scheme, the signal strength and other signal parameters.
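To put rough numbers on this, here is a minimal sketch of how those parameters combine. The figures are purely illustrative (a simplification of my own, not any carrier's actual numbers): a stronger signal allows a denser modulation and less coding overhead, and both multiply straight into the data rate.

```python
def data_rate_bps(symbol_rate_hz: float, bits_per_symbol: int,
                  code_rate: float) -> float:
    """Approximate physical-layer data rate of a single stream:
    symbols per second x bits per symbol x useful fraction after coding."""
    return symbol_rate_hz * bits_per_symbol * code_rate

# Strong signal: dense modulation (64-QAM, 6 bits/symbol), light coding.
good_link = data_rate_bps(15_000, 6, 3 / 4)   # 67500.0 bit/s
# Weak signal: robust modulation (QPSK, 2 bits/symbol), heavy coding.
poor_link = data_rate_bps(15_000, 2, 1 / 2)   # 15000.0 bit/s
```

Same network, possibly the same "bars" on the phone, yet the usable data rate differs by more than a factor of four.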
For IoT applications, for example, small sensor devices pushing data periodically, the data rate is less important. Hence the definition of coverage for these technologies is often simply based on signal strength. To keep it relatively simple, I’ll try to make my point based on signal strength alone.
The signal strength received at the gateway depends first on the transmitting device's ability to emit a wireless signal. The RF circuitry and antenna implementation greatly affect the resulting emitted power. If these parts have not been a focus during product development, they may degrade the emitted power. As a result, the experienced coverage will be worse, because the signal strength reaching the gateway will be lower than that of a device with, for example, a better antenna.
Another factor is the environment and the attenuation and distortion it causes. There is a great difference between the experienced coverage of outdoor devices, like street lamps and water level sensors, and that of indoor devices, like fire alarms, flood sensors in the basement or smart meters buried in pits below a cast iron lid. Penetrating such materials and buildings heavily attenuates the signal, and lower signal strength means worse experienced coverage.
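These two effects can be combined in a simple link-budget sketch. All the numbers below are hypothetical, picked only to show the mechanism: a poorly tuned antenna and a cast-iron lid each knock tens of dB off the signal that finally reaches the gateway.

```python
def received_power_dbm(tx_power_dbm: float, tx_antenna_gain_dbi: float,
                       path_loss_db: float, penetration_loss_db: float) -> float:
    """Signal strength at the gateway: transmit power plus antenna gain,
    minus what the distance and the environment eat along the way."""
    return (tx_power_dbm + tx_antenna_gain_dbi
            - path_loss_db - penetration_loss_db)

# Outdoor device (street lamp) with a well-matched antenna.
outdoor = received_power_dbm(14.0, 2.0, 120.0, 0.0)    # -104.0 dBm
# Smart meter in a pit: detuned antenna plus a cast-iron lid in the way.
buried = received_power_dbm(14.0, -6.0, 120.0, 30.0)   # -142.0 dBm
```

At the same distance from the gateway, the buried meter arrives almost 40 dB weaker, so with a typical receiver sensitivity somewhere in between, one device is comfortably covered while the other may not be heard at all.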
Operators of commercial networks, regardless of the technology, often provide a coverage map, like the ones I consulted when deciding on a new mobile carrier. These maps are usually generated using both theoretical propagation models and practical measurements with some sort of reference device. It is, of course, not feasible to measure the coverage for all use cases, all hardware implementations and all environments, so some standard scenario is assumed, resulting in the published coverage map. Network operators have an interest in attracting customers, but they are not lying. They are at most very optimistic! Nationwide coverage may be true, but the hardware, location and traffic pattern of a practical application may not correspond to the assumptions made by the operator.
The number of devices needing wireless connectivity is continually increasing, which adds load to the wireless networks. A base station can only serve a limited number of devices at a time. For devices located at the edge of coverage, most protocols improve the reception probability by reducing the data rate, thereby increasing the air time. When the available network resources are divided among the many connecting devices, it may be difficult for edge devices to be prioritised in a cellular network. In the simpler IoT protocols, where network access is not carefully orchestrated by the gateway as it is in a cellular network, the transmissions from edge devices may not be received at all in loaded networks.
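The air-time effect is easy to quantify. In the sketch below I use data rates of a LoRa-like order of magnitude (my own assumption, not tied to any specific network): the same 50-byte sensor report occupies the shared medium more than twenty times longer when sent at the robust edge-of-coverage rate.

```python
def airtime_s(payload_bytes: int, data_rate_bps: float) -> float:
    """Time the radio channel is occupied by one transmission
    (ignoring preamble and protocol overhead for simplicity)."""
    return payload_bytes * 8 / data_rate_bps

near_gateway = airtime_s(50, 5470)  # fast rate close to the gateway
cell_edge = airtime_s(50, 250)      # robust, slow rate at the coverage edge

# The edge device occupies the channel roughly 22x longer per message,
# so a loaded gateway can serve far fewer such devices.
```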
A network operator can choose to add more gateways, increasing the available network resources to be shared among the users. But gateways come with installation costs and annual operational costs, for example rental fees for a spot in a mast. Adding more gateways therefore risks compromising the operator's business case. If your application only represents a minor fraction of the total devices using the network, and thus of the business case, it may be an uphill battle to convince the operator to improve coverage.
Connectivity is not a commodity, not yet at least. The experienced coverage depends highly on the use case and the hardware utilised by the application. What is sufficient coverage for one application may be spotty and utterly useless to another. In the end, it all comes down to what your application needs from the connectivity. If you have strict requirements for either lifetime or reliability, you may only have a few real options among the plethora of technologies, and you may still have to compromise on one or more of your requirements.
You need to identify the true purpose of the data your application will be exchanging, and the requirements that places on the connectivity.
Sure, there are some sunshine examples where connectivity is a commodity, where it does not matter one bit whether you choose one technology over the other, or one operator over the other, and where you can have the perk of making a choice based on cost alone. In most situations, however, you have to include other KPIs in your decision process to make the choice that is best for your application.
In case you are wondering how it ended with my cellular carrier: my old carrier called me after the summer holiday, saying they were sorry to see me leave and that, given my long-standing history with them, they could make me a special offer if I would consider returning. I faked a long pause for thought and accepted immediately thereafter.
Yes, I now pay a little more, again. But I prefer to know I can count on coverage regardless of the environment I use my device in.