Edyza Wireless For High-Density Indoor Environment Monitoring

Rana Basheer
Published in EdyzaIoT
Oct 23, 2020 · 11 min read

Edyza wireless was designed for monitoring the environment in an indoor grow room. Typical parameters that we track in a grow room include air temperature, humidity, barometric pressure, CO₂ concentration, light intensity, soil moisture, soil salinity, and soil temperature. These are slow-varying values in that they do not change by more than 1% in a minute. To continuously track their progression over time, we collect these parameters once every 30 seconds. Most of our deployments range anywhere from 40 to 200 wireless sensors in a grow. Common applications are identifying airflow pathways, mapping spatial variation of the indoor environment, and machine prognostics to detect time-to-failure for HVAC systems, dehumidifiers, etc. In short, our requirement is for a wireless protocol that can operate for multiple years from a battery (ultra-low energy consumption), can communicate reliably when many wireless devices operate in close quarters (high density), and, finally, can take advantage of the fact that we are collecting time-series data that changes slowly over time and space (temporal and spatial correlation of low-throughput data).

This article will explain the design choices that went into building this wireless protocol that interconnects our sensors.

Low-power, high-density, and low-throughput requirements gave us the flexibility to revert to an earlier radio hardware architecture that is extremely energy efficient but has fallen out of favor in this new era of high-throughput WiFi and Bluetooth networks. However, this radio architecture is still widely used in one area where it operates from energy leached out of an incoming signal, is deployed in very high density, and has minimal data to send: RFID (Radio Frequency Identification) tags, shown below.

RFID Tags in all shapes and forms

Like an RFID tag, our radio signals are FSK (Frequency Shift Keying) modulated. Under FSK modulation, a wireless device transmits binary data of ones and zeros as two distinct frequencies, as shown below.

GFSK modulated signal
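To make that concrete, here is a minimal Python sketch of binary FSK: each bit selects one of two tones. The sample rate, bit rate, and tone frequencies below are illustrative values I picked for the example, not Edyza's actual radio parameters (and real GFSK additionally smooths the transitions with a Gaussian filter).

```python
import numpy as np

# Illustrative parameters -- not Edyza's actual radio settings.
FS = 1_000_000        # sample rate in Hz
BIT_RATE = 50_000     # bits per second
F0 = 100_000          # tone (Hz) used to represent a '0'
F1 = 200_000          # tone (Hz) used to represent a '1'

def fsk_modulate(bits):
    """Map each bit to a burst of sinusoid at F0 or F1 (binary FSK)."""
    samples_per_bit = FS // BIT_RATE
    t = np.arange(samples_per_bit) / FS
    bursts = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    return np.concatenate(bursts)

signal = fsk_modulate([1, 0, 1, 1, 0])
print(signal.shape)   # (100,) -> 20 samples per bit, 5 bits
```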

However, instead of being a passive device like an RFID tag (RFID tags have no internal battery and leach energy from an incoming radio beam provided by a Tag Reader), Edyza wireless operates from an internal battery. Consequently, our network has the advantage of consuming extremely low energy like an RFID tag but with the added flexibility of being always available. Before I explain more about our radio architecture, let me digress into a topic that I always get called on.

Why did I choose the crowded 2.4GHz?

Often I get this question in the context of Sub-GHz radios because many sales reps have taught our clients that 2.4GHz is crowded, which it is, and sub-GHz is the solution for all their wireless connectivity woes, which it is not.

Three factors that helped me settle on 2.4GHz are:

  1. License-free universal availability
  2. Commercial off-the-shelf (COTS) radio hardware
  3. Smallest antenna footprint required for efficient radio communication

ISM Bands and their availability from Wiki

Having the ability to operate without a license is critical to keeping the cost of our high-density wireless solution low for our customers. License-free ISM (Industrial, Scientific, and Medical) bands have no costly country-by-country regulatory hoops to jump through, no annual operating tariffs to pay, and so on. The above table lists the ISM bands currently available; of these, 13.56 MHz, 27.12 MHz, 40.68 MHz, 2.4 GHz, 5.8 GHz, and 24.125 GHz are universally license-free to operate. Among these frequencies, in a “chicken or the egg” type of situation, 2.4 GHz ended up being very popular and consequently overcrowded, with several major semiconductor manufacturers offering extremely energy-efficient, price-competitive radio hardware. Finally, having a smaller antenna footprint allows an enclosure to fully encompass the antenna and protect it from external metallic objects that would adversely affect radio performance.

Antennas are the bane of any industrial design engineer. There was one that almost killed the iPhone.

Antenna Size, Operating Frequency, and Enclosure Design

The antenna size has an inverse relationship with the operating frequency. A sub-GHz radio (a radio operating at frequency < 1GHz) needs a larger antenna than a 2.4GHz radio to transmit data efficiently. Additionally, every antenna requires a protected guard space around it, free of metallic objects, to ensure it operates as designed. An industrial design engineer is presented with a choice to either make a larger enclosure containing the antenna entirely within it or place it externally, as shown in the Monnit solution below.

Monnit MNS2–9-W1-VM-500 operating at 900MHz
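As a rough illustration of that inverse relationship, the sketch below computes the ideal quarter-wave antenna length (λ/4 = c/4f) at a few frequencies. These are textbook free-space numbers, not the dimensions of the Monnit or Edyza antennas.

```python
C = 299_792_458  # speed of light in m/s

def quarter_wave_length_cm(freq_hz):
    """Ideal quarter-wavelength antenna length in centimeters."""
    return 100 * C / (4 * freq_hz)

for f in (900e6, 2.4e9, 5.8e9):
    print(f"{f/1e9:.1f} GHz -> {quarter_wave_length_cm(f):.1f} cm")
# 0.9 GHz -> 8.3 cm, 2.4 GHz -> 3.1 cm, 5.8 GHz -> 1.3 cm
```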

An internally contained antenna makes the enclosure's overall size larger, making it harder to mount. In contrast, an external antenna results in a smaller enclosure that is consequently easier to mount. However, external antennas are a primary cause of bad communication links, especially in an indoor grow like the one below. An external antenna close to benches, trellis poles, and other heavy pieces of equipment can result in unpredictable RF failures. Additionally, external antennas can accidentally snag on people or equipment, or get doused with water during regular operation.

Indoor Grow with a lot of metallic parts that could be potential antenna “kryptonite.”

In comparison, below is an Edyza wireless node with a fully enclosed PCB substrate antenna, ensuring that our antenna always has its guard space, never comes in direct contact with water, and always operates at 100% of its designed specification.

EZ-SEV101 with PCB substrate antenna at 2.4 GHz in an 8 cm × 5 cm × 3 cm enclosure

Finally, on the choice of license-free frequency bands that I presented earlier, the 5 GHz and 24 GHz bands would result in even smaller and more efficient antennas than 2.4 GHz. However, at this time, there is minimal interest from hardware manufacturers in offering off-the-shelf radio hardware in these bands. Any custom solution would be cost-prohibitive for our clients, making it impossible for us to offer affordable lab-grade wireless sensors at high density for indoor grows.

There are some rumblings of a bright future ahead for these frequencies. Apple’s recently announced foray into UWB (Ultra Wide Band) radios operating at these higher frequencies in their latest U1 Chip will hopefully be a clarion call to the semiconductor manufacturers to offer affordable hardware for us plebs. Now, back to our GFSK radio design in Edyza Wireless.

High Density and Radio Communication

On a basic level, multiple radio devices communicating next to each other is similar to an overcrowded bar full of people shouting at the top of their lungs in an attempt to converse with their buddies. Like humans, radio devices also have trouble understanding when multiple conversations are happening at the same time. In radio parlance, this is called radio interference.

WiFi and GPS devices handle radio interference with a technique called Code Division Multiple Access (CDMA). In CDMA, each transmitter/receiver pair uses a specific code that is mathematically guaranteed not to be mistaken for another transmitter/receiver pair's code. In the crowded bar analogy I presented earlier, this is similar to you talking to your buddy in a language separate from the crowd's. Try this out, and you will be surprised how easily you can carry on a conversation in the noisiest bar when you switch to your native tongue (for me, Malayalam) while the crowd is chattering away in English. However, scaling this becomes harder when there are many cross-conversations within earshot of you. This technique is limited by how many distinct languages you and your buddies are proficient in.
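The "separate language" idea can be sketched with orthogonal spreading codes. The toy Python example below uses two length-4 Walsh codes I chose for illustration (real CDMA systems use much longer codes); it shows how a receiver that correlates against its own code still recovers its bit when two transmissions overlap on the air.

```python
import numpy as np

# Two orthogonal Walsh codes of length 4 (toy example, not a real chipset's codes).
CODE_A = np.array([ 1,  1,  1,  1])
CODE_B = np.array([ 1, -1,  1, -1])

def spread(bit, code):
    """Spread a single bit (+1 or -1) across the chips of a code."""
    return bit * code

# Both transmitters send at the same time; the channel simply adds the signals.
on_air = spread(+1, CODE_A) + spread(-1, CODE_B)

# Each receiver correlates the combined signal with its own code.
rx_a = np.sign(on_air @ CODE_A)   # -> +1, transmitter A's bit
rx_b = np.sign(on_air @ CODE_B)   # -> -1, transmitter B's bit
print(rx_a, rx_b)
```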

Bluetooth cannot speak multiple codes, i.e., it is mono-lingual. Bluetooth solves this cross-talk problem by continuously adjusting the pitch (frequency) of communication. The communication pitch changes rapidly in an elaborate, pre-negotiated, synchronized dance sequence between the parties involved in the conversation, called Frequency Hopping Spread Spectrum (FHSS) communication. Unfortunately, there is no simple “human in a bar” analogy to explain FHSS. Perhaps the closest biologically inspired analogy would be the intricate song sequences birds have developed to mate or defend territory in the wild. Look at the impressive set of frequencies a canary tweets through every second, as shown below.

Frequency changes from Canary Birds

Under FHSS, the number of parallel conversations is limited by the number of unique pitch sequences the Bluetooth hardware can pick from the roughly 80 channels at its disposal.
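The essence of FHSS can be sketched as two radios deriving the same pseudorandom channel sequence from a shared, pre-negotiated value so that they hop in lockstep. The 79-channel count matches Classic Bluetooth, but the seed-based hop generator below is a simplification for illustration, not Bluetooth's actual hop-selection algorithm.

```python
import random

NUM_CHANNELS = 79          # Classic Bluetooth hops over 79 channels of 1 MHz each
SHARED_SEED = 0xC0FFEE     # illustrative pre-negotiated value, not a real Bluetooth key

def hop_sequence(seed, hops):
    """Pseudorandom channel sequence both sides can reproduce from the shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(hops)]

# Transmitter and receiver compute the same sequence independently.
tx_hops = hop_sequence(SHARED_SEED, 10)
rx_hops = hop_sequence(SHARED_SEED, 10)
assert tx_hops == rx_hops
print(tx_hops)   # the next 10 channels both radios will visit, in order
```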

The above techniques used by WiFi, GPS, and Bluetooth fall under a broad RF interference avoidance method called spread spectrum modulation. The spread spectrum technique is the primary reason for the proliferation of cellphones, WiFi, and other personal communication devices post-1980s. Before that, the world was full of narrowband radios, where everyone had to be polite and courteous, waiting for their turn to speak on the shared communication channel. The old Citizens Band (CB) radio is an example of this narrowband communication that anachronistically lingers to this date. However, spread spectrum radios came at a cost.

Edyza’s Radio Philosophy — Old and New, here the twain shall meet.

While it has revolutionized interpersonal communication, spread spectrum comes at the cost of increased hardware complexity and a higher energy budget. Consequently, the old narrowband radio architecture remains popular for energy-constrained devices such as RFID tags. Edyza's radio philosophy for low-power communication is to bring in the energy efficiency of RFID radios and then handle the radio interference problem arising from high density through:

  1. Transmit power throttling
  2. Time Division Multiple Access (TDMA) communication
  3. A high precision local clock

Transmit power throttling ensures that when a transmitter is sending data, it talks just loud enough for its neighbor to hear and no louder. TDMA ensures that transmitters are allotted tiny slots of time that do not overlap with any other transmitter within earshot (communication range) of each other. Finally, having an extremely precise clock on every wireless node ensures that there is very little drift in time between any pair of wireless nodes, allowing us to wake up a device from a deep sleep precisely time-synchronized with its transmitter.
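Here is a rough sketch of how a TDMA schedule keeps neighbors out of each other's way: every node in a cluster gets its own non-overlapping slot inside a repeating frame and sleeps the rest of the time. The frame period, slot width, and node names are made-up values for illustration, not Edyza's actual schedule.

```python
# Illustrative TDMA schedule: a repeating frame of fixed-width slots.
FRAME_PERIOD_S = 30.0      # matches the 30-second sampling cadence in the article
SLOT_WIDTH_S   = 0.010     # 10 ms per transmission slot (made-up value)

def slot_assignment(node_ids):
    """Give every node in a cluster its own non-overlapping slot offset."""
    return {node: i * SLOT_WIDTH_S for i, node in enumerate(node_ids)}

def next_wakeup(node, schedule, now):
    """Time (in seconds) at which `node` should wake to use its next slot."""
    offset = schedule[node]
    frames_elapsed = int(now // FRAME_PERIOD_S)
    wake = frames_elapsed * FRAME_PERIOD_S + offset
    return wake if wake > now else wake + FRAME_PERIOD_S

schedule = slot_assignment(["node-17", "node-42", "node-99"])
print(schedule)   # each node gets a unique offset inside the frame
print(f"{next_wakeup('node-42', schedule, now=61.3):.2f}")   # -> 90.01
```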

However, there ain’t no such thing as a free lunch. All these energy improvements came at a cost. Now I will explain how we overcame these limitations.

Limited Range

Power throttling limits the communication range of a single wireless node. Our maximum wireless communication range under ideal line-of-sight conditions is 30 ft. To complicate matters further, plants, benches, and heavy equipment can significantly reduce the wireless range. Consequently, a single wireless node will not cover the length and breadth of an indoor grow. Therefore, to collect data from a large indoor grow, we developed a collaborative multi-hop wireless architecture where data passes through several intermediary nodes before reaching the destination (gateway). This multi-hop architecture is commonly associated with mesh networks. However, mesh networks have some inherent limitations arising from excessive power consumption during path discovery. Our wireless network is instead a tree structure, where the tree itself defines the path a data packet takes from its source to its final destination, the gateway.

If you are interested in learning more about building a fault-tolerant tree network for structural health monitoring, I encourage you to read my 2003 article in the International Society for Optics and Photonics (SPIE).

Multi-hop Sensor Network
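A minimal sketch of routing in a tree network: each node only needs to know its parent, and a packet is simply forwarded parent by parent until it reaches the gateway, with no runtime path discovery. The topology below is invented for illustration.

```python
# Illustrative tree: every node knows only its parent; the gateway has none.
PARENT = {
    "gateway": None,
    "bridge-1": "gateway",
    "node-a": "bridge-1",
    "node-b": "bridge-1",
    "node-c": "node-a",     # deepest node in this toy topology
}

def route_to_gateway(node):
    """Return the hop-by-hop path a packet takes from `node` to the gateway."""
    path = [node]
    while PARENT[path[-1]] is not None:
        path.append(PARENT[path[-1]])
    return path

print(route_to_gateway("node-c"))
# ['node-c', 'node-a', 'bridge-1', 'gateway'] -- no path discovery needed at runtime
```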

Increased Latency

Time-synchronized communication ensures that any device not involved in a given conversation stays in an extreme energy-saving (deep sleep) mode. In contrast, the communicating devices are fully awake and actively exchanging data. For any two wireless devices to correctly exchange data, the transmitter's and receiver's clock signals must be precisely time-aligned. In RF engineering parlance, this is called phase lock. Depending on the data rate, phase lock requirements of better than 1 microsecond are typical.

Phase Difference Between Signals

For TDMA to work effectively, each wireless node needs to know when it is allowed to talk. That means a cluster of wireless nodes in a certain location has to agree on when each one of them may talk. In a larger network (our largest has 152 nodes in 2,000 ft²), there will be multiple clusters with bridge nodes that interlink them. Under these conditions, the negotiation protocol must be sophisticated enough to reconcile the different slices of time allotted by multiple clusters to a single bridge node interlinking them. These negotiations increase latency, making it slower to transfer information through our network. However, the slow-varying nature of environmental data cuts us enough slack not to worry about this.
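To see why this extra latency is tolerable for slow-varying environmental data, here is a back-of-the-envelope sketch. It assumes, purely for illustration, one forwarding opportunity per frame per hop; the hop count and frame period are not measured values from our network.

```python
# Back-of-the-envelope latency: one forwarding opportunity per frame per hop (assumed).
FRAME_PERIOD_S = 30.0   # frame cadence, matching the 30-second sampling interval
HOPS = 4                # assumed depth of a node in the tree (illustrative)

worst_case_latency_s = HOPS * FRAME_PERIOD_S
print(f"worst case ~{worst_case_latency_s:.0f} s end-to-end")   # ~120 s

# A parameter that changes by less than 1% per minute is still representative
# when it arrives a couple of minutes old, so this latency is acceptable.
```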

Sharing a common definition of time, and keeping those time ticks precise, is not a trivial job. The problem is exacerbated for our wireless devices since they are energy- and computationally-constrained. If a device's internal clock drifts significantly, it can inadvertently end up talking in a slot that was allotted to another device.

Clock Drift & Quartz Crystals

In wireless hardware, high precision timing signals are generated using quartz crystals.

Quartz Crystal Generating Timing Signals

The application often dictates the accuracy of the crystal, and extremely precise crystals can get very expensive. For example, crystals used in typical Bluetooth modules are cheaper; they have accuracy in the range of ±20 ppm (parts per million), whereas crystals used in GPS devices are extremely precise and fall in the sub-1 ppm accuracy range. A 20 ppm crystal drifts 20 microseconds every second. So, in a TDMA communication setup with a cheaper crystal, if a node wakes up to receive data from a transmitter after 1 second, its internal clock could have drifted by 20 microseconds. In other words, this node woke up 20 microseconds after the transmitter started sending data. The two devices are no longer phase-locked, and the receiver is now getting out-of-sequence, garbled data. Receivers handle this clock drift by waking up earlier than they are supposed to and then waiting for the transmitted data. However, any extra time spent waiting for data is wasted energy. Edyza wireless devices employ crystals rated at 0.5 ppm or better. This is the same level of precision employed by high-end GPS products, and it allows our devices to remain synchronized even when they are in a deep sleep for multiple seconds. Our longest continuously operating wireless network has been running for a little over 1.5 years, collecting 11 environmental data points every 30 seconds.
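The drift arithmetic above can be written down directly: worst-case drift is the clock error (in ppm) multiplied by the elapsed sleep time, since 1 ppm of one second is exactly 1 microsecond. The sketch below compares a ±20 ppm crystal against a ±0.5 ppm crystal for a few illustrative sleep intervals and shows how much earlier a receiver would have to wake up.

```python
def drift_us(ppm, sleep_s):
    """Worst-case clock drift in microseconds after sleeping `sleep_s` seconds."""
    return ppm * sleep_s   # 1 ppm of one second is 1 microsecond

for ppm in (20.0, 0.5):
    for sleep_s in (1, 30):
        guard = drift_us(ppm, sleep_s)
        print(f"±{ppm} ppm crystal, {sleep_s:>2} s sleep -> "
              f"wake up {guard:.1f} µs early to stay phase-locked")
# ±20 ppm:  20 µs after 1 s,  600 µs after 30 s
# ±0.5 ppm: 0.5 µs after 1 s,  15 µs after 30 s
```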
