WiFi coverage planning

Rob H · Altiwi Blog · Sep 7, 2020

The fact that we build our own WiFi does not mean that we are not out in the field solving issues other than our own. So the following article is not to be understood as "looking for excuses for why our own system does not perform well" :-) The technology in question comes from a Cisco and Motorola/Symbol combo. And to be clear, I do not think anything is wrong with them; on the contrary, they could be considered a benchmark of what can be achieved in the wireless field. The technologies and vendors involved are mentioned only to frame the underlying issue, which is:

Unrealistic expectations when planning wireless coverage

The expectation might be rephrased as "To achieve good WiFi performance and coverage, I need a big, fat, strong access point. Ideally even stronger than is allowed in my country; I don't care about the 100 mW regulatory limit, nobody will ever find out that I am blasting the ether with 400 mW, and I will have a better signal."

False. And again false.

There is only one true part in this: The signal strength will increase.

Will the performance improve? Nope.

So what affects WiFi performance?

This question should probably be divided into two separate (although dependent) parts.

First — What affects data rates that can be achieved over WiFi?

Second — What affects the area coverage (with acceptable data rates)?

What affects data rates that can be achieved over WiFi?

Let's examine the first question. Data rate is the amount of information transferred per unit of time. We will not discuss "what is the amount of information"; let's just agree that we usually measure data rates in bits per second (in fact rather in Megabits per second, Mbps), although users often perceive it more as Megabytes per second, or as whether a 4K video from YouTube stutters or not. Often the magic words are 54 Mbps, 108 Mbps, 450 Mbps, 1300 Mbps. Look familiar? Those are the theoretical maximum data rates of different WiFi standards, printed in large type on the product boxes. And those are the numbers we, poor IT guys, fight with. Any time the data rates written on the box are not met, it is our fault.

Regardless of what the actual maximum throughput (data rate) is in any particular case, it represents the technical limit for transferring data under ideal circumstances. Those circumstances include the signal strength, the SNR, interference from other nearby transmitters (other WiFi devices, microwave ovens, any other radio transmitting in our band), and interference from our very own reflected transmission, to name a few. And those parameters are further affected by other environmental factors such as humidity, the number of people in the area, and so on.

As the SNR (signal-to-noise ratio) itself is mostly influenced by other interference (and of course by the signal strength), we could loop in this circle forever, so let's consider an ideal scenario first: one where there are no other interfering factors and the signal strength is sufficient to achieve the maximum rate.

Will I, under such circumstances, transfer the data as fast as the maximum theoretical throughput? Again, no, even though the loss might not be that significant. The following factors are at play:

  1. Any wireless transmission uses the air and a part of the electromagnetic spectrum as a SHARED medium. In such a case (and we will not discuss the mechanism by which it is achieved) only ONE device can transmit at a time without interference.
  2. The overhead of the wireless protocol itself, along with the other protocols on top of it (TCP/IP, the applications used), will decrease the actual transfer rates.
  3. What will affect the transfer rates the most is the number of devices connected to the network; each device needs its slice of airtime every now and then, even when it does not actively transfer any useful data, just to stay alive.

In the end, even in ideal conditions, the transfer rate is determined mostly by the "airtime" available for transmissions directly related to our intended transfer. In other words, by the time we have been allotted to send our useful data, which will always be some fraction of the timespan we measure.
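
To get a feel for how far the effective rate sits below the number on the box, here is a minimal back-of-the-envelope sketch in Python. The PHY rate, the protocol efficiency figure and the client count are all assumptions picked for illustration, not measurements of any particular network:

```python
# Rough, illustrative estimate of effective per-client throughput.
# All numbers are assumptions for the sake of the example.

phy_rate_mbps = 300.0        # the "number on the box" (assumed)
protocol_efficiency = 0.6    # MAC/TCP/IP overhead leaves ~60% of the PHY rate (rough assumption)
active_clients = 10          # devices competing for the same channel (assumed)
my_airtime_share = 1.0 / active_clients  # ideal fair share of airtime

effective_mbps = phy_rate_mbps * protocol_efficiency * my_airtime_share
print(f"Effective throughput per client: ~{effective_mbps:.0f} Mbps")
# -> roughly 18 Mbps, a far cry from the 300 Mbps printed on the box
```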

Why is this so important?

Up to now I have intentionally been quite general, but let's dive a bit under the WiFi hood. The conditions will never be ideal. The signal will vary, the SNR will vary, the traffic pattern will vary. So WiFi has to be able to cope with changing conditions and, on top of that, with older standards and rates for older devices. We will discuss signals later, but for now take it for granted that (quite intuitively) to achieve a certain data rate, some minimal conditions (signal strength, SNR) must be met to make the transfer possible at all. So one of the first tasks the engineers faced was to handle, for example, the scenario where my mobile device gets further away from the other device (say an access point). When that happens, the signal strength and the SNR will decrease, up to the point where a transfer at the given rate becomes impossible. The obvious solution is to fall back to a lower speed (often by using an older standard that allows that speed); theoretically one can drop from, say, the 802.11n rates in the hundreds of Mbps all the way down to 1 Mbps at 802.11b, and then further communication is not possible: you are out of range. To make this possible, each SSID (the WiFi name you see in the wireless network list on your computer) must transmit a so-called beacon every 100 ms (milliseconds), sent at the lowest data rate the network still supports. Although the beacon is a short frame (say 140 bytes on average), its airtime (see above) at the 1 and 2 Mbps rates is huge, so if you have a larger network with 4-5 SSIDs, the beacon overhead can easily eat up as much as 30-40% of your wireless capacity(!).
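
Here is a rough calculation of where that 30-40% figure can come from. The beacon size and interval are taken from the paragraph above; the number of SSIDs and especially the number of co-channel access points are my own assumptions for the sake of the example:

```python
# Back-of-the-envelope beacon airtime estimate (illustrative, assumed numbers).

beacon_bytes = 140          # average beacon size mentioned above
rate_bps = 1_000_000        # beacons sent at the lowest basic rate, here 1 Mbps
preamble_us = 192           # long 802.11b preamble + PLCP header, in microseconds
beacons_per_sec = 10        # one beacon every 100 ms

ssids = 5                   # SSIDs broadcast by each AP (assumed)
cochannel_aps = 5           # APs audible on the same channel (assumed)

airtime_per_beacon_s = beacon_bytes * 8 / rate_bps + preamble_us / 1e6
overhead = airtime_per_beacon_s * beacons_per_sec * ssids * cochannel_aps
print(f"One beacon occupies the air for {airtime_per_beacon_s * 1000:.2f} ms")
print(f"Beacon overhead: {overhead:.1%} of total airtime")
# -> ~1.31 ms per beacon and roughly 33% of the airtime gone before any data flows
```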

To solve that problem, here is the first takeaway from this article: disable the slower data rates, or even the whole 802.11b standard, and use only the newer, faster rates. It is rare to see a device that is not capable of at least 802.11g, and today's modern devices can handle higher rates under similar signal conditions. And if not, it is always easier to add an access point.

It might look like the signal strength and SNR can easily be improved by increasing the output power of your access points, but there are some caveats. Probably the most important one: if you have more than one access point, increasing the output power increases not only the overall signal strength (and not even as significantly as it might seem, see further below) but also dramatically decreases the SNR and thus the maximum achievable data rates.

Spoiler alert: To achieve better data rates, we would need to DECREASE output power!

What affects the area coverage (with acceptable data rates)?

So here is the second part of our puzzle: the area coverage. The more power you put into your transmitter, the further the signal will reach. Or more precisely, the more power your device radiates, the stronger the signal level you will get at any given distance. Quite intuitively, the signal level decreases as the distance from the transmitter increases. And from some point (usually around -70 dBm, with an SNR below 25 dB) no reasonable communication is possible. At least when talking about WiFi; with an amateur radio transceiver this signal level would probably tear the headphones off your head :-)
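
To get an idea of the numbers involved, here is a small sketch using the free-space path loss formula. This is an idealized best case (indoor walls, furniture and people attenuate far more), and the 20 dBm transmit power and 2.4 GHz frequency are assumed for illustration:

```python
import math

# Free-space estimate of the received signal level (idealized best case;
# real indoor attenuation is considerably worse).

def rssi_dbm(tx_power_dbm: float, distance_m: float, freq_mhz: float = 2442.0) -> float:
    """Received level using the free-space path loss formula (d in metres, f in MHz)."""
    fspl_db = 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55
    return tx_power_dbm - fspl_db

for d in (3, 10, 30, 100):
    print(f"{d:>3} m: {rssi_dbm(20, d):6.1f} dBm")
# -> roughly -30 dBm at 3 m falling to about -60 dBm at 100 m in free space;
#    walls and obstacles push real readings well below these figures
```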

WiFi Signal application on my Mac. There are plenty of utilities for any OS that will show you the real numbers. Sometimes it is necessary to switch to dBm in the preferences to avoid percentage readings, which are quite useless.

Notes: For the sake of simplicity, do not bother too much with the dBm units. Consider them just numbers that you can directly measure on your WiFi card if you dig deep enough. Generally, the bigger the number tied to the signal level, the better; and since the signal levels you can get at your receiver are quite low, well into negative territory, -61 dBm is a much better signal than -70 dBm (dBm is a logarithmic scale). It is good to know that it is almost impossible to get a better signal than, say, -30 dBm, even if you are right next to the transmitter and the transmitter is radiating full power, which is usually 20 dBm (100 mW). The air between the antennas attenuates the signal enormously. So usually we will be quite happy somewhere in the -55 to -65 dBm range.
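
If you do want to translate between dBm and milliwatts, it is a one-liner; the figures below simply mirror the values mentioned in this article:

```python
import math

# dBm is just milliwatts on a logarithmic scale (+10 dB means 10x the power).

def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

def mw_to_dbm(mw: float) -> float:
    return 10 * math.log10(mw)

print(dbm_to_mw(20))    # 100.0 -> 20 dBm is the usual 100 mW regulatory limit
print(dbm_to_mw(17))    # ~50   -> 17 dBm is about 50 mW, a typical phone
print(mw_to_dbm(40))    # ~16   -> 40 mW is about 16 dBm
print(dbm_to_mw(-60))   # 1e-06 -> a received -60 dBm is a millionth of a milliwatt (one nanowatt)
```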

Another important note regards the other "units", like "signal level is 90%" or the icon showing four bars out of five. Those are completely useless and are intended for the general public, to give them some information they can grasp. 90% of what? So unless you somehow measure the signal level in dBm, complaints expressed in percentages are only a vague indication.

What is important to know is that the signal level from a constant-power source falls off at least with the square of the distance (and indoors usually even faster). In plain English: if you double the output power of your transmitter, you get at best about 40% more reach in free space, and indoors often only a quarter more or even less (see the sketch below). So since the gains are modest compared with the downsides (worsening the SNR, for example), increasing the output power of your access point usually does not prove to be a good strategy for improving your wireless network performance.
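
A quick sketch of how little extra range a doubling of power actually buys, for a few assumed path-loss exponents (2 is free space; 3-4 are common rough figures for indoor environments):

```python
# How much extra range does doubling the transmit power buy?
# Range scales as power**(1/n), where n is the path-loss exponent
# (2 in free space, typically 3-4 indoors; assumed values).

for n in (2, 3, 4):
    gain = 2 ** (1 / n)
    print(f"path-loss exponent {n}: doubling power gives {gain:.2f}x the range "
          f"({gain - 1:.0%} more reach)")
# -> ~41% more in free space, ~26% with n=3, ~19% with n=4
```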

And there is another good reason to avoid that; probably the most important one. Usually, the devices connected to the wireless network are battery powered by design (think of mobile phones, tablets, laptops). The energy in their batteries is their most precious asset, so the designers usually do not let the devices radiate the full power allowed by regulation. Don't forget: more power = more battery drain. In other words, the fact that you are allowed to transmit at 20 dBm (100 mW) does not mean that you will. Of course, the WiFi access point can transmit at 20 dBm; after all, it is a wall-powered device. But the mobile devices (the REASON you build the wireless network) can't.

The following slide is from "RF Design for the Mobile Devices Explosion" by Alexey Zaytsev of Cisco. It shows the typical output power of Apple mobile phones.

For example, the maximum output power of most mobile phones is set (depending on model and band) somewhere in the range of 12–17 dBm (16–50 mW). And this is something you cannot affect or even configure (not to mention that even if it were possible, users would rarely, if ever, do it). You, as the WiFi designer, can usually affect your network, not the client devices.

And now comes the most exciting part: even if we take the best-case scenario and assume that all the devices can transmit at 17 dBm (50 mW), the access point has no way of receiving their signal once they move beyond the range their own transmit power can cover, even though they may still hear the access point just fine. So we can easily get into a situation where the signal strength and SNR measured on our client device look perfectly sufficient, yet the actual performance is poor and we cannot see any apparent reason! Remember: the communication has to be two-way; WiFi is not FM radio, it SENDS and RECEIVES data.

Second takeaway: if you want to achieve good coverage, you need to DECREASE the output power of your access points at least down to the maximum output power of your devices. If you do not know it, I would probably stay safe at 16 dBm, which is, by the way, only 40 mW(!).
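
A tiny illustration of that asymmetry with assumed numbers (the 14 dBm client power and the 75 dB path loss are made up for the example; the point is only that the gap in transmit power reappears one-to-one as a gap in received signal):

```python
# Uplink/downlink power asymmetry (assumed numbers).
# The path loss between AP and client is the same in both directions,
# so any difference in transmit power shows up directly as an RSSI gap.

ap_tx_dbm = 20        # AP at the full regulatory 100 mW
client_tx_dbm = 14    # a typical phone (assumed)
path_loss_db = 75     # whatever the walls and distance happen to cost (assumed)

downlink_rssi = ap_tx_dbm - path_loss_db      # what the phone sees from the AP
uplink_rssi = client_tx_dbm - path_loss_db    # what the AP sees from the phone

print(f"Phone sees the AP at {downlink_rssi} dBm")   # -55 dBm, looks great
print(f"AP sees the phone at {uplink_rssi} dBm")     # -61 dBm, 6 dB worse
# The phone shows "full bars" while its own frames arrive 6 dB weaker at the AP;
# turning the AP down to the client's level keeps the two directions balanced.
```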

This observation has the following consequence: if your premises are large enough, you will need more access points. And the reason is not that the WiFi manufacturers are greedy and want to sell more access points. The reason is pure physics. And as always, there is no free lunch, so if someone tries to sell you a Strong WiFi Router(TM) that will get the signal all around your house (I'm talking about you, O2), be suspicious!

Just another note: everything above holds for omnidirectional antennas (dipoles). When it comes to other types of antennas, ones with some gain, the situation gets a bit more complicated, at least for this already long enough article. So I will leave it for perhaps another one, with just one warning: the gain is not free, it always comes at the expense of the radiation pattern's shape, so what you gain in one direction, you lose in another. That is not necessarily a bad thing, but you should be aware of it.

Returning to the output powers of client devices: any decent WiFi system (and Altiwi is no exception) can show you the clients it receives, along with their received signal level. If you measure the signal strength on a device at a fixed spot, the signal level of that device as seen by the access point should be roughly comparable. For example, it is fine to measure the AP's signal strength on the mobile device as, let's say, -59 dBm and see that very device in the access point's client list at maybe -61 dBm. But it is not fine if you see that device on the same AP at -70 dBm. In such a case there is most probably a disproportion between the AP's output power and the maximum power the mobile device is capable of. Warning: the output powers of most mobile phones and other devices are usually not something their manufacturers bother to disclose; "three cameras" or "the latest fast WiFi capable of 867 Mbps!" makes for better marketing lingo. If you would like to know what your phone is really capable of in terms of maximum output power, and you are willing to dig a bit, look up the FCC test reports by the FCC ID of the device :-)
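
If you want to automate that sanity check, a sketch could look like the following. The helper and its 5 dB tolerance are hypothetical, not part of any vendor's API; the idea is simply to flag links where the AP hears the client much worse than the client hears the AP:

```python
# Hypothetical sanity check: compare the RSSI the client reports for the AP
# with the RSSI the AP reports for that client.

def check_symmetry(client_side_dbm: float, ap_side_dbm: float,
                   tolerance_db: float = 5.0) -> str:
    gap = client_side_dbm - ap_side_dbm
    if gap > tolerance_db:
        return (f"AP hears the client {gap:.0f} dB worse than the client hears the AP: "
                "the AP is probably transmitting louder than the client can answer.")
    return "Link looks roughly symmetric."

print(check_symmetry(-59, -61))   # fine, only ~2 dB difference
print(check_symmetry(-59, -70))   # flags the 11 dB asymmetry from the example above
```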

The Altiwi cloud console shows you the received signal strength of connected clients. It can help you determine the quality of your wireless coverage.

One last note is about density. In dense areas (for example venues or sports fields), where one can expect a lot of people in one place, it is often useful to decrease the output power even further and divide the area into sectors with more access points than would be needed from the coverage and signal strength perspective alone. Here are the reasons:

  1. Airtime. If you have a big crowd, each person with a mobile device in their pocket and often a laptop running, they are always competing for airtime (as noted in the first part of the article). By placing more access points (on different channels, of course) you "dilute" the airtime requirements, as the devices spread across different frequencies (see the sketch after this list).
  2. SNR. The signal-to-noise ratio is directly related to the number of devices transmitting and their power. If you just add more access points but do not decrease their output power, you inevitably raise the so-called noise floor, even if they are not on the same frequency (channel). Think of it as the boundaries between channels always being a bit blurry, so each transmission on a given channel also affects the others, mostly the neighboring channels.
  3. The computational power of your access points. When it comes down to it, an access point is a computer with wireless network card(s). The number of clients it can handle is not only a function of the wireless throughput we discussed, but also of the features you turn on. Do you want any kind of traffic inspection? Expect higher CPU load. Do you want to run advanced authentication mechanisms on the AP? Expect higher CPU load. Most manufacturers advertise a recommended number of clients. Based on my experience, it is usually a best-case figure, with the advanced features turned off and the users not particularly active.
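
A very rough sketch of the airtime argument: the client count and the usable per-channel throughput below are assumptions, but they show why splitting a dense area across several access points on separate channels pays off:

```python
# Rough per-client throughput in a dense area, before and after adding APs
# on separate channels (all numbers are assumptions for illustration).

clients = 120                  # people in the venue with an active device (assumed)
usable_mbps_per_channel = 100  # realistic usable throughput of one AP/channel (assumed)

for aps_on_separate_channels in (1, 3, 6):
    per_client = usable_mbps_per_channel * aps_on_separate_channels / clients
    print(f"{aps_on_separate_channels} AP(s): ~{per_client:.1f} Mbps per client")
# -> ~0.8 Mbps with one AP, ~2.5 with three, ~5 with six:
#    each extra AP on its own channel adds fresh airtime to share
```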

Did you ever wonder why a manufacturer advertises 1300 Mbps on a device with only a 1000 Mbps Ethernet port?

Conclusion

I hope this article sheds some light on some of the aspects that come into play when planning wireless networks. I admit that I have made a number of horrible simplifications in order to keep the topic manageable. Originally the title was "WiFi Planning and Surveying", so if there is demand for the surveying topic, I will gladly cover it. Feel free to ask in the comments. Also comment if I have made any mistake; everything written here is to the best of my knowledge, but I do not consider myself an expert in this field, so correct me if I am wrong.

I would like the reader to take away the following:

  1. If we just crank up the output power of our access points and hope that the network will improve, we will be wrong.
  2. Turn off the unnecessary slower rates, or even the whole 802.11b standard if possible, as the beacon transmissions take too much time and eat up your airtime.
  3. The key to quality wireless coverage is always more access points with less power.
  4. Most client devices on the market cannot match the maximum output power allowed by regulation, so it is futile to use it; at least in network access scenarios. Point-to-point wireless links are a different story.

Hope this helped a bit. If you want to check out our cloud-managed WiFi, you can register at https://beta.altiwi.com!
