Drones: The Flying IoT

Mentat · Published in Mentat Innovations · Feb 26, 2016

The fusion of the IoT with Artificial Intelligence is driving a new Industrial Revolution. A key ingredient in this transformation is autonomy: as learning machines gain sophistication and experience, they reach a point where they can be trusted to take decisions on their own, without direct or continuous human control. A great example of this can be found in the form of drones.

At Mentat we view drones as an agile flying platform for advanced sensors, hence the “Flying IoT”.

Consider the example of a remote wind farm that requires regular visual inspection of the wind turbines to detect and monitor cracks on the blades. It used to be necessary for an engineer to conduct regular inspection visits to the site in person. A safer alternative would involve camera drones, remotely controlled from the ground by a qualified engineer. In this setup, however, the human remains the bottleneck: the physical presence of a suitably qualified engineer/pilot is still required, which is a costly proposition.

An altogether superior solution would involve an autonomous drone, scheduled to perform a visual inspection of the entire farm at regular intervals, paying special attention to existing cracks, and optimising its flight path to take into account wind conditions, which can be particularly challenging in a wind farm scenario due to local vortices produced by the turning blades.

Multiple other use cases will be revolutionised by the advent of autonomous drones: inspecting forest areas for early detection of wildfires; surveillance in security deployments; collecting traffic statistics and managing first responders in urban environments; deliveries of products in an industrial or retail scenario, or critical supplies in an emergency management scenario. The list is endless and every day seems to bring another great use case for drones.

However, autonomous does not and should not mean entirely unsupervised. Miscalculations or faults in a drone’s on-board logic might lead it to fail its objective, or even to become a liability. The question then becomes: as autonomous drone technology scales, what sort of monitoring technology is able to scale with it and ensure safety and quality control without introducing bottlenecks? Can we learn automatically from the trajectories and provide an intelligence layer for drones?

The aim must be to minimise the human-to-drone ratio in any use case. One can envisage an alerting system that monitors all drones, prioritises them in terms of degree of concern, and asks for human input only in the most urgent cases.
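As a minimal sketch of what such a triage layer could look like (the `AlertTriage` class, its fields, and the severity scores are all hypothetical illustrations rather than our actual interface):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    priority: float                        # negated severity: heapq pops min first
    drone_id: str = field(compare=False)
    reason: str = field(compare=False)

class AlertTriage:
    """Collects alerts from a whole fleet and surfaces only the most
    urgent ones to a human supervisor."""

    def __init__(self, human_capacity: int = 3):
        self.human_capacity = human_capacity  # cases a human can review at once
        self.queue: list[Alert] = []

    def report(self, drone_id: str, severity: float, reason: str) -> None:
        heapq.heappush(self.queue, Alert(-severity, drone_id, reason))

    def escalate(self) -> list[Alert]:
        # Only the top-k most severe cases reach the human; the rest
        # stay queued for automated handling or later review.
        k = min(self.human_capacity, len(self.queue))
        return [heapq.heappop(self.queue) for _ in range(k)]

triage = AlertTriage(human_capacity=1)
triage.report("drone-7", severity=0.9, reason="off-route excursion")
triage.report("drone-3", severity=0.2, reason="minor heading deviation")
for alert in triage.escalate():
    print(alert.drone_id, alert.reason)    # only drone-7 reaches the human
```

The point of such a design is that the human-to-drone ratio is bounded by the human’s review capacity, not by the size of the fleet.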

Analogous alerting systems are already in place in other areas where software agents enjoy some degree of autonomy, such as robotic installations in manufacturing plants, and IT/cybersecurity. However, drones are an idiosyncratic case, as they generate masses of live, fast geo-temporal data in the form of their 3D GPS tracks. Despite the abundance of GIS systems for storing such data, powerful solutions for analysing it are not available. This is a common theme in Big Data: easy to store, much harder to analyse! Furthermore, analysing this data must happen at the drone level (at the edge of the network) rather than over a data downlink (i.e. sending data to an on-premise or cloud-based server).

The current state of the art is a technique known as geo-fencing: a fixed, hard-coded region of space is manually specified in the monitoring platform, and an alert is generated if any drone escapes that region. This technique cannot scale. If one tries to tighten the geo-fenced area, the manual configuration step will in effect fix the drone trajectory, which stands in the way of drone autonomy (a good analogy to keep in mind is the difference between a self-driving car and a tram that runs on predefined rails).
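For concreteness, this is essentially all that geo-fencing amounts to; the sketch below uses a hypothetical hard-coded bounding box (the fence coordinates are made up for illustration):

```python
# A hand-specified bounding box, and an alert whenever a drone's GPS
# fix falls outside it. The fence values below are purely illustrative.
FENCE = {"lat_min": 51.50, "lat_max": 51.52,
         "lon_min": -0.13, "lon_max": -0.10,
         "alt_max": 120.0}  # metres

def geofence_alert(lat: float, lon: float, alt: float) -> bool:
    """Return True if the fix escapes the hard-coded fence."""
    return not (FENCE["lat_min"] <= lat <= FENCE["lat_max"]
                and FENCE["lon_min"] <= lon <= FENCE["lon_max"]
                and alt <= FENCE["alt_max"])

print(geofence_alert(51.51, -0.12, 80.0))  # False: inside the fence
print(geofence_alert(51.53, -0.12, 80.0))  # True: escaped to the north
```

Everything interesting about the drone’s behaviour inside the fence is invisible to such a check.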

The missing ingredient is a system designed to monitor intelligent agents. That system must itself be intelligent! Our solution is able to learn the preferred trajectories of each drone from the GPS tracks they generate, without any need for manual configuration.

When that trajectory profile is violated in geospatial or temporal terms, an alert is generated. Such a system offers incredible flexibility. First, it decouples monitoring from drone configuration, so that if a drone is suddenly reassigned to a different task, the system will initially raise an alert but then quickly adapt to the new route without the need for reconfiguration by a human. Second, it is able to profile drones controlled by entirely separate systems or entities, as long as it is able to catch a glimpse of their GPS tracks. The learning algorithms can detect abnormal behaviour both at the macro level (“this drone is heading into territory it has never accessed before”) and the micro level (“this drone is performing odd manoeuvres that might indicate loss of stability or malfunction”). Some of the common shortcomings of geo-fencing are naturally overcome by our platform: for example, we can detect abnormal direction and/or speed, not just location (i.e., when a drone is within the bounds of its normal trajectory but is moving in the reverse direction from usual). We can also recognise the manoeuvres that surveillance drones typically employ to cover an area (such as the lawnmower or spiral manoeuvres; see use case 2, related to agriculture).
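As an illustration of the flavour of such learning (a toy stand-in, not Mentat’s actual algorithm), the sketch below discretises position and heading into cells, counts visits, flags states the drone has rarely occupied, and slowly forgets old behaviour so it can adapt when a drone is reassigned:

```python
import math
from collections import defaultdict

class TrajectoryProfile:
    """Toy online trajectory profile: discretise 3D position and heading
    into cells, count visits, and flag states the drone has rarely
    occupied. Exponential decay slowly forgets old behaviour, so the
    profile adapts if the drone is reassigned to a new route."""

    def __init__(self, cell_size=10.0, heading_bins=8, decay=0.999, threshold=1.0):
        self.cell_size = cell_size        # metres per spatial cell
        self.heading_bins = heading_bins  # coarse direction buckets
        self.decay = decay                # forgetting factor per update
        self.threshold = threshold        # familiarity needed to count as normal
        self.counts = defaultdict(float)

    def _state(self, x, y, z, vx, vy):
        cell = (int(x // self.cell_size), int(y // self.cell_size),
                int(z // self.cell_size))
        angle = math.atan2(vy, vx) % (2 * math.pi)
        heading = int(angle / (2 * math.pi) * self.heading_bins) % self.heading_bins
        return cell, heading   # heading matters: reverse transit is a new state

    def update(self, x, y, z, vx, vy) -> bool:
        """Ingest one GPS fix; return True if it looks anomalous."""
        state = self._state(x, y, z, vx, vy)
        anomalous = self.counts[state] < self.threshold
        for s in self.counts:             # O(n) decay: fine for a toy
            self.counts[s] *= self.decay
        self.counts[state] += 1.0
        return anomalous

profile = TrajectoryProfile()
print(profile.update(0.0, 0.0, 10.0, 1.0, 0.0))  # True: first visit is novel
print(profile.update(0.5, 0.0, 10.0, 1.0, 0.0))  # False: same cell, same heading
```

Because heading is part of the state, a drone retracing its usual route in the reverse direction lands in unseen states and is flagged: exactly the case geo-fencing misses.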

Our system pays all due respect to its great forerunner, target tracking software, but extends that methodology substantially. Broadly speaking, the mathematics underlying classical target tracking solutions are exceptionally accurate at forecasting trajectories of ballistic objects (where an initial or constant force defines the trajectory of an object such as a missile), or at short-term forecasting of autonomous objects with known constraints on their manoeuvrability (consider the difference between a fighter jet’s evasive manoeuvre and the agility of a bumble bee). To track drones one must instead use more flexible methodology, inspired by advances in machine learning.
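For reference, the classical baseline being extended is typically a Kalman filter with linear, constant-velocity dynamics; a compact sketch follows (the noise settings below are arbitrary, chosen only for illustration):

```python
import numpy as np

# Classical constant-velocity Kalman filter in 2D, the workhorse of
# target tracking. State is [x, y, vx, vy]; the model assumes velocity
# barely changes between steps, which is exactly what breaks down for
# an agile, autonomously manoeuvring drone.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity dynamics
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # we observe position only
Q = np.eye(4) * 0.01                         # process noise (arbitrary here)
R = np.eye(2) * 1.0                          # GPS measurement noise (arbitrary)

def kalman_step(x, P, z):
    """One predict/update cycle: state estimate x, covariance P, GPS fix z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# A long-horizon forecast is just F applied repeatedly, i.e. a straight
# line: fine for a ballistic shell, hopeless for a lawnmower pattern.
```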

Below we include two video demonstrations. We make use of a 3D Unity front-end coupled with our streaming machine learning engine. This allows us to perform live demos, but it also expresses our view that a VR front-end is the right choice here: human supervisors need to understand drone trajectories in the context of the physical terrain they are navigating, but a constant live video feed is unrealistic due to battery limitations, bandwidth constraints and cybersecurity issues.

In the above video, we demonstrate the ability of the system to learn a 3D trajectory from scratch. The cones in the video indicate the track the user is expected to follow using the controller (as a way of helping the “pilot” visualise the track), but the system has no prior knowledge of it, which is why an alert is generated every time the drone turns during its first lap of the track. However, as the drone repeats the track, the frequency of alerts (shown in the bottom left corner as a time percentage, and visually in the bottom right corner) decreases dramatically. At the end of the video, an excursion of the drone outside its typical trajectory is immediately flagged as anomalous, even though the drone remains in the interior of the track, where this departure would have been missed by a classical geo-fencing solution.

We also show here the ability of the algorithms to understand patterns: common sets of manoeuvres employed by drones when they are trying to cover an area. Above is a real GPS track from a drone in an agricultural use case, where a so-called “lawnmower” pattern is employed to cover the area. Here we focus on a collision alert use case, where one drone (depicted in red in the video) is attempting to cross a region which is currently being surveyed by another drone (depicted in green in the video). The trajectory of each drone is determined autonomously, depending on its objective, the weather conditions, its battery power, and potentially other more complex criteria. The objective of the monitoring agent is to forecast potential collisions between the two drones.

Clearly the challenge here is to understand the recurring lawnmower pattern, typical in precision agriculture cases. Although the pattern is quite clear to a human, it is a great challenge to automated forecasters — it switches from linear (during take-off) to a recurring pattern, it involves a 3D diagonal movement which renders it asymmetric, and is awash with small departures/delays caused by wind. Moreover, the use case requires us to pick up the pattern very quickly, after just one or two repetitions. These features and requirements virtually incapacitate any classical “periodicity detector” that can be found in off-the-shelf time series or target tracking packages.

Real-Time Geospatial Trajectory Forecaster

The detector instead does a great job of picking up the repeating pattern very early. The smoothness and accuracy of the forecast improve over time as the drone settles into its pattern, and the forecast can extend over very long horizons without losing accuracy. This is in sharp contrast to most commercial target tracking software, which can only forecast over long horizons when the target exhibits what is technically known as “second-order stationarity”; a practical translation is that the “steering wheel and gas pedal” are held in a fixed position (constant angular and linear acceleration).
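One simple way to see how a recurring pattern can be exploited after a single repetition is analogue (nearest-neighbour) forecasting: find the past segment most similar to the recent track and replay what followed it. The sketch below illustrates that principle only; it is not our production forecaster:

```python
import numpy as np

def analogue_forecast(track: np.ndarray, horizon: int, window: int = 5):
    """Find the past segment most similar to the last `window` fixes and
    replay what followed it. After a single repetition of a lawnmower
    leg, the best analogue is the previous leg, so the recurring
    pattern is picked up immediately."""
    recent = track[-window:]
    best, best_cost = None, np.inf
    # Only consider starts that leave `horizon` points to replay.
    for start in range(len(track) - window - horizon):
        cost = np.sum((track[start:start + window] - recent) ** 2)
        if cost < best_cost:
            best, best_cost = start, cost
    if best is None:
        # Not enough history yet: fall back to a persistence forecast.
        return np.repeat(track[-1:], horizon, axis=0)
    return track[best + window: best + window + horizon]

# Toy periodic track: x sweeps 0..9 repeatedly while y steps up each leg.
t = np.arange(40)
track = np.stack([t % 10, 2.0 * (t // 10), np.full(40, 30.0)], axis=1)
print(analogue_forecast(track, horizon=5))  # predicts the turn to the next leg
```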

It’s important to note that we don’t raise an alert as soon as the two forecasts start intersecting. Our prediction is in fact 4D, since it takes into account time (i.e., the speed of the drones). Therefore, forecasts are allowed to overlap as long as the drones never occupy the same space at the same time. That is why an alert is raised for only one of the several intersections between the light blue and light red forecasted trajectories: indeed, that is the only one which would have led to a collision, as is evident near the end of the video.
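In code, such a 4D check reduces to comparing two time-aligned forecasts step by step: trajectories may intersect in space, but an alert should fire only when the predicted separation collapses at the same time step. A minimal sketch, with a made-up 5-metre separation threshold:

```python
import numpy as np

def collision_alert(forecast_a, forecast_b, min_sep=5.0):
    """4D conflict check on two time-aligned forecasts, each of shape
    (T, 3): predicted (x, y, z) at shared timestamps. An alert fires
    only if the drones are within `min_sep` metres at the same step."""
    gaps = np.linalg.norm(forecast_a - forecast_b, axis=1)
    return np.where(gaps < min_sep)[0]   # indices of conflicting time steps

# Two head-on forecasts: they meet mid-route at the same instant.
t = np.linspace(0.0, 1.0, 21)[:, None]
a = np.hstack([100 * t, np.zeros_like(t), np.full_like(t, 30.0)])
b = np.hstack([100 - 100 * t, np.zeros_like(t), np.full_like(t, 30.0)])
print(collision_alert(a, b))             # [10]: conflict at the crossing point
```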

The action that would follow a collision alert depends on the use case. It could trigger a human override in a centralised control scenario, or interact with the software agents on the drones in a distributed control scenario to avoid potential collisions. In this video simulation, the red drone avoids the collision by temporarily climbing higher. This capability becomes particularly useful in scenarios involving multiple drones in challenging environmental conditions.

We are particularly grateful to have had the support of Ordnance Survey (Geovation) on the geospatial modelling side and of InnovateUK on the augmented and virtual reality side to bring this higher-risk feasibility study to fruition. We are working on a number of projects related to drone data (raw and sensor/imaging), which we will share soon.
