Platforms for imagery

There are many different platforms used to take images of plants. The most commonly used by scientists are satellites and airplanes, but UAV technology is becoming more available as UAVs and their cameras become more affordable. Each of these platforms has advantages and disadvantages. For example, there are lots and lots of satellite images freely available from NASA (thank you NASA!) that capture large areas at regular intervals, but often at low temporal or spatial resolution. This means that each pixel of an image can be tens to hundreds of meters in size, and images could be collected weeks apart. Depending on what you’re trying to measure, that could be your perfect solution or a project-killing problem. Also, because satellites image through the entire atmosphere, atmospheric interference can wreak havoc in the data you’re trying to collect; just ask any remote sensing graduate student or scientist about clouds, then sit back and enjoy a 30-minute diatribe about water vapor. And it’s not just the clouds that are problems: cloud shadows introduce variation as well. As an example, check out these satellite images of central California that were obscured by smoke, and then by the remnants of a tropical storm, over this past Labor Day weekend (2017). Imagine the trouble you’d be in if this were a critical time for data collection on your garlic farm in Gilroy! Some atmospheric interference can be corrected, but these corrections can introduce uncertainty into your data.
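To put rough numbers on that tradeoff, here is a quick back-of-the-envelope sketch in Python (the pixel size, field size, and revisit interval below are illustrative, Landsat-like assumptions, not tied to any one sensor):

```python
# Back-of-the-envelope: how coarse is "tens of meters per pixel"?
# All numbers are illustrative (Landsat-like), not from a specific sensor.

pixel_size_m = 30.0    # side length of one pixel on the ground
field_size_m = 100.0   # a small 1-hectare square field (100 m x 100 m)
revisit_days = 16      # typical revisit interval for a single satellite

pixels_across = field_size_m / pixel_size_m
total_pixels = pixels_across ** 2

print(f"One pixel covers {pixel_size_m**2:.0f} m^2 of ground")
print(f"A 1 ha field spans only ~{total_pixels:.0f} pixels")
print(f"...and a new (possibly cloudy) image arrives every {revisit_days} days")
```

At those scales, a whole vine row can land inside a single pixel, which is exactly why coarse resolution can be a project-killing problem for some questions and a non-issue for others.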

Holy atmospheric interference, Batman!

Imagery taken from an airplane has less atmosphere between it and the ground than imagery from a satellite (planes are closer to Earth), but it still suffers atmospheric interference, which can change day to day with the weather. There are other challenges with taking images from airplanes. For example, if you were taking thermal imagery, the temperature of the camera body itself can change the camera’s measurements, which means that altitude and atmospheric conditions on the day of the flight could introduce variation into your data (never mind turbulence!). Cloud cover can still be a problem, but at least an airplane can get below the clouds (though you may then need to account for the reduced incoming solar radiation, which will change measured reflectance). However, aerial imagery generally has higher spatial and temporal resolution than available satellite data. But to acquire these data, you would need to hire a crew to make flights, purchase the images from a company, or pilot the aircraft yourself.
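To see why reduced incoming light matters, here is a toy sketch (my own illustration, with made-up numbers, ignoring viewing geometry) of how apparent reflectance gets biased if clouds dim the sun but you normalize by a clear-sky reference:

```python
# Toy illustration (not the author's method): reflectance is a ratio of
# light reflected to light arriving. If clouds cut the incoming light but
# you normalize by a clear-sky reference, your "reflectance" is biased low.

true_reflectance = 0.40        # what the leaves actually reflect
clear_sky_irradiance = 1000.0  # reference incoming light (arbitrary units)
cloud_factor = 0.70            # clouds pass only 70% of the light today

actual_irradiance = clear_sky_irradiance * cloud_factor
measured_radiance = true_reflectance * actual_irradiance

apparent = measured_radiance / clear_sky_irradiance   # wrong reference
corrected = measured_radiance / actual_irradiance     # measured on-site

print(f"apparent reflectance (uncorrected): {apparent:.2f}")   # 0.28
print(f"corrected reflectance:              {corrected:.2f}")  # 0.40
```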

UAVs can provide high-resolution imagery whenever you want to fly; this potentially means extraordinary spatial resolution (we’re talking millimeters) wherever and whenever you want it. But, like airplanes, you need to fly them yourself or hire a company to come do it. I should also point out that there are some regulatory hoops to jump through before flying your own UAV professionally. Also, the images collected from UAVs are taken closer to the subject (like a vineyard), which means that the plants in your images are seen from more variable perspectives than they would be from a plane or satellite far overhead.
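If you’re wondering where “millimeters” comes from: a common back-of-the-envelope formula for ground sample distance (GSD) is pixel pitch times flying height divided by focal length. A minimal sketch, assuming plausible placeholder camera specs (not any particular model):

```python
# Ground sample distance (GSD): how much ground one image pixel covers.
#   GSD = (pixel pitch * flying height) / focal length
# Camera numbers below are plausible placeholders, not a specific model.

pixel_pitch_m = 2.4e-6   # 2.4 micron sensor pixels
focal_length_m = 8.8e-3  # 8.8 mm lens

for height_m in (10, 30, 120):   # typical UAV flying heights
    gsd_m = pixel_pitch_m * height_m / focal_length_m
    print(f"at {height_m:>3} m: GSD ~ {gsd_m * 1000:.1f} mm/pixel")
```

At 10 m above the canopy that works out to a few millimeters per pixel, which is the kind of detail no satellite or crewed aircraft can match.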

Sorry I didn’t go to art school, but I hope this conveys at least a little information beyond “Burke can’t draw”. The point is that A is WAY smaller than B: if the satellite is 700 km above the grapes, the drone is 30 meters above them, and the distance from the most northerly to the most southerly vines is 10 meters, then A is ~0.0008° and B is ~20°.
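For the skeptical, the trigonometry checks out. A minimal sketch using the 700 km, 30 m, and 10 m figures from the drawing:

```python
import math

# Angle subtended at the camera by a 10 m row of vines, viewed from a
# satellite at ~700 km versus a drone at ~30 m (figures from the drawing).

row_length_m = 10.0

for name, altitude_m in (("satellite", 700_000.0), ("drone", 30.0)):
    # Camera centered over the row: half the row each way from nadir.
    angle_deg = 2 * math.degrees(math.atan((row_length_m / 2) / altitude_m))
    print(f"{name:>9}: {angle_deg:.4f} degrees")
```

This prints ~0.0008° for the satellite and ~18.9° for the drone, and that same difference in viewing geometry is why each individual plant is seen from a noticeably different angle in a UAV image, as the next paragraph describes.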

Changing and variable perspectives can lead to more variation in UAV images than in satellite or airplane-based images. As a hypothetical example, if the sun is shining brightly from the south (to the right in the drawing above), plants to the south (right) of the UAV could appear to have more shadows because they will be more backlit relative to the camera, and plants to the north (left) will appear to have fewer shadows because they are more sunlit relative to the camera. It is also possible that some plants to the south will be brighter (i.e., have a greater maximum reflectance) because they will be reflecting more sunlight directly at the camera, and some plants to the north will be less bright (i.e., have a lower maximum reflectance) because they will be reflecting more sunlight away from the camera. This can again mean more spatial and temporal heterogeneity (i.e., noise) in your data. This is true in satellite data, too, but to a much lesser extent, because the difference in viewing angles between the satellite and each plant in your area of interest is much smaller in a satellite image (A < B) than in a UAV image. Across entire satellite image scenes, these types of problems are typically corrected, but they are present in the raw data.

However, despite the broad use of satellites and airplanes, and the recent use of drones, I would bet that the most utilized platform in the world for taking pictures of plants is simply a handheld camera: just think of how many photos of fall colors, mountains, and fields are taken every year. Most everyone has a camera on their cell phone or a digital camera in their closet. These cameras are cheap to buy and easy to use, and they have huge potential for measuring plants, but they are under-utilized. More on this later.

Gothic Mountain, Colorado, near the Rocky Mountain Biological Laboratory where I did fieldwork for my PhD (photo credit: Burke Greer).