This post is based on the paper we submitted to the 35th AAAI Student Abstract and Poster Program (Sung, Li, and Ortolano 2021).
In recent years, wildfire has become an unavoidable natural disaster that continues to threaten fire-prone communities. Due to ongoing climate change, global warming, and fuel drying, the frequency of devastating wildfires increases every year (Halofsky, Peterson, and Harvey 2020). The consequences of massive wildfires are brutal. For instance, in 2003, wildfires in San Diego County burned over 376,000 acres and 3,241 households, amounting to approximately $2.45 billion in total economic costs (Diaz 2012). Traditional physics- and empirically-based wildfire spread models have long been studied to mitigate losses from wildfire, but these models often require extensive inputs. We therefore present a deep learning method that determines dynamic wildfire profiles from basic input data: historical wildfire profiles, weather, and elevation.
Model and Implementation
U-Net was first introduced solely for the purpose of image segmentation on biomedical images (Ronneberger, Fischer, and Brox 2015). The model has two major paths. It begins with a contraction path, which uses convolutions and max pooling to extract features from the image. The model then undergoes an expansion path, where the image is upsampled back to the original input size to enable precise localization.
We decided to utilize the U-Net architecture because 1) the model lets us input an image and output a precisely segmented image, 2) it works well with a small dataset, and 3) it can predict a wildfire profile within a second.
The U-Net model is adjusted to make it more applicable to our study. Like U-Net, WildfireNet is composed of two major paths: contraction and expansion. A sigmoid activation function at the last layer outputs a per-pixel probability of fire. Binary classification is performed on each pixel of the image to determine whether there is a fire or not, so binary cross-entropy is used as the loss function to train the model. To create a predicted binary map, an optimal threshold is set to label pixels as fire or non-fire.
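The thresholding step can be sketched as follows. This is a minimal illustration, not the paper's code: the 0.5 cutoff is a placeholder, whereas the paper tunes an optimal threshold.

```python
import numpy as np

def to_binary_map(prob_map, threshold=0.5):
    """Label each pixel fire (1) or non-fire (0) from the sigmoid output."""
    return (prob_map >= threshold).astype(np.uint8)

# Toy per-pixel probabilities from a sigmoid output layer.
probs = np.array([[0.9, 0.2],
                  [0.6, 0.4]])
binary = to_binary_map(probs)
# binary -> [[1, 0], [1, 0]]
```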
In contrast to U-Net, WildfireNet has fully connected layers at the bottom of the architecture. After the last downsampling, the 3D feature volume is flattened into a 1D array and the weather data is appended. The model is then trained with dense layers to learn the effect of the weather variables on its prediction. Furthermore, past wildfire profiles can play a dominant role in the future shape of the wildfire, so 3D CNNs were used instead of 2D. A 3D CNN extracts features from both the temporal and spatial dimensions, whereas a 2D CNN only captures spatial features (Tran et al. 2015). In this study, the 3 previous days of wildfire profiles are stacked to convert the input images from 2D to 3D. This gives the model a better sense of how historical fires correlate with the fire on the next day.
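The bottleneck step described above (flatten, then append weather) can be sketched in a few lines. The shapes here are illustrative assumptions, not the paper's actual layer sizes:

```python
import numpy as np

# Hypothetical bottleneck shapes: 3 stacked days, downsampled to 16x16.
features = np.zeros((3, 16, 16))  # (days, H, W) after the last downsampling

# Six normalized weather variables (toy values): temperature, relative
# humidity, wind speed/direction, gust speed/direction.
weather = np.array([0.3, -1.2, 0.5, 0.0, 1.1, -0.4])

flat = features.reshape(-1)                   # 3 * 16 * 16 = 768 values
dense_input = np.concatenate([flat, weather])  # fed to the dense layers
# dense_input.shape -> (774,)
```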
Dataset and Preprocessing
Dynamic Wildfire Perimeters
A total of 302 daily fire perimeters were retrieved. The dataset is small compared to other deep learning studies; however, WildfireNet is derived from U-Net, which has been shown to perform well on small datasets (Ronneberger, Fischer, and Brox 2015). Wildfire perimeters were obtained from the NIFC FTP server as .kmz files, each containing an array of boundary coordinates. In this paper, only fires that occurred in California from 2013 to 2019 were considered.
For each wildfire perimeter, as shown in Figure 3, the array of boundary coordinates was used to fill the interior of the perimeter and create a binary map reflecting the overall shape of the wildfire. In other words, if a given pixel lies within the perimeter, it is labeled 1 to indicate fire; if it lies outside the boundary line, it is labeled 0 to indicate no fire. An important assumption was made when creating the binary map: in some cases, there were spots within the boundary that were not on fire. These spots were considered risk zones and were filled in as well.
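One way to perform this rasterization is a point-in-polygon test on each pixel center. This is an illustrative sketch with a made-up square perimeter; the real perimeters come from the .kmz coordinate arrays.

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon's vertex list?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(poly, h, w):
    """Fill the perimeter: pixel centers inside the polygon become 1."""
    grid = np.zeros((h, w), dtype=np.uint8)
    for r in range(h):
        for c in range(w):
            if point_in_polygon(c + 0.5, r + 0.5, poly):
                grid[r, c] = 1
    return grid

square = [(1, 1), (4, 1), (4, 4), (1, 4)]  # toy perimeter
fire_map = rasterize(square, 6, 6)
# fire_map has a filled 3x3 block of ones inside the square
```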
Overall, the preprocessed binary map is used as the input representation of the wildfire profile. The binary map has a resolution of 0.5 degrees in both latitude and longitude. Maintaining the same resolution for every fire is important to distinguish fires with respect to their sizes.
Elevation can imply important characteristics of the fire's location, such as the composition of vegetation, the rate of fire spread, and changes in temperature (Estes et al. 2017).
A 1/3 arc-second digital elevation model (DEM) was retrieved from the USGS National Map to reflect the elevation at each fire's location. Since the DEM contains an elevation value for each pixel, it gives the model a sense of how elevation can affect fire spread. For each wildfire event, the coordinates at the corners of the binary map were used to form a clipping box to extract the DEM, so the DEM is correctly aligned with the location of the fire. Elevation values were then normalized to keep them in the same range as the other data. The pixel-wise elevation data is added alongside the profile, so each input image consists of two channels: binary map and elevation.
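The two-channel input assembly might look like the sketch below. The array sizes, min-max normalization, and channel ordering are our illustrative assumptions, not details from the paper:

```python
import numpy as np

# Toy DEM clipped to the binary map's bounding box (values in meters).
dem = np.random.default_rng(0).uniform(0, 3000, size=(64, 64))
fire_map = np.zeros((64, 64), dtype=np.float32)  # binary wildfire profile

# Normalize elevation so it shares the [0, 1] range of the binary map.
dem_norm = (dem - dem.min()) / (dem.max() - dem.min())

# Stack the two channels: (channels, H, W).
x = np.stack([fire_map, dem_norm], axis=0)
# x.shape -> (2, 64, 64)
```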
Weather plays a significant role in the spread of fire. Weather data was retrieved from the CEFA-WFAS FW13 Fire Weather Data File Interface, with the closest in-situ weather station chosen for each wildfire event. Daily average temperature, relative humidity, wind speed, wind direction, gust speed, and gust direction were used as weather variables. They inform the model of the atmospheric conditions during the period of the wildfire.
Weather data were normalized by zero-centering each variable around the origin and scaling it by its standard deviation. This preprocessing gives each variable equal learning weight in the model.
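This is standard z-score normalization; a minimal sketch with toy values (rows are wildfire days, columns are weather variables):

```python
import numpy as np

# Toy weather table: temperature (C), relative humidity (%), wind speed (m/s).
weather = np.array([[21.0, 35.0, 3.2],
                    [28.0, 20.0, 5.1],
                    [25.0, 15.0, 8.4]])

# Zero-center each column, then scale by its standard deviation.
normalized = (weather - weather.mean(axis=0)) / weather.std(axis=0)
# Each column now has mean ~0 and standard deviation ~1.
```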
To benchmark the performance of WildfireNet, a simple logistic regression model was built as a baseline. Its input includes historical binary maps, elevation data, wind speed, wind direction, and the state of the surrounding pixels.
The state of the surrounding pixels is essential because a pixel has a higher probability of catching fire if one of its neighboring pixels is already on fire. Each pixel has a total of 8 neighboring pixels that contribute to its state, as shown in Figure 4.
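A simple way to build this neighbor feature is to count, for every pixel, how many of its 8 neighbors are currently on fire. The function name and exact feature encoding below are our assumptions for illustration:

```python
import numpy as np

def neighbor_fire_count(binary_map):
    """For each pixel, count burning pixels among its 8 neighbors."""
    padded = np.pad(binary_map, 1)  # zero border so edge pixels have 8 slots
    h, w = binary_map.shape
    counts = np.zeros((h, w), dtype=np.int32)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the pixel itself
            counts += padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
    return counts

m = np.array([[0, 1, 0],
              [0, 1, 0],
              [0, 0, 0]])
counts = neighbor_fire_count(m)
# counts[0, 0] == 2: its neighbors (0,1) and (1,1) are on fire
```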
Intersection over Union (IoU) and recall were calculated to evaluate the model’s performance in predicting the profile of the wildfire.
IoU is a common metric for evaluating segmentation masks, and it is straightforward to compute once a predicted image and a ground-truth image are defined. In this study, both images are binary maps: the area of overlap is the number of pixels that have the same value in both images, and the union is the area encompassed by both images. In other words, a predicted image that matches the ground truth perfectly scores an IoU of 1.
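As a minimal sketch, here is IoU computed on two binary maps using the standard fire-class definition (intersection = fire in both maps, union = fire in either); the paper's exact overlap bookkeeping may differ:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union of the fire class in two binary maps."""
    intersection = np.logical_and(pred == 1, truth == 1).sum()
    union = np.logical_or(pred == 1, truth == 1).sum()
    return intersection / union if union else 1.0

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
# One fire pixel in common, two fire pixels overall -> IoU = 0.5
```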
WildfireNet achieved an IoU of 0.997 on the test set, while the baseline model scored 0.913. The result indicates that WildfireNet is excellent at precisely labeling each pixel with the presence of fire.
However, only 5 percent of the pixels in the test-set binary maps are labeled as fire. IoU is not the best metric for such an imbalanced dataset, because a model can obtain a high score simply by predicting every pixel to be non-fire.
Therefore, only the pixels whose labels changed between the current day and the next day, in either direction, were extracted and evaluated. We define these as changed pixels. Considering only the changed pixels measures the model's performance in predicting the changes in the wildfire profile.
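The changed-pixel evaluation can be sketched as below. The function name and recall formulation over the changed mask are our illustrative assumptions:

```python
import numpy as np

def changed_pixel_recall(pred_next, true_today, true_next):
    """Recall restricted to pixels whose label changed from day t to t+1."""
    changed = true_next != true_today  # mask of changed pixels
    tp = np.logical_and(pred_next == 1, true_next == 1)[changed].sum()
    fn = np.logical_and(pred_next == 0, true_next == 1)[changed].sum()
    return tp / (tp + fn) if (tp + fn) else 1.0

today = np.array([[1, 0], [0, 0]])
true_ = np.array([[1, 1], [1, 0]])  # fire grew into two new pixels
pred  = np.array([[1, 1], [0, 0]])  # model caught one of the two
# changed_pixel_recall(pred, today, true_) == 0.5
```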
The output of WildfireNet is a probability of fire at each pixel, as shown in Figure 5. If the model is confident that a certain pixel is on fire, it assigns that pixel a high score. The model also projects how a fire will spread in the future. For example, in Figure 5, the model predicts the fire will enlarge on the right side of the boundary but not much on the left. Indeed, the comparison between the current day and the next day shows that the actual fire expanded only to its right. This validates the model's capacity to predict the growth pattern of wildfires.
On the test set, WildfireNet outperformed the baseline model in both expanded IoU and recall. Both models scored lowest on expanded IoU because the metric further penalizes predicted fire that is not present in the actual fire. WildfireNet achieved a recall of 0.517, while the baseline model scored 0.152. In other words, WildfireNet correctly predicts about half of the actual fire expansion, while the baseline is only correct about 15% of the time. Moreover, when inspecting each fire independently, WildfireNet scored a recall of around 0.75 for fires that grew slowly.
We do think the current method of incorporating weather variables alongside the fire profiles has limitations. The small dataset might also not be enough for the models to learn the relationship between fire profiles and weather information. We hope to find a better way to instill such unstructured variables into the model.
Unlike traditional physics- and empirically-based spread models, WildfireNet does not require extensive inputs. The results show that WildfireNet is capable of learning patterns from historical fire spread, topography, and weather to predict the wildfire profile of upcoming days. Overall, WildfireNet is a novel wildfire spread model and has the potential to be a tool that aids firefighters in their decision-making.
Please comment if you have any thoughts/ideas on improving WildfireNet!
Thanks guys for reading this post :D
References
Diaz, J. M. 2012. Economic Impacts of Wildfire. Southern Fire Exchange.
Halofsky, J. E.; Peterson, D. L.; and Harvey, B. J. 2020. Changing wildfire, changing forests: the effects of climate change on fire regimes and vegetation in the Pacific Northwest, USA. Fire Ecology 16(4). doi.org/10.1186/s42408-019-0062-8
Radke, D.; Hessler, A.; and Ellsworth, D. 2019. FireCast: Leveraging Deep Learning to Predict Wildfire Spread. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 4575–4581. doi.org/10.24963/ijcai.2019/636
Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv preprint arXiv:1505.04597 [cs.CV].
Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning Spatiotemporal Features with 3D Convolutional Networks. arXiv preprint arXiv:1412.0767v4 [cs.CV].
Sung, S.; Li, Y.; and Ortolano, L. 2021. WildfireNet: Predicting Wildfire Profiles (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence 35(18): 15905–15906.