Accelerating Data Workflow Efficiencies to Advance Wildlife-Friendly Clean Energy

Point Blue Conservation Science’s Accelerator grant partnership with PJMF is focused on developing data-driven approaches to guide the optimized placement of offshore wind farms, with the goal of maximizing their energy production while minimizing their biological impact.

By Liz Chamberlin, Sam Veloz, Cotton Rockwood, Martin Magana, and Leo Salas

Point Blue is working with agency and funding partners to compile and analyze all the available data on the biology, energy potential, and Indigenous and commercial uses of California offshore waters. Despite the climate benefits of one such potential use — offshore wind energy development — this technology could pose risks to the marine environment. Potential environmental impacts include reduced upwelling due to changes in alongshore wind patterns, disturbance to marine wildlife such as displacement from important migratory corridors or feeding grounds, and direct mortality from collisions with turbines. Economic activities, such as shipping and fishing, could also be impacted. As offshore wind leases are offered along the U.S. West Coast and projects are planned and constructed, scientific data analyses are critical and urgently needed to guide siting in a way that minimizes impacts on marine biodiversity and other competing uses while also maximizing energy production.

A critical aspect of this work is the need for increased transparency and accountability in decision-making to promote trust in the process. Our project is a prime example of the practice of open science, where all data and methods are available for anyone to use, and all decisions rest on fully documented, well-justified assumptions about how the data were applied. This year, Point Blue is part of a global cohort working with the Data and Society Accelerator Program to address data workflow challenges like these at our organization.

Our team was fortunate not to be starting from scratch in thinking about data management at our organization. Point Blue has a fairly well-developed informatics team and infrastructure, but we have increasingly struggled to streamline our workflows for complex data analyses such as those involved in our offshore wind modeling work. These challenges stem from using ever-larger data sets spanning a wider range of types and formats, and from needing to explore model uncertainty by scaling up the number of model runs, which demands efficient software and computing processes. Refining our offshore wind model code for batch processing, creating the automated processing environment, and defining the use cases for the impact model and optimization have been the primary work associated with this project so far.

Our initial task was therefore to create a collaborative work environment to facilitate clear and efficient communication across our informatics and science teams so that we could effectively work with the PJMF data engineers who support us as part of this grant partnership. Together, we examined the offshore wind siting pilot model that our team had created previously. We tried to identify (1) what was lacking, (2) what needed to be improved, and (3) what we could learn from this project that might be transferable to other work across our organization. This third component was especially important, as our primary goal in this program is to make sure that we learn skills that advance our organization as a whole. This initial effort to build communication and partnership and explain our project challenges helped us better define what we would endeavor to accomplish in the accelerator program. It culminated in the development of our tactical roadmap, a concise and highly visual representation of three key tasks we will tackle.

Our entire team — which consisted of our program director, data analyst, software engineer, ecologist, and strategist — was involved in the development of our tactical roadmap. It was important that this was a collaborative endeavor because we found the exercise of building the roadmap incredibly useful in helping our team coalesce around project goals and workflow. Previously, our process had been to conceive of a project and then jump in and start work with minimal communication and coordination among us. This, we now realize, hindered efficiency. By constructing the tactical roadmap, we found that taking the time to think through the workflow ahead of time as a group helped prevent problems along the way that might have slowed or constrained our progress. Before employing a tactical roadmap, we sometimes got “trapped in the weeds,” or had misunderstandings within the group about how a particular component or feature would be designed. So far, the tactical roadmap process has proven an effective tool for avoiding these pitfalls.

For example, when we started building this software, our engineers thought of it as a single process with many components. But through the practice of learning and creating a C4 model (basically, a user-friendly approach to software architecture diagramming), we were able to identify three distinct processes that could be further broken down into sub-components. Defining three tasks and deconstructing the steps within each helped us set priorities and timelines that balance task complexity against the benefits of learning new, desired skills. Creating the timeline also helps us recognize when we are behind on a particular feature and decide whether to keep investing in it or move on to another.
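A decomposition like the one described above can be captured even in plain code before any formal diagramming tool is adopted. The sketch below is illustrative only: the three process names loosely follow the tasks mentioned in this article (data standardization, the impact model, and siting optimization), but the sub-component names are hypothetical and not drawn from the actual C4 model.

```python
# Illustrative container-level breakdown in the spirit of a C4 model.
# Process names follow the tasks described in the article; the
# sub-component names are hypothetical placeholders.
architecture = {
    "data_standardization": [
        "batch_conversion",
        "validation_checks",
        "raster_template_output",
    ],
    "impact_model": [
        "species_distribution_layers",
        "collision_risk_scoring",
    ],
    "siting_optimization": [
        "constraint_definition",
        "scenario_comparison",
    ],
}

# A quick inventory helps sanity-check the decomposition.
for process, parts in architecture.items():
    print(f"{process}: {len(parts)} sub-components")
```

Even a throwaway inventory like this makes it easy to spot when one process is accumulating too many responsibilities and should be split further.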

The tactical roadmap also allowed us to think through which steps were critical to the overall project goals and which tasks might be “nice to haves” but not essential, streamlining the workflow. For example, it would be nice to have a new database, such as MySQL, to store all project data, but it isn’t necessary because we already use a third-party database that gives us data access via an API. We therefore chose, as our first task, to tackle the automation of data standardization from multiple formats to a consistent raster template, which serves as the input for our wind siting model. To do this, we are restructuring our conversion code so that it 1) can run batch data conversion, 2) includes data and processing checks that are logged, and 3) is initiated automatically using a YAML file that instantiates a containerized process and runs the conversion within AWS Lambda. This first task is helping us develop skills and knowledge (e.g., containerization, automated process launch) that we will use in the next two tasks of our tactical roadmap.
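A minimal sketch of what such a batch-conversion driver might look like is below. Everything here is an assumption for illustration: the dataset names, the config structure (shown as the Python dict a parsed YAML file might produce), and the conversion step itself, which is a stand-in for the real reprojection/resampling a geospatial library such as rasterio would perform.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("conversion")

# Stand-in for a parsed YAML job file; dataset names are hypothetical.
CONFIG = {
    "template": {"rows": 100, "cols": 100, "cell_size_km": 2},
    "datasets": [
        {"name": "seabird_density", "format": "csv"},
        {"name": "whale_sightings", "format": "shapefile"},
        {"name": "wind_speed", "format": "netcdf"},
    ],
}

SUPPORTED_FORMATS = {"csv", "shapefile", "netcdf", "geotiff"}


def convert_to_template(dataset, template):
    """Stand-in for resampling one dataset onto the shared raster
    grid; returns the grid dimensions for illustration."""
    return (template["rows"], template["cols"])


def run_batch(config):
    """Convert every dataset listed in the config, logging checks and
    skipping (rather than crashing on) unsupported inputs."""
    results = {}
    for ds in config["datasets"]:
        if ds["format"] not in SUPPORTED_FORMATS:
            log.warning("skipping %s: unsupported format %s",
                        ds["name"], ds["format"])
            continue
        shape = convert_to_template(ds, config["template"])
        log.info("converted %s to %dx%d grid", ds["name"], *shape)
        results[ds["name"]] = shape
    return results


results = run_batch(CONFIG)
```

Keeping the job definition in a config file rather than in code means new datasets can be added to a batch run without touching the conversion logic, which is what makes the automated, containerized launch practical.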

We have been using our time so far to draw on PJMF expertise for the major issues that arise in our work. We report progress, note obstacles, and incorporate PJMF’s suggestions for improvement. Because PJMF staff tailor technical solutions to our specific challenges and advise us on available computing approaches — a process that would otherwise require extensive learning and trial and error — we’ve adopted a much more efficient workflow. So far, participation in this cohort has been very beneficial, helping our team both manage our current project more effectively and think about how we can improve efficiencies across our organization going forward.


The Patrick J. McGovern Foundation

Inviting conversations on how AI and data solutions create a thriving, equitable, and sustainable future for all.