From the Open-Earth-Monitor Global Workshop 2023: key takeaways and what to expect in 2024

OpenGeoHub · Published in Nerd For Tech · 20 min read · Dec 1, 2023

Prepared by: Tom Hengl (OpenGeoHub), Gilberto Camara (OEMC project Stakeholder Committee lead), Leandro Parente (OpenGeoHub), Carmelo Bonannella (OpenGeoHub), Luca Brocca (CNR), Gregory Duveiller (Max Planck Institute for Biogeochemistry), and Martin Herold (GFZ)

The Open-Earth-Monitor Global Workshop was held at the EURAC Research institute in Bolzano, Italy, from 4–6 October 2023. The Open-Earth-Monitor project gathers open source development communities that aim to build solutions for accurate and cost-effective monitoring of the state of the environment; this is done through 30 use cases focused on concrete user communities and organizations. The Global Workshop was held back-to-back with EuroGEO 2023 at the same venue, which also avoided a lot of unnecessary travel and the associated carbon footprint. We present here some main takeaways from the numerous discussions and try to predict what you can expect from the science and technology fields in 2024. Video recordings from the Global Workshop are available for viewing under the CC-BY license; links to various development projects on GitHub and similar platforms are also provided.

Open-Earth-Monitor project

Open-Earth-Monitor Cyberinfrastructure (OEMC) is a Horizon Europe project running from 2022–2027. We organize annual meetings under the name “Open-Earth-Monitor Global Workshop” at various locations across Europe to help network with other projects and to test-release some of the tools and data we are producing. Why organize our own conferences rather than simply join existing large meetings such as EGU? As a data-science- and application-driven project, we realized that none of the conferences we were aware of would satisfy our specific design and communication needs. In addition, we have set the following minimum standards for hosting a Global Workshop:

  1. All talks are video-recorded in high quality (screen-recording + person recording) and provided as open information via https://av.tib.eu/publisher/OpenGeoHub_Foundation or similar (with a permission from presenter / authors).
  2. Opinion polls and chatting tools are provided (e.g. Slido, Mattermost) to all participants, so both anonymous and personalized feedback can be collected.
  3. The meeting combines a diversity of (1) talks, (2) discussion forums, (3) hackathons and (4) workshops, instead of only 1–2 types of events.
  4. Information provided is shared under Open Science / FAIR research principles (apart from the private / confidential / proprietary information).
Participants of the Open-Earth-Monitor Global Workshop 2023.

The Open-Earth-Monitor Global Workshop (in a nutshell: open source EU communities building solutions for accurate and cost-effective monitoring of the state of the environment) was held at the EURAC Research institute in Bolzano, Italy, from 4–6 October 2023. This was the 2nd open meeting of the project (the 1st was held in Wageningen in June 2022) and we were happy to receive almost 150 registrations with over 40 talks, workshops and hackathons. All video-recorded talks are available for viewing (also listed on our https://earthmonitor.org/knowledge-hub/). The programme of the meeting is still available from our pretalx installation.

Key takeaways from the meeting

There were many interesting discussions, although most participants (including the authors of this post) could only follow about 40–45% of the talks as the sessions ran in parallel. Below we list, in no particular order, the 12 takeaways that caused the most stir and discussion. For even more discussion please refer to our Mastodon account provided below.

#1: Reproducibility of many decision-ready data sets produced is worse than you think

In her opening keynote, Julia Wagemann reviewed the top 5 challenges related to finding, accessing, and interoperating big Earth data: (1) limited processing capacity; (2) growing data volume; (3) lack of standards for dissemination; (4) proliferation of data portals; (5) limited capacity for data discovery. Another big challenge of processing EO data is the reproducibility of research results, discussed in detail in Julia's paper “Five guiding principles to make Jupyter notebooks fit for EO data education” (Remote Sensing, 2022). Her findings (which match the findings of other similar research) indicate that, although Jupyter notebooks are widely used for sharing scientific results, only 25% of more than 1 million notebooks could actually be executed, with only 4% of notebooks producing the same results.

Many EU data science projects suffer from limited reproducibility. Julia reminded us that only 4% of notebooks on GitHub produce the same results as documented (based on Pimentel et al., 2019).

Why is the reproducibility of research results such an issue? We can only guess, but we can assume that to provide fully reproducible results many authors would need to increase their knowledge of Docker, Continuous Integration and Delivery, and similar tools. All of these are quite complex and require a high level of computer science expertise, which many researchers unfortunately do not have (we are all a generation living in the digital era, but often self-taught in computer science!). To start learning about tidy coding and full reproducibility, we advise referring to Peikert et al. (2021), Reproducible Data Science with Python and/or the Reproducible Science Curriculum. For those interested in how reproducible workflows can be implemented on open EO data, we also recommend watching the tutorial by Edzer Pebesma: “Reproducible and Reusable Remote Sensing Workflows” (see below).
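As a small, hedged illustration of the kind of habits this involves, the Python sketch below (our own minimal example, not code from any of the cited tutorials) fixes the random seed and records the exact package versions next to the results, so that a later re-run of a notebook can at least be checked against the original environment.

```python
# Minimal, hypothetical reproducibility helper: fix the random seed and
# record the software environment next to the results, so a later re-run of
# the notebook can be compared against the original versions.
import random
import sys
import platform
import importlib.metadata as md

import numpy as np

SEED = 42
random.seed(SEED)
np.random.seed(SEED)

def environment_report(packages=("numpy", "pandas", "scikit-learn")):
    """Return a dictionary describing the Python runtime and key package versions."""
    report = {"python": sys.version, "platform": platform.platform()}
    for pkg in packages:
        try:
            report[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            report[pkg] = "not installed"
    return report

# Store this dictionary alongside any figures or tables the notebook produces.
print(environment_report())
```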

Edzer Pebesma: “Reproducible and Reusable Remote Sensing Workflows”. Many scientific journals unfortunately do not require any level of reproducibility in the peer review process. Many authors would be happy to share their code and document their analysis and results, but there seems to be little or no support for this from the publishers' side (beyond providing links to GitHub or similar).

#2: Changing time for space OR how temporal analysis of time-series can help increase accuracy of predictive mapping

One of the recurring discussion points during the Global Workshops is that we under-utilize time-series analysis in the processing of remote sensing images. Temporal analysis involves studying “temporal signatures” — distinctive patterns over time in features like vegetation types, land cover, cropping systems, and land use changes. Each land cover class and land use system produces different (often unique) temporal signatures: the finer the temporal resolution, the more clearly these signatures can be recognized.

Two notable approaches were presented at the Workshop: the sits package and the “Dynamic Time Warping” (DTW) method. The sits package, presented by Gilberto Camara, employs a temporal-analysis-first strategy where 1D deep learning algorithms are used to classify temporal signatures. Victor Maus, in his talk, showed that the DTW method, when combined with the first nearest neighbor (1-NN) classifier, can enhance classification by exploiting the distinctive patterns of land cover classes over time.
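To make the second approach concrete, here is a hedged sketch of DTW-based 1-NN classification of temporal signatures using the tslearn library on synthetic NDVI-like series; it uses plain DTW rather than the time-weighted variant (TWDTW) discussed in the talk, and none of the data or parameters come from the presented work.

```python
# Hedged sketch: 1-NN classification with a DTW distance on toy time series.
import numpy as np
from tslearn.neighbors import KNeighborsTimeSeriesClassifier

rng = np.random.default_rng(42)
t = np.linspace(0, 2 * np.pi, 23)  # e.g. 23 vegetation-index composites per year

# Two synthetic "temporal signatures": a single-peak and a double-peak pattern.
class_a = np.sin(t) + rng.normal(0, 0.1, (20, t.size))
class_b = 0.5 * np.sin(2 * t) + 0.4 + rng.normal(0, 0.1, (20, t.size))

X = np.vstack([class_a, class_b])   # shape (n_samples, n_timesteps)
y = np.array([0] * 20 + [1] * 20)   # land cover labels

clf = KNeighborsTimeSeriesClassifier(n_neighbors=1, metric="dtw")
clf.fit(X, y)
print(clf.predict(X[:3]))           # classify a few signatures
```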

Gilberto Camara: time-series of various biophysical indices show distinct patterns (signatures) and these can be used to distinguish between land cover / land use systems.
Victor Maus: “Overcoming Data Scarcity in Land Use Monitoring with Time-Weighted Dynamic Time Warping”. Each vegetation type, each land use system often produces specific patterns in the temporal domain. Using such analysis to classify land cover types and land use change classes is in general under-used in geospatial sciences.

However, these approaches come with computational demands and a drawback: they tend to overlook the spatial proximity of features, i.e. geographical closeness. What seems increasingly interesting are methods that (1) integrate both temporal signatures and spatial objects, i.e. spatial connectivity, (2) are fast, and (3) can be applied universally across the globe.

#3: Using GEE or similar Big-Tech cloud infrastructure, everyone can now map world land cover at high resolution even from their laptop, but do we need so many land cover / land use products?

Land cover and land use mapping is among the most fundamental application areas of EO. Since major infrastructures such as Google Earth Engine (GEE) provide easy access to complete copies of EO archives, everyone can today upload some reference training points to GEE and produce land cover maps of the world. Venter and Sydenham (2021), for example, made a land cover map of Europe at 10 m spatial resolution directly in Google Earth Engine just by uploading the 70K LUCAS points: “within the Google Earth Engine cloud computing environment, the ELC10 map can be generated from approx. 700 TB of Sentinel imagery within approx. 4 days from a single research user account. The map achieved an overall accuracy of 90% across eight land cover classes”. Generating high spatial resolution land cover products is becoming increasingly easy: you only need a laptop and a Google Earth Engine account, knowledge of Javascript and some patience (or use multiple GEE accounts in parallel?). But do we need “a flood” of land cover / land use products, and does this fragment the field and confuse users too much? Does it help the field or is it slowing us down?
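For readers unfamiliar with the recipe, here is a hedged sketch of the generic “upload points, classify a composite” workflow in the Earth Engine Python API; the asset path, band selection and classifier settings are placeholders of our own, not those used by Venter and Sydenham (2021).

```python
# Hedged sketch (Earth Engine Python API) of a generic land cover classification.
import ee

ee.Initialize()  # may require ee.Authenticate() and a Cloud project on first use

# Median Sentinel-2 surface reflectance composite for one year.
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterDate("2021-01-01", "2021-12-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .median()
      .select(["B2", "B3", "B4", "B8", "B11", "B12"]))

# Hypothetical table of labeled reference points (e.g. LUCAS-style samples)
# with an integer property "lc_class"; the asset path is a placeholder.
points = ee.FeatureCollection("users/your_account/lucas_points")

training = s2.sampleRegions(collection=points, properties=["lc_class"], scale=10)
classifier = ee.Classifier.smileRandomForest(100).train(
    features=training, classProperty="lc_class", inputProperties=s2.bandNames())

land_cover = s2.classify(classifier)  # export or visualize as needed
```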

Gilberto Camara: EO for compliance? High accuracy is crucial for uptake by policy makers and executive agencies.

Using some examples from Brazil, Gilberto Camara demonstrated how some global products (that nominally have high classification accuracy) can be of limited use at the national level, i.e. for official reporting. Camara: “if it does not meet the golden standard, it will be of little use”. So yes, it is today easier than ever to produce global predictions; but producing data sets that feed into policy is much more difficult and requires extreme accuracy (e.g. per-class error <5%), which in the case above is achieved by experts manually checking all results against various GIS layers (many of the best, most accurate ground-truth layers are only available locally, e.g. within a local government agency).

In summary: all mapping and modeling efforts are useful (George Box: “All models are wrong, but some are useful”); some are best at some aspect, some are already suboptimal (hence useful only for sandboxing or demonstration). The task for the research community is to find robust ways to use the best of the multitude of data products, e.g. through ensemble methods and data fusion approaches (see the sketch below).
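As one minimal, hedged illustration of such an ensemble step, the sketch below takes a per-pixel majority vote across several already harmonized and co-registered land cover maps; it demonstrates the general idea only and is not a method endorsed by any of the speakers.

```python
# Per-pixel majority vote across several land cover products (illustration only).
import numpy as np
from scipy import stats

def majority_vote(maps: np.ndarray) -> np.ndarray:
    """maps has shape (n_products, height, width) with integer class codes."""
    mode, _count = stats.mode(maps, axis=0, keepdims=False)
    return mode

products = np.stack([
    np.random.randint(0, 5, (100, 100)),   # product A (e.g. a global 10 m map)
    np.random.randint(0, 5, (100, 100)),   # product B (e.g. a continental map)
    np.random.randint(0, 5, (100, 100)),   # product C (e.g. a national map)
])
fused = majority_vote(products)
print(fused.shape)
```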

#4: Multiscale products (global, continental, national, regional) can be combined as long as similar / compatible standards are used

“I still see people in geospatial continue to believe that one unique data source solves some major issues by itself. While the reality is that data fusion is the key to delivering the actual value.” (Mykola Kozyr)

During the Global Workshop we saw a number of talks that basically overlap in focus (e.g. land cover mapping, forest inventories, canopy height mapping, drone/UAV-based LiDAR surveys, etc.) but differ in geographical extent, going from global, continental and national to regional and local modeling (see e.g. drone-based hyperspectral imaging as in Thomas Maffei: “Use of airborne hyperspectral images in support to Alpine forest managers”). A recurring question at the Global Workshop was: can these multi-source and multi-scale mapping and modeling efforts be combined / integrated? Products at significantly different resolutions may be difficult to integrate, as they often carry completely different components of the signal. For example, downscaling 1 km images to match the spatial resolution of Sentinel / Landsat products (10–30 m) is a long stretch, but combining 5 km with 1 km images, or 250 m with 30 m, is definitely something we would approve of. Martijn Witjes and Luka Antonic presented a workshop explaining how they build an integrated data cube for continental Europe (see below); Robert Masolele from Wageningen University combines both local and cloud-based data into a single workflow for modeling deforestation drivers across Africa.

Martijn Witjes / Luka Antonic: “EcoDataCube.eu: an open environmental data cube for Europe”. Integrating multisource data, especially multisource EO images, is a complex task.
Robert Masolele: “Mapping the diversity of land use following deforestation across Africa”. An example of a project where multisource EO data from 5 m resolution to 30 m resolution or coarser was combined into a seamless mapping system.
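A hedged sketch of the most basic alignment step behind such multi-scale integration is shown below: a coarser raster is resampled onto the grid of a finer one with rioxarray before the two are fused. The file names are placeholders, and the resolutions only follow the rule of thumb discussed above.

```python
# Align a coarse product with a finer grid before fusing them (placeholder files).
import rioxarray
from rasterio.enums import Resampling

fine = rioxarray.open_rasterio("landcover_30m.tif", masked=True)
coarse = rioxarray.open_rasterio("climate_250m.tif", masked=True)

# Resample the 250 m layer onto the 30 m grid (bilinear for continuous data);
# a ~8x refinement is defensible, unlike forcing 1 km data onto a 10-30 m grid.
coarse_on_fine_grid = coarse.rio.reproject_match(fine, resampling=Resampling.bilinear)

# The two layers now share CRS, resolution and extent and can be stacked or fused.
stacked = [fine, coarse_on_fine_grid]
```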

#5: Absolute “Analysis-Ready Data” does not really exist, but some data is more analysis-ready, i.e. more useful, within a specific context

Peter Strobl from the European Commission's JRC gave an interesting keynote at the Global Workshop 2023 entitled “Analysis Ready Data” (ARD), which deeply questioned what exactly is analysis-ready and what this means in the context of interoperability and specific EU programmes. The amount of available geospatial data has increased exponentially in the last decade, and many data sets call themselves ‘ARD’. However, users relying on their interoperability are often overwhelmed by their diversity and remaining inconsistencies, which often require considerable effort before appropriate data can be selected and joined sensibly. Access to proper reference data and benchmarking methods is therefore an important factor for justifying the ‘ARD’ tag.

Peter Strobl: Analysis Ready Data only exists within a specific context. Absolute ARD does not really exist. Some standards that are pushed as ARD are not always the best solution.

Strobl suggested that we need common reference standards for ARD and open benchmarking protocols where anyone can demonstrate that their data is indeed ARD. These considerations are now being taken up by a formal ISO/OGC Standard Working Group (SWG), which was launched recently. The discussion following this keynote was probably the longest and spanned from data formats and projection systems (e.g. hexagonal Discrete Global Grid Systems, or DGGS) to policy issues connected to data being incomplete or unfit for use.
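For readers unfamiliar with hexagonal DGGS, the tiny hedged example below indexes a point into an H3 cell using the h3-py bindings (v4 API); H3 is just one popular DGGS implementation and is not implied to be the standard the SWG will adopt.

```python
# Tiny illustration of a hexagonal DGGS: index a point into an H3 cell.
import h3

lat, lon = 46.4983, 11.3548            # Bolzano, roughly
cell = h3.latlng_to_cell(lat, lon, 7)  # resolution-7 hexagon (~5 km^2 on average)
neighbors = h3.grid_disk(cell, 1)      # the cell plus its six neighbors

print(cell, len(neighbors))
```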

#6: The EO industry has limited interest in sharing analysis-ready data — is this the business model that we need?

In the last decades there has been an increasing interest by industry in building large (cloud-based) data lakes with EO data (petabytes of storage) and then providing commercial services where users can connect from their laptop, program and execute analyses, and even build scalable apps for thousands of users. Some prominent examples of cloud computing services for public EO data (the Landsat archive, Copernicus Sentinel 1, 2, 3 archives, ERA5 etc.) include: Google Earth Engine (>20 PB), Microsoft Planetary Computer and Amazon Landsat / Sentinel AWS; in Europe there are also EODC, WEKEO, Creodias, EuroDataCube, Terrascope and similar. Most of these services provide access to the “raw” or original data provided by ESA / NASA, then profit from users using their computing infrastructures (paying per month, per hour of computing or per service) to build value-added products on top of the raw data. This, unfortunately, leads to many groups unknowingly running more or less the same analysis over and over again, so that the industry can profit N times, even where the clients are from the same organization. Is it efficient, is it even moral, to have multiple groups pay for exactly the same processing to produce data used for land restoration and nature conservation projects? This seems to be one of the biggest challenges of the modern commercial sector and was also quietly discussed among the workshop participants: should the EO industry also be blamed for high and inefficient computing and energy consumption and for profiting from land conservation projects? Are big cloud services part of the problem or part of the solution to climate change and land degradation? On the other hand, if the EO industry served only what it considers analysis-ready images (the highest readiness level), this would limit the reproducibility of workflows and could also propagate biases: every gap-filling or aggregation method eventually carries some bias.

In summary: although we do need access to raw EO data so we can always check, improve and reproduce all results, having many groups recompute basically the same gap-filling, cloud removal, harmonization or similar is inefficient and eventually brings no long-term gains. Commercial EO companies should aim to serve a diversity of products at different Technological Readiness Levels, from the raw original data they receive from ESA/NASA to fully harmonized, complete, consistent, analysis-ready data.

#7: Increase in forest harvesting in Europe after 2015: a controversial topic but a good lesson for EO applications

In 2020, Ceccherini et al. published a paper in Nature titled Abrupt increase in harvested forest area over Europe after 2015. Using the Global Forest Change (GFC) maps from Hansen et al. (2013), Ceccherini et al. prepared remote-sensing-based estimates of harvested forest area over Europe, to compare with the national statistics on harvest removals officially provided by the individual countries. The analysis revealed large inconsistencies between remote-sensing estimates and national statistics, particularly after 2015 and for Nordic and Baltic countries, with an increase in harvested area over the continent of about 49%.

This claim started a long discussion in the media and in scientific journals; we report here only the major points. A first comment was published in 2021 by Palahi et al., titled Concerns about reported harvests in European forests. On top of pointing out minor inconsistencies or unclear points of the study (e.g. the inclusion or exclusion of forest disturbances from logged areas), they suggested that the flaw of the Ceccherini et al. (2020) analysis was that its estimates did not take into account the major enhancement of the GFC algorithm in 2015. In summary, Palahi et al. argued in their comment that the abrupt changes were largely artifacts stemming from a temporal inconsistency in the retrieval algorithms used to generate the underlying time series of tree cover. This type of error could have a major impact on policies.

Alessandro Cescatti: the detected increase in forest harvesting in Europe is in fact real and not just an artifact of the underlying retrieval. This study is an example of significant results that generated controversy because the underlying input data and associated preprocessing algorithms were not fully transparent. Only after three years did an independent evaluation confirm that the observed trends and patterns were real.

In their response to Palahi et al., Ceccherini et al. (2021) stressed that the change in the algorithm in 2015 was not documented and the code itself is not available, making it impossible for users of the tree cover data to check the temporal consistency of the time series. In parallel, Ceccherini et al. ran a calibration exercise with sample plots, which led to a reduction of the estimates but still confirmed the large increase in harvested area in recent years. In 2022, Breidenbach et al. commented with another study, titled Harvested area did not increase abruptly — how advancements in satellite-based mapping led to erroneous conclusions. In this new study, using more than 120,000 National Forest Inventory (NFI) points scattered across Finland and Sweden, Breidenbach et al. stressed that what changed in 2015 was not the harvested area but only the map's ability to detect harvested areas. In the following reply, Ceccherini et al. (2022) discussed the potentials and limitations of NFI plot data for wood harvest estimation and concluded that ground truth and remotely sensed data need to be combined to achieve robust estimates. Unfortunately, ground data from NFI inventories are typically not shared with the accurate plot coordinates that would be needed to fuse the ground observations with the satellite observations.

While the discussion had lain dormant for a while, a recently published study in Remote Sensing of Environment by Turubanova et al. (2023), titled Tree canopy extent and height change in Europe, 2001–2021, quantified using Landsat data archive, reports that after 2016 the tree canopy extent in Europe indeed declined, with the highest reduction observed in Fennoscandia (3.5% net decrease). Turubanova et al. (2023) show that the recent decline in tree cover is due to a sharp increase in harvest (see Fig. 7 in Turubanova et al., 2023) of a similar magnitude, spatial pattern and timing as that reported by Ceccherini et al. three years before.

The bottom line, also emphasized by Alessandro Cescatti, is that the devil is not in the EO data but in our code: we need more transparent and shared data and code to produce robust and less controversial assessments.

#8: Green technology and entrepreneurship meets spatial intelligence: ESA’s GTIF is being scaled up

Patrick Griffiths from ESA presented how EO data can be used to support green transition projects at the national level (Austria). These are very concrete tools designed to support national agencies and citizens, where the data are as detailed and as accurate as possible: planning locations for solar panels, wind turbines, reforestation projects and similar. The Green Transition Information Factory (GTIF) is a key component of the Space for Green Future (S4GF) Accelerator and the wider ESA strategy to address the Green Transition (GT) and the required transformation of economy and society towards a sustainable and carbon-neutral future.

Patrick Griffiths: “The ESA Green Transition Information Factories (GTIF)”.

ESA has opened a tender for several more countries to build similar systems, so that eventually every EU country should have a compatible system that can directly support the EU Green Deal and the transition of citizens, organizations and industry to low-to-zero carbon emissions, optimized energy efficiency and nature conservation. It is an exciting development and we should all do everything in our power to speed up this process, as the time to transition is now!

#9: Modeling carbon fluxes is dependent on spatial and temporal support and often requires non-linear models

In his talk on GPP (Gross Primary Productivity) estimation for greenhouse gas accounting, Gregory Duveiller from the Max Planck Institute for Biogeochemistry described the challenges in improving the estimates of the land sink associated with the variation of gross primary productivity (GPP), which is one of the most uncertain parts of the Earth's carbon budget. His work supports the RECCAP-2 initiative within the Global Carbon Project, which aims to establish the greenhouse gas (GHG) budgets of large regions and which needs, for that purpose, a proxy for biogenic CO2 fluxes from the land. While measuring photosynthesis directly from space is not possible, a promising proxy comes in the form of Sun-induced Chlorophyll Fluorescence (SIF), a signal emitted by vegetation that is related to GPP and that can be retrieved from remote sensing. However, this signal is very weak, it comes from very coarse pixels, and its relationship with GPP is not simple: it is non-linear and changes with stress, making it complex to use reliably as a proxy for vegetation productivity.

Gregory Duveiller: Towards a better estimation of GPP for greenhouse gas accounting.

Within the OEMC project, Duveiller and his team at MPI are developing an EO spatial downscaling framework to improve carbon flux estimations using SIF. It combines process-based models with knowledge-guided AI, with the objective of developing SIF-based GPP flux estimates at 1 km spatial resolution from measurements of the TROPOMI sensor on Sentinel-5P (spatial resolution of about 5 km, which is too coarse for many applications). The MPI group recommends, in general, including machine learning as part of Earth System Science modeling through so-called “hybrid models” (Reichstein et al. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195–204). So in summary: modeling carbon fluxes is complex, but EO data such as the TROPOMI measurements from Sentinel-5P can be (carefully) downscaled and combined with other EO data and derivatives to increase our capacity to evaluate land carbon fluxes.
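To illustrate just the generic statistical downscaling idea (and emphatically not the MPI knowledge-guided hybrid framework itself), here is a hedged sketch on synthetic data: a model is fitted at the coarse resolution linking SIF to covariates that are also available at 1 km, and is then applied at the finer resolution.

```python
# Purely illustrative downscaling sketch on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data at coarse (~5 km) resolution: covariates could be
# land cover fractions, temperature, vegetation indices etc. aggregated to 5 km.
X_coarse = rng.normal(size=(2000, 4))
sif_coarse = 0.8 * X_coarse[:, 0] - 0.3 * X_coarse[:, 1] ** 2 + rng.normal(0, 0.05, 2000)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_coarse, sif_coarse)

# The same covariates are available at 1 km, so the fitted model can predict
# a fine-resolution SIF (or SIF-based GPP proxy) field.
X_fine = rng.normal(size=(25_000, 4))
sif_fine = model.predict(X_fine)
print(sif_fine.shape)
```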

#10: There are still quite some challenges and complexities to be addressed before we have a true digital twin for the terrestrial water cycle

There is a strong need to develop advanced tools and systems to address the major challenges facing our society, including global change and the increasing occurrence of extreme events. Stakeholders and policy-makers involved in flood risk reduction and water resources management need data at high temporal (< daily) and spatial (< 1 km) resolution to make decisions that address these objectives. This requires: (1) high-resolution observations (satellite, in situ, drones, citizen science) and model simulations, and (2) improvements in the way models and observations are integrated, including the use of machine learning techniques. In this context, Luca Brocca from the National Research Council of Italy presented the Digital Twin Earth (DTE) Hydrology activities during the Global Workshop in his talk “Building a Digital Twin Earth for the Water Cycle: State of the Art and Challenges”, highlighting 3 important challenges to be addressed in future activities:

  1. Correctly representing spatial-temporal resolution vs sampling,
  2. Distinguishing between consistency vs independence of EO and model data, and
  3. Incorporating complexity of physical and human processes occurring at high-resolution.

In summary, the talk highlighted that modeling the terrestrial water cycle at high resolution is a very challenging task, and that we should be honest and clear about what can be achieved and, more importantly, what cannot be achieved (at least in the near future).

Luca Brocca: “Building a Digital Twin Earth for the Water Cycle: State of the Art and Challenges”, final slide highlighting the 3 challenges to be addressed.
The final slide of Luca Brocca’s presentation at ESA’s Hydrospace conference including the 2 additional challenges.

Interestingly, the discussion with colleagues during the workshop has highlighted 2 additional important challenges that need to be addressed:

  1. Choosing between a modular vs a centralized modeling approach, and
  2. Choosing the right data infrastructure and front-end to maximize usability.

First steps toward establishing the back-end/front-end have recently been carried out: enabling fast access to the hydrology data cube of the DTE Hydrology project (ESDL platform), developing the DTE Hydrology platform with specific tools built around the project results, and providing easy access to what-if scenarios for flood risk assessment and water resources management.

In the recent Nature news article “DeepMind AI accurately forecasts weather — on a desktop computer”, the Google DeepMind team showed that sufficiently accurate weather forecasting can be run on a desktop computer or even a laptop. It appears that AI / deep learning algorithms can significantly simplify complex weather forecasting. But not everything can be reduced in complexity just by using deep learning: breaking the problem up and solving it component by component, versus throwing more data into machine learning, can also be rewarding.

#11: Federated system philosophy and infrastructures help reduce fragmentation and unnecessary competition

Mastodon is an open-source social network where different groups run their own installations that seamlessly talk to each other. From a user perspective we see Mastodon as one thing; behind the scenes there are many back-ends and also customized front-ends. The reason all users see only one system is that all back-ends are fully interoperable: one can always search content across all Mastodon servers (https://mastodonservers.net/) and add one's own Mastodon server (as long as some minimum community rules are followed). This is called a federated, self-hosted system (or Fediverse) philosophy and comes with many interesting benefits: especially having a system that allows everyone to join and improve it, and which is also much easier to crowd-fund.

Benjamin Schumacher (EODC) reviewed the issues connected with setting up (first a roadmap, then an implementation of) the so-called “Green Deal Data Space” (GDDS). Most importantly, there is now a foundation called the “Green Deal Data Space Foundation” (the Horizon Europe project “GREAT”), and it will be producing various solutions under four key pillars: (1) architecture, (2) datasets, (3) governance and (4) community. But what to expect in the near future? How will the GDDS compare and work with the GTIF and the Copernicus programme? And how do these EU initiatives compare to / connect with the initiatives and programmes on other continents (USA, Japan, China etc.), for example EarthCube.org and Pangeo?

Benjamin Schumacher / Christian Briese: European platforms to support the EU Green Deal.

Some of the key changes EODC seems to be promoting vs some past EO processing systems include:

  1. Better integration through a so-called “federated approach” (a STAC registry based on the same metadata and the same standards, but a community of independent organizations and back-end solutions; see the sketch after this list) vs fragmentation, redundancy and lacking coordination among EO platform providers;
  2. Dynamic resource allocation and scaling vs the old Virtual Machine model;
  3. Pixel-level flexibility and cloud-native data structures (COGs) vs file-based storage and data access;
  4. Appealing business models with long-term perspectives.
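From the user side, the appeal of such a federated, STAC-based setup is that the same few lines of client code work against any compliant catalogue. The hedged sketch below queries one public STAC API (Earth Search on AWS, used here only as an example endpoint, not an EODC service) for relatively cloud-free Sentinel-2 scenes served as COGs.

```python
# Hedged sketch: the same STAC client code works against any compliant catalogue.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[11.2, 46.4, 11.5, 46.6],          # roughly around Bolzano
    datetime="2023-06-01/2023-06-30",
    query={"eo:cloud_cover": {"lt": 20}},
)
items = list(search.items())
print(len(items), "scenes found; their assets are served as COGs")
```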

The GREAT project will be running for the next 2–3 years and promoting similar values. We hope that the project name will also be its destiny (no pressure).

#12: Copernicus Data Space Ecosystem — get used to it, as it will be your main access point to Copernicus data for years to come

The Copernicus Data Space Ecosystem (CDSE) is a new infrastructure funded by ESA and the European Commission under the Copernicus programme. It has been announced as the “central place for public EO data gathering the leading European cloud and earth observation service providers: CloudFerro, Vito, DLR, Sinergise…”. It comes with a (Copernicus) browser and a well-documented API, plus integrated openEO, JupyterLab and similar.
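As a hedged taste of what the integrated openEO access looks like, the sketch below connects to the CDSE openEO endpoint and builds a small NDVI cube; the collection and band names should be checked against the live CDSE catalogue before relying on them.

```python
# Hedged sketch of accessing CDSE through its openEO endpoint.
import openeo

connection = openeo.connect("https://openeo.dataspace.copernicus.eu")
connection.authenticate_oidc()  # interactive login (browser or device flow)

cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 11.2, "south": 46.4, "east": 11.5, "north": 46.6},
    temporal_extent=["2023-06-01", "2023-06-30"],
    bands=["B04", "B08"],
)
ndvi = cube.ndvi(nir="B08", red="B04")
# ndvi.download("ndvi_bolzano_june2023.nc")  # triggers processing on CDSE
```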

Jędrzej Bojanowski: Copernicus data space ecosystem.

The CDSE business model combines a free tier based on Jupyter Hubs for general use with user-focused commercial services. The main difference between CDSE and other cloud providers such as Google Earth Engine and Microsoft Planetary Computer is its built-in strategy of allowing value-added service companies to use the platform. Thus, it is expected that CDSE will encourage European SMEs to work on rapid innovation cycles and thereby make a relevant difference to the user community.

Although there are still many uncertainties and questions about what on CDSE is free of charge, what comes at a cost, and what you can and cannot implement, have no doubt that this is among the most ambitious Earth Observation hosting and processing projects funded by the European Commission.

In the meantime, Radeloff et al. (2023) (the Landsat science team) have proposed 13 essential and many more desirable / aspirational products using medium-resolution imagery, referred to as “Medium-resolution satellite image-based products that meet the identified information needs for sustainable management, societal benefits, and global change challenges”. The desirable products include: maps of crop types, irrigated fields, land abandonment, forest loss agents, LAI/FAPAR, green vegetation cover fraction, emissivity, ice sheet velocity, surface water quality and evaporative stress. The aspirational land monitoring products include: forest types and tree species, urban structure, forest recovery, crop yields, forest biomass, habitat heterogeneity and winter habitat indices, net radiation, snow and ice sheet surface melt, ice sheet and glacier melt ponds, sea ice motion, and evaporation and transpiration. All of these can probably already be mapped and assessed using the existing public EO data sources (Landsat, Copernicus Sentinel and similar), but we need to work hard (jointly with user communities) to develop the best modeling and analysis path leading to the highest accuracy, reliability and usability of future products.

The next Open-Earth-Monitor Global Workshop 2024 will be hosted at the International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria in the period 30 September to 4 October 2024. To subscribe for updates see below.

The Open-Earth-Monitor Cyberinfrastructure project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement №101059548. To subscribe to regular updates in connection to the Open-Earth-Monitor project please use our Mastodon channel (preferred) at https://fosstodon.org/@opengeohub and/or subscribe to our internal newsletter via: https://earthmonitor.org/contact/.
