NEWA Meso-Micro Challenge Phase 2: Complex Terrain

Test your wind assessment methodologies at the NEWA experimental sites in complex terrain: Rödeser Berg, Hornamossen, Alaiz and Perdigão.

Javier Sanz Rodrigo
The Wind Vane
7 min read · Oct 16, 2018


[Figure: Visualization of velocity contours for the flow approaching the Alaiz site from the north. Simulation by Roberto Chávez Arroyo with CFDWind.]
[Embedded presentation of the NEWA Meso-Micro Challenge Phase 2, based on the forthcoming ALEX17 benchmarks.]

Scope

The second phase of the “NEWA Meso-Micro Challenge for Wind Resource Assessment” will extend the assessment to heterogeneous terrain sites participating in the NEWA experimental campaigns, notably:

  • Rödeser Berg, a relatively isolated forested hill
  • Hornamossen, moderately complex forested terrain
  • Perdigão, a double hill covered with short canopy
  • Alaiz, a hill-valley-mountain system

For all of these sites, data from a reference mast is provided, as in conventional wind resource assessment campaigns, to calibrate microscale models. Then, the assessment will be based on cross-validation, i.e. predicting the other measurement sites in each experiment (the target sites).

As in the first phase, the results of this benchmark can remain anonymous if desired, both in terms of the participant’s name and the model’s name. Of course, it is expected that a subset of modelers will decide to publish their results, with detailed intercomparisons, in conferences and scientific journals.

Data Accessibility

Data from the experiments will be made available through the NEWA data server, hosted by DTU, by the end of the project in April 2019. Observational input data will become available over the coming months, as they are cleared for public release. In the meantime, blind benchmarks on specific flow cases are also being launched to evaluate model performance before wind resource assessment methodologies are applied.

Objectives

The objectives of the Meso-Micro challenge apply in this second phase as follows:

  • Test meso-micro methodologies consistently for a range of wind climates and terrain conditions, and map accuracy vs cost for relevant quantities of interest, notably annual energy production (AEP) and other siting parameters (turbulence intensity, wind shear, etc.).
  • Establish open-access practices for the assessment of these methodologies that can determine the added value of meso-micro with respect to conventional methods based on microscale modeling only.
  • Discuss methodologies for the uncertainty estimation of gross AEP and assess their suitability for the wind resource assessment process used by industry.

Input data

Observations

One year of observations from the reference mast at each site is provided in NetCDF format; a one-year reference period is used in order to cover seasonal wind variability. For data-quality reasons, and to facilitate the participation of high-fidelity models, the assessment will focus on prevailing wind direction sectors free of wake effects and mast distortion, although results for the other sectors can also be included for completeness.

All of these reference input data are provided within a dedicated document for each site.

Elevation and Land Cover

High-resolution maps of elevation and canopy characteristics from aerial lidar scans are provided in NetCDF format for each site.
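
As a quick illustration of how these maps might be inspected, here is a minimal sketch using xarray and matplotlib; the file and variable names are hypothetical, since the actual naming is defined in the distributed files.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Hypothetical file and variable names; the distributed NetCDF files define
# the actual naming convention for each site.
terrain = xr.open_dataset("perdigao_elevation_canopy.nc")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
terrain["elevation"].plot(ax=ax1)      # terrain elevation above sea level
terrain["canopy_height"].plot(ax=ax2)  # canopy height from the aerial lidar scans
plt.tight_layout()
plt.show()
```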

Mesoscale Forcing

If you only plan to run microscale simulations driven by mesoscale input data, please use the mesoscale forcing provided, for consistency; it is based on the WRF configuration that has been used to generate the European Wind Atlas.

Mesoscale input forcing, in the form of mesoscale tendencies, is provided for all the sites following the same methodology as the GABLS3 benchmark case, described in [1].

A NetCDF file for mesoscale data is provided with the following information:

  • Site coordinates and Coriolis parameter (fc).
  • Time-height 2D arrays of velocity components (U,V,W) and potential temperature (Th).
  • Time-height 2D arrays of mesoscale forcings (tendencies): geostrophic wind (Ug, Vg), advective wind (Uadv, Vadv) and advective potential temperature (Thadv).
  • Time array of surface-layer quantities: friction velocity (ust), kinematic heat flux (wt), 2-m temperature (T2), skin temperature (TSK), surface pressure (Psfc).

Units, dimensions and variable descriptions are all provided in the NetCDF file. Momentum tendencies are given in [m s-1] and should be multiplied by the Coriolis parameter to obtain the corresponding forces in [m s-2]. For convenience, we have omitted information about humidity, since the dry-atmosphere assumption is typically adopted by wind energy flow models.
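
For orientation, here is a minimal sketch of how this file could be read with xarray, using the variable names listed above; the file name is hypothetical and the exact dimension layout is documented inside each NetCDF file.

```python
import xarray as xr

# Hypothetical file name; one file per site is distributed with the benchmark.
ds = xr.open_dataset("alaiz_mesoscale_tendencies.nc")

fc = ds["fc"].item()                    # Coriolis parameter [s-1]
U, V, Th = ds["U"], ds["V"], ds["Th"]   # time-height mesoscale profiles

# Momentum tendencies are given in [m s-1]; as noted above, multiply by the
# Coriolis parameter to obtain the corresponding forcing terms in [m s-2].
Ug_force, Vg_force = fc * ds["Ug"], fc * ds["Vg"]
Uadv_force, Vadv_force = fc * ds["Uadv"], fc * ds["Vadv"]
```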

Reference Power Curve

The NREL 5 MW reference power curve will be used to evaluate AEP [2]. This will be computed in the evaluation process based on the wind speed distributions.
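
As a sketch of how gross AEP follows from a binned wind speed distribution and a power curve: the power-curve points below are placeholders for illustration only, not the actual NREL 5 MW curve from [2].

```python
import numpy as np

# Placeholder power-curve points (speed [m/s], power [MW]) for illustration;
# the evaluation uses the NREL 5 MW reference power curve from [2].
pc_speed = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 25.0])
pc_power = np.array([0.1, 0.5, 1.5, 3.0, 5.0, 5.0])

def gross_aep_mwh(speed_bins, frequency, hours_per_year=8760.0):
    """Gross AEP [MWh/yr] from bin-centre wind speeds and their relative
    frequencies (which should sum to one)."""
    power = np.interp(speed_bins, pc_speed, pc_power, left=0.0, right=0.0)
    return hours_per_year * np.sum(frequency * power)
```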

Validation Data

Cross-validation will be carried out using the wind characteristics of the reference site for normalization purposes. The following quantities of interest will be evaluated:

  • Horizontal wind speed (S) relative to the reference site (S/S0).
  • Gross AEP based on the reference power curve.
  • Turbulence intensity based on the turbulent kinetic energy (tke).
  • Wind shear exponent (α).
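
The sketch below illustrates how these quantities of interest could be derived from model output; the TKE-based turbulence intensity uses one common convention and is only an assumption here.

```python
import numpy as np

def speedup(S, S0):
    """Horizontal wind speed relative to the reference site (S/S0)."""
    return S / S0

def turbulence_intensity(tke, S):
    """TI from turbulent kinetic energy, assuming sigma_u ~ sqrt(2/3 * tke);
    other conventions exist, so check the benchmark documents."""
    return np.sqrt(2.0 / 3.0 * tke) / S

def shear_exponent(S1, S2, z1, z2):
    """Power-law wind shear exponent (alpha) from speeds at two heights."""
    return np.log(S2 / S1) / np.log(z2 / z1)
```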

Validation results will be visualized using, for instance:

  • Distributions binned by wind direction and stability.
  • Scatter plots (observed vs predicted).
  • Vertical profiles and longitudinal transects.

Stability classification will be based on the z/L parameter, where L is the Obukhov length measured in the surface layer (at around 10 m) at the reference site. This is the general convention, but other methods based on the temperature gradient (Ri or Fr) could be tested as well. For guidance, stability classes are defined as follows:

  • Unstable (u): -0.6 < z/L < -0.2
  • Neutral (n): -0.02 < z/L < 0.02
  • Stable (s): 0.2 < z/L < 0.6
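
A direct translation of these classes into code might look as follows; note that z/L values outside the listed ranges are intentionally left unclassified in this sketch.

```python
def stability_class(z_over_L):
    """Return 'u', 'n' or 's' for the stability classes defined above,
    or None for z/L values outside those ranges."""
    if -0.6 < z_over_L < -0.2:
        return "u"   # unstable
    if -0.02 < z_over_L < 0.02:
        return "n"   # neutral
    if 0.2 < z_over_L < 0.6:
        return "s"   # stable
    return None
```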

Wind direction will be binned using 30° intervals (12 sectors). You can provide full 360° distributions or focus on the reference sectors defined in each benchmark.
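
For example, assuming sector 1 is centred on north (the benchmark documents define the actual convention), sector numbers could be assigned like this:

```python
import numpy as np

def direction_sector(wd_deg, width=30.0):
    """Sector number (1-12) for a wind direction in degrees, assuming
    sector 1 is centred on north (345-15 degrees)."""
    return int(np.floor(((wd_deg + width / 2.0) % 360.0) / width)) + 1
```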

Model Runs

Consistent with the philosophy of the challenge, each participant should develop a plan to span the accuracy vs cost figure. For instance:

  • A WRF modeler could run yearly simulations starting from the 3-nest configuration of the reference set-up (or a different one), add another 3 nests switching to LES down to resolutions of the order of 100 m, and provide 6 results, one from each nest, for the 3 sites.
  • A CFD modeler could vary the number of simulations, e.g. in terms of the resolutions or the boundary conditions included in the assessment.

Calibration

Adopting the end-user perspective, the simulations should consider how best to use onsite measurements to calibrate the model-chain to the reference mast. This is equivalent to the conventional micrositing practice, in the design phase, of ensuring that the self-prediction at the reference site is free of bias before extrapolating horizontally or vertically to the other target prediction sites.
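
One simple way to express this, shown only as an illustrative sketch and not as a prescribed method, is a global correction factor derived at the reference mast and applied to the target sites:

```python
def calibrated_prediction(S_model_target, S_model_ref, S_obs_ref):
    """Scale model wind speeds so that the self-prediction at the reference
    mast is bias-free, then apply the same factor at the target site.
    Per-sector or per-stability corrections are a natural refinement."""
    return S_model_target * (S_obs_ref / S_model_ref)
```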

Parameterization of RANS models

For consistency with the GABLS3 benchmark, microscale models using the Sogachev et al. (2012) k-ε turbulence model [3] shall use the following set of constants: κ = 0.4, Cε1 = 1.52, Cε2 = 1.833, σk = 2.95, σε = 2.95 and Cμ = 0.03.
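
For convenience, the same constants written as a small configuration snippet:

```python
# Constants prescribed above for the Sogachev et al. (2012) k-epsilon closure [3].
K_EPSILON_CONSTANTS = {
    "kappa": 0.4,      # von Karman constant
    "C_eps1": 1.52,
    "C_eps2": 1.833,
    "sigma_k": 2.95,
    "sigma_eps": 2.95,
    "C_mu": 0.03,
}
```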

Output Data

The ultimate goal is to evaluate annual quantities, as described in the validation section, derived by bin-averaging the model simulations over the relevant wind direction sectors and stability classes. To homogenize the output data, please follow these indications:

  • One NetCDF file per site and per simulation run including all the validation positions grouped by “profiles”.
  • Each profile is defined in terms of 3D tables: the first index refers to a point in the profile (identified by an ID number), the second index to the bin-averaged quantity, and the third index to the stability class (1 = ‘u’, 2 = ’n’, 3 = ‘s’). Hence, for each stability class, the following 2D tables will be generated:

where x, y, z are the coordinates of a sensor location in the microscale map (with z being the height above ground level), f is the frequency of the corresponding bin, and the subscripts denote the wind direction sector number.

  • “Profiles” are typically vertical profiles at target mast locations (fixed x and y, varying z), longitudinal transects (fixed z, varying x and y) or, simply, a point cloud associated with the target evaluation locations. In any case, you are expected to interpolate your results to the given x, y, z coordinates so that the dimensions of the tables are consistent across participants. A list of profiles is defined for each site and provided as part of the input data files.
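
To illustrate this structure, here is a minimal sketch using xarray with hypothetical dimension and variable names; the Python script mentioned below defines the required naming convention and should take precedence.

```python
import numpy as np
import xarray as xr

# Hypothetical names for illustration only; follow the provided script for
# the official variable and dimension naming.
n_points, n_quantities = 10, 8
stability_classes = ["u", "n", "s"]

profile = xr.Dataset(
    {
        "table": (
            ("point", "quantity", "stability"),
            np.full((n_points, n_quantities, len(stability_classes)), np.nan),
        )
    },
    coords={
        "point": np.arange(1, n_points + 1),   # point ID within the profile
        "stability": stability_classes,        # 1 = 'u', 2 = 'n', 3 = 's'
    },
)
profile.to_netcdf("site_run_profiles_example.nc")
```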

A Python script is provided to help you write your data in the correct NetCDF format. Please respect the naming convention for variables to allow automatic post-processing.

Together with the output data, you should provide a summary of your model set-up and of the methodology used to extract the bin-averaged output quantities.

Remarks

You may also want to read about the previous and ongoing benchmarking activities related to this challenge.

References

[1] Sanz Rodrigo J, Churchfield M, Kosović B (2017) A methodology for the design and testing of atmospheric boundary layer models for wind energy applications. Wind Energ. Sci. 2: 1–20, doi:10.5194/wes-2-1-2017

[2] Jonkman J, Butterfield S, Musial W, Scott G (2009) Definition of a 5-MW Reference Wind Turbine for Offshore System Development. Technical Report NREL/TP-500-38060, February 2009, available online

[3] Sogachev A, Kelly M, Leclerc MY (2012) Consistent Two-Equation Closure Modelling for Atmospheric Research: Buoyancy and Vegetation Implementations. Bound.-Lay. Meteorol. 145: 307–327, doi:10.1007/s10546-012-9726-5

Timeline

  • First results for Rödeser Berg, together with a publication of the evaluation methodology, are planned to be presented at the WindEurope 2019 conference in April 2019. Therefore, you are expected to submit your results by the end of January.
  • The other sites will be added in 2019, as the blind benchmarks for flow models are concluded. This activity will continue through 2019–20 as part of IEA-Wind Task 31 Phase 3, with the objective of performing a multi-site evaluation, i.e. combining results from various sites for gap analysis and uncertainty quantification.

Acknowledgements

This benchmark was launched with support from NEWA (FP7-ENERGY.2013.10.1.2, European Commission’s grant agreement number 618122). The benchmark is also part of the International Energy Agency IEA-Wind Task 31 “Wakebench”.
