Team meeting in Montreal

Lindsey Heagy
Published in simpeg
Jul 22, 2019

July 17, 2019

SimPEG team in Montreal!

Last week, the International Union of Geodesy and Geophysics conference was held in Montreal and the SimPEG team took the opportunity to hold a 1-day meeting for those of us attending. Six of us (Dom Fournier, Doug Oldenburg, Joe Capriotti, Seogi Kang, Thibaut Astic and myself) got together to discuss a roadmap for working towards SimPEG 1.0! We broadly covered 4 topics:

  • Simulation refactor: Refactoring the Problem and Survey classes in SimPEG to a Simulation class and a lighter-weight Survey class
  • Sources and receivers: Thinking through the structure of the source and receiver classes to reduce redundancy of the storage of projection matrices and improve integration between the natural source and frequency domain electromagnetic codes
  • Parallelization and efficiency: parallelizing the inversion using decomposition approaches, parallelizing forward simulations, identifying places to contribute to the scientific python stack
  • Goals for SimPEG 1.0: development items, refactoring, and community practices to take SimPEG to a 1.0 release 🚀

Simulation Refactor

The first topic of the day was the ongoing simulation refactor within SimPEG (under construction here: #786, original issue here: #562). In a nutshell, the structure of the current Problem and Survey classes is challenging to explain, does not easily allow other forward modelling codes to be plugged into SimPEG, and cannot be serialized, meaning it is not easy to parallelize the current codes. These symptoms indicate that we need to re-think the structure.

The highlights…

The new Survey will be much more lightweight. As before, it will contain a source_list, where each source has a receiver_list of receivers that are listening to it (but now with PEP8 names!), and some utilities, for example the ability to fetch sources by frequency. The Survey previously had the dpred method, meaning it would compute data. This is no longer the case: the Simulation will now do all of the heavy lifting here. The Survey also used to keep track of dobs (the observed data), as well as data uncertainties. These will now be handled by the Data class, which will take a survey as an input and store it as a property. Similarly, a survey will be an input to the Simulation class, rather than having to pair the two (pairing was a source of problems for serializing the classes).

This also propagates changes to the data_misfit module. Previously, a data misfit would have been instantiated with only a survey (which both stored dobs and could compute dpred); now, the data misfit will be instantiated with data and a simulation. For example, in the current framework, setting up a data-misfit term for a frequency domain electromagnetic simulation looks something like…

# set up a simulation
survey = FDEM.Survey(source_list)
problem = FDEM.Problem3D_e(mesh, sigmaMap=mapping)
problem.pair(survey)
# compute predicted data
dobs = survey.dpred(model)
# set the data and uncertainties
survey.dobs = dobs
survey.std = 0.05
# create a data misfit
dmis = DataMisfit.l2_DataMisfit(survey)

and now…

# set up a simulation
survey = fdem.Survey(source_list)
simulation = fdem.Simulation3DE(
    mesh, sigma_map=mapping, survey=survey
)
# compute predicted data
dobs = simulation.dpred(model)
# set the data and uncertainties
data = Data(dobs=dobs, standard_deviation=0.05)
# create a data misfit
dmis = data_misfit.L2DataMisfit(data=data, simulation=simulation)

Interfacing to other simulations

This should simplify our ability to interface to other forward simulation packages (e.g. empymod!). To invert data with SimPEG using your own forward simulation, you just need a lightweight wrapper class that includes:

  • dpred : a method for computing predicted data
  • Jvec : a method for multiplying the sensitivity times a vector
  • Jtvec : a method for multiplying the adjoint-sensitivity by a vector

Note that the sensitivity calculations could be derived analytically or estimated using a finite-difference approach. Watch for an example with empymod in the near future!
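As a rough sketch of what such a wrapper could look like (the class name, the forward callable, and the finite-difference sensitivities here are all illustrative, not SimPEG's actual API):

```python
import numpy as np


class WrappedSimulation:
    """Illustrative wrapper around an external forward simulation.

    `forward` is any callable mapping a model vector to a data vector.
    The sensitivity products are estimated with first-order finite
    differences, as mentioned above.
    """

    def __init__(self, forward, step=1e-6):
        self.forward = forward
        self.step = step

    def dpred(self, m):
        # predicted data for model m
        return self.forward(m)

    def Jvec(self, m, v):
        # directional derivative: J @ v ≈ (f(m + h*v) - f(m)) / h
        h = self.step
        return (self.forward(m + h * v) - self.forward(m)) / h

    def Jtvec(self, m, v):
        # adjoint product J.T @ v, here via an explicit (dense) Jacobian;
        # only reasonable for small illustrative problems
        h = self.step
        f0 = self.forward(m)
        J = np.column_stack([
            (self.forward(m + h * e) - f0) / h for e in np.eye(len(m))
        ])
        return J.T @ v
```

For a linear forward operator, the finite-difference products agree with the analytic sensitivities, which makes this easy to sanity-check before plugging a real simulation in.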

The updated framework

On the board, we sketched out what the updated framework looks like with the simulation refactor (the right half of the image). A couple of things to note:

  • data is now explicitly included in the framework
  • data_misfit now takes a simulation and a data object

We also spent some time talking about how we might remove the InverseProblem class and simply provide a statement of the objective function to the Inversion directly (this is the diagram on the left). This would be a separate pull request from the simulation but is on the roadmap for SimPEG 1.0!

Survey Class: Sources and Receivers

We topped up our coffees and moved on to the next discussion: the structure of sources and receivers. In particular, we are running into issues with duplication of information. For example, right now in a frequency domain EM simulation, if you want to measure the real and imaginary components of the x-oriented magnetic field (bx), the real and imaginary components would be two different receivers, let’s call them bxr and bxi.

bxr = FDEM.Rx.Point_b(locs, orientation='x', component='re')
bxi = FDEM.Rx.Point_b(locs, orientation='x', component='im')

What currently happens is that each of bxr and bxi creates and stores its own projection matrix which interpolates the magnetic flux computed on the simulation grid to the receiver locations. If bxr and bxi are at the same locations, then we have duplicated our efforts. This can start to become costly…

Similarly, for the integral-equation potential fields codes, the system matrix is constructed based on the unique receiver locations, so we need the ability to easily fetch all of the unique receiver locations (without trying to do a step where we compare numpy arrays of floats!)

Sketch of the base receiver classes

Instead of each receiver having only a single component, what if we considered a receiver instance to be something that shares a single projection matrix across its components? For example:

bx = fdem.receiver.PointB(
    locs, orientation='x', components=['re', 'im']
)

To compute these data, we need a projection matrix that interpolates from the x-faces of the mesh to the locations (locs), and the same matrix can be used for the real and imaginary components. More complex data, such as magnetic flux at an arbitrary orientation, could be thought of as coming from a composite receiver which contains bx, by, and bz receivers. Each of those can interpolate values from the x, y, and z grids, respectively, and then the composite receiver sums them up with the appropriate weights based on the orientation of the receiver.
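As a toy numpy sketch of that composite-receiver idea (the random matrices below stand in for the actual mesh interpolation matrices, and all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n_faces, n_locs = 20, 4          # toy sizes: values per grid, receiver locations
w = np.array([0.0, 0.6, 0.8])    # orientation weights (unit vector) of the receiver

# one interpolation matrix per grid (x-, y-, z-faces); each would normally be
# built once per unique set of receiver locations and shared across components
P = [rng.random((n_locs, n_faces)) for _ in range(3)]

# field values on each grid (e.g. the real part of b on x-, y-, z-faces)
f = [rng.random(n_faces) for _ in range(3)]

# composite receiver: interpolate each component, then combine with the weights
d = sum(wi * (Pi @ fi) for wi, Pi, fi in zip(w, P, f))
```

The same three interpolation matrices can be reused for the imaginary components, which is exactly the duplication this refactor removes.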

A similar thought process applies for DC resistivity. For 3D surveys of the kind being conducted by companies such as Dias, an array of pole receivers is laid out, and dipole measurements are made by defining pairs of electrodes from this set and subtracting the measured pole values. This means that we can construct a single projection matrix to go from cell-centers (or nodes, depending on the discretization) to the pole measurements, and then simply track which values need to be subtracted from each other to create the dipole data. The whiteboard sketch below gives an idea of how a Dipole receiver would be implemented.
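A minimal sketch of that pole-to-dipole bookkeeping, assuming the pole potentials have already been computed via a single shared projection matrix (the numbers and electrode pairs below are made up for illustration):

```python
import numpy as np
import scipy.sparse as sp

# toy setup: 6 pole electrodes with known potentials
n_poles = 6
phi_poles = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1])

# dipoles defined as (M, N) electrode index pairs drawn from the pole set
pairs = [(0, 1), (2, 4), (3, 5)]

# sparse differencing matrix D so that d = D @ phi_poles gives dipole data;
# in SimPEG this would sit on top of the shared pole projection matrix
rows = np.repeat(np.arange(len(pairs)), 2)
cols = np.array(pairs).ravel()
vals = np.tile([1.0, -1.0], len(pairs))
D = sp.csr_matrix((vals, (rows, cols)), shape=(len(pairs), n_poles))

d_dipole = D @ phi_poles  # first entry is phi_poles[0] - phi_poles[1]
```

Because D is sparse and cheap, adding or re-pairing dipoles never requires rebuilding the (expensive) pole interpolation.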

To make the extension to magnetotelluric data, we can again use the idea of a composite receiver, but instead of just listening to a single field (e.g. magnetic flux b), the transfer function receiver would take multiple fields (electric and magnetic) and compute the appropriate impedances.

To improve the integration between the frequency domain EM codes and the natural source EM codes, we can also adapt the idea of composite receivers to sources, so that a general plane-wave source is simply a composite of two plane-wave sources with orthogonal directions.

To simplify things for the user, we will need to have a utility or higher-level API, but this gives the general idea at the base. Watch for issues on SimPEG that flesh out these ideas more soon!

Scaling SimPEG: Parallelization and Efficiency

After breaking for lunch, we came back and started discussing parallelization and efficiency improvements we can make within SimPEG. With respect to parallelization, there are two avenues we will work on pursuing: (1) parallelization at the level of the objective function, and (2) parallelization within a forward simulation. Avenue (1) is appropriate for mesh-decomposition approaches, such as that being pursued by Dom in the potential fields codes. The inversion model lives on a global mesh, and the forward simulation is broken up into multiple forward simulations, so that for each sub-problem only one region of the mesh is refined and the rest is coarsened. These simulations are run independently, and each has its own associated data misfit term, so the parallelization occurs at the level of the objective function. A similar approach is highly appropriate for airborne time-domain electromagnetics. For (2), problems like frequency domain electromagnetics are embarrassingly parallel, so we just need to implement it! We will use magnetics and magnetotellurics as our use-cases for driving the initial implementation with dask.
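The per-frequency pattern in (2) can be sketched with Python's standard-library concurrent.futures (the eventual SimPEG implementation is planned around dask; the solve below is just a placeholder for a single-frequency forward simulation):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

frequencies = [1.0, 10.0, 100.0]
model = np.linspace(0.01, 0.1, 5)

def simulate_frequency(freq):
    # stand-in for a single-frequency forward solve; each frequency's
    # linear system is independent of the others, so the solves can run
    # concurrently ("embarrassingly parallel")
    return np.exp(-model * freq)  # placeholder "data" for this frequency

# one task per frequency, executed concurrently
with ThreadPoolExecutor() as pool:
    data = list(pool.map(simulate_frequency, frequencies))

# the total predicted data is the concatenation over frequencies
dpred = np.concatenate(data)
```

With dask, the same structure would use delayed tasks per frequency, which also opens the door to distributing the solves across a cluster.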

In particular for the mesh decomposition approaches, there will be lots of details to sort out with respect to how to best optimize mesh sizes, and how to appropriately chunk and store sensitivities in the integral-equation implementations for gravity and magnetics. If you get excited about these problems — get in touch!

We also discussed some general areas for efficiency improvements, including experimenting with different solvers and identifying areas lower in the scientific python stack where we might be able to contribute improvements. Benchmarking and profiling are on the radar, but these will be performed once we have completed the major refactors within SimPEG.

SimPEG 1.0 Goals

The final topic of the day was to outline a roadmap for SimPEG 1.0. We discussed technical aspects including: the simulation refactor discussed above, removing support for Python 2.7!, general housekeeping (the utils need some 💚), serialization of inversion components (optimization, inversion and directives), and importing simpeg rather than SimPEG.

We also discussed modes of interacting with SimPEG. Right now, we collaborate with many users by providing scripts which drive the simulation or inversion, with input variables provided by the user in an input file (an input file to a simulation or an inversion is the common mode of operation for most inversion-code users). The challenge is that any time we upgrade the code, the script generally needs to be updated, so there is a need to formalize this process a bit. This led us to discuss the idea of “wrappers” or “drivers”: a higher-level API that provides sensible defaults for many of the forward simulations and inversions (e.g. generate a mesh based on the survey, set defaults for the weights in the regularization, …). It would also be nice if there were a command-line interface to SimPEG, e.g.

> simpeg simulate my_grav_simulation.yml
> simpeg invert my_grav_inversion.yml

Another topic of discussion was how best to interface with the community, provide support, answer questions, and communicate updates. Right now, Slack and Google Groups are our main modes of operation. The plan is to stick with Slack for development conversations and try using Discourse (e.g. see the Jupyter discourse) for community conversations, questions, etc. We would appreciate any feedback you have on how you would like to engage with members of the SimPEG community!

In the coming weeks, we will be pushing forward on the simulation refactor and summarizing many of our discussion points in issues on the SimPEG repository. Watch for new issues, a milestone, and join the development — there is lots to do ⚒!

Thanks!

Thanks to the UBC Geophysical Inversion Facility, the Center for Gravity, Electrical & Magnetic Studies (CGEM) at Colorado School of Mines, Environmental Geophysics at Stanford, and Fernando Pérez’s lab at UC Berkeley for travel and conference funding.

Thanks to Plotly (@plotlygraphs) for helping arrange a beautiful conference room space for us at the Anomaly co-working space in Montreal!
