Seismic Processing: The Earth-Shattering Truth

Altynanke
Data Analysis Center
Oct 23, 2023


Good seismic processing is like a four-leaf clover: hard to find and lucky to have. Everybody involved in oil and gas knows that the processing stage is weighty and complex, yet few really get the picture. Our team covers several of the main processing steps end to end, and today we will show you the ropes. So let’s get down to it!

Why do we need seismic processing?

The processing stage lies between seismic data acquisition and interpretation, as raw acquired data is unsuited for interpretation analysis. It usually requires a lot of meticulous labor and can drag on for months. Thankfully, some of the stages can already be enhanced with ML, others are bound to remain algorithmic, and the rest are yet to be studied.

Let’s start with the data description and our main goal. At the processing stage, we work with a pre-stack seismic cube; it is like a post-stack cube, just with quite a few more traces in it.

Every pre-stack seismic cube is unique not only in location but also in survey design, which is in the hands of the seismic acquisition team. In short, the objective of seismic acquisition is to emit and record waves at various locations around each inspected point to get complete information about it. As a result, far more than one trace is recorded to describe each point of the survey. Seismic waves are generated by sources (also called shots) and recorded by receivers that vary in type, spacing, quantity, sampling rate, record length, etc. The two main source types on land are explosives and vibrators; for marine surveys, it’s air guns. Positions of sources and receivers are also a big deal: usually they are laid out in lines, with source lines perpendicular to receiver lines. See the single-shot acquisition schema in Figure 1.

Figure 1: Seismic acquisition. Source: https://www.scisnack.com/2019/10/23/what-seismic-data-tell-us-about-rocks-below-our-feet/

We will illustrate our narrative with examples made on the open Geofizyka Torun survey. It’s a 16 km long vibroseis 2D line with 251 sources, 781 receivers, and 70531 traces. On the left side of Figure 2, you can see a map of all its sources (red stars) and the set of receivers (blue dots) activated for a particular shot (black cross). The picture on the right displays this shot gather.

Figure 2: Geometry of the Geofizyka Torun survey. Red stars denote shots, blue dots denote receivers activated for a particular shot (black cross).

If we index survey traces by common midpoint (CMP) and sort them by offset, we get gathers like the one in Figure 3:

Figure 3: CMP gather. Blue dots denote the start of the useful signal.
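For intuition, here is a minimal sketch of this indexing in code; the coordinates, the 25 m bin size, and all names are illustrative assumptions, not our actual pipeline:

```python
import numpy as np

# Hypothetical inline coordinates (m) of the source and receiver of each trace.
src_x = np.array([0.0, 0.0, 50.0, 50.0, 100.0])
rcv_x = np.array([200.0, 300.0, 250.0, 350.0, 300.0])

midpoint = (src_x + rcv_x) / 2    # common midpoint of each trace
offset = np.abs(rcv_x - src_x)    # source-receiver distance

# Bin midpoints onto a regular CMP grid (the 25 m spacing is an assumption).
cmp_index = np.round(midpoint / 25.0).astype(int)

# Group traces by CMP and sort each gather by offset, as in Figure 3.
for cmp_id in np.unique(cmp_index):
    traces = np.where(cmp_index == cmp_id)[0]
    print(cmp_id, traces[np.argsort(offset[traces])])
```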

Remark: As you may have already guessed, every seismic processing step depends on the field geometry. If the shot and receiver coordinates are grossly miscalculated, we cannot get, for instance, proper CMP gathers. Luckily, large geometry errors are rare and isolated, and we have a quality control method for them, which we’ll discuss later in this post.

To get this survey line stacked, one has to collapse each CMP gather down to a single trace. We cannot sum or average the traces right away, as the horizons aren’t aligned and too much noise is present. That is exactly why we need the seismic processing stage: to perform a sequence of operations that benefit the final stack.

Normal Moveout Correction

We can notice in the CMP gathers that the useful signal starts later at farther offsets, see Figure 3. Under the assumption that reflection traveltimes follow hyperbolic trajectories as a function of offset, we can fit the coefficients of these hyperbolic hodographs and straighten them up on the seismogram so that the offset effect is removed, see Figure 4. These coefficients represent a kind of “mean” velocity from the surface down to the given reflection boundary; they increase with depth and vary between gathers.

Figure 4: From left to right: the original CMP supergather, its NMO-corrected version, and the semblance colormap

This process of flattening the hyperbolic events is called normal moveout correction (NMO) and is essential for seismic processing. So how do we manage the velocities?

For each of the reflectors, we can manually choose the velocity of the signal reflecting from it so that the resulting hyperbola fits the event on the seismogram, see Figure 5.

Figure 5: Demonstration of NMO correction with constant velocity picked for single hodograph (blue)
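Before we get to picking, note that the correction itself is mechanically simple once the velocities are known. Below is a minimal numpy sketch of it; the function and argument names are ours, not from any particular library:

```python
import numpy as np

def nmo_correct(gather, offsets, times, velocities):
    """Apply NMO correction to one CMP gather.

    gather:     (n_times, n_traces) array of amplitudes
    offsets:    (n_traces,) source-receiver offsets, m
    times:      (n_times,) zero-offset times t0, s (increasing)
    velocities: (n_times,) stacking velocity picked for each t0, m/s
    """
    corrected = np.zeros(gather.shape)
    for j, x in enumerate(offsets):
        # Hyperbolic traveltime: t(x) = sqrt(t0^2 + (x / v)^2).
        t_x = np.sqrt(times**2 + (x / velocities) ** 2)
        # Read the amplitude recorded at t(x) and place it at t0.
        corrected[:, j] = np.interp(t_x, times, gather[:, j], left=0.0, right=0.0)
    return corrected
```

Once the gather is corrected with the final velocities, the stacked trace for a CMP is essentially corrected.mean(axis=1).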

As we go deeper, we have to keep adjusting our velocities: velocity generally increases with depth and may still surprise us depending on the earth’s structure. Usually, velocity picking at this stage is done manually, by squinting at several random gathers and hoping the picks extrapolate to the whole survey.

To automate and improve this process, we use the following algorithm:

  • Firstly, we define a time grid covering the full time range of the survey, and for each time we define a grid of possible velocities. We also have to choose a metric that evaluates the coherency of flattened hodographs, such as semblance (see the sketch below).
  • Secondly, for each gather we precompute semblance on the defined grid.
  • Next, for each gather we construct a directed acyclic graph such that:
    1. Each (time, velocity) pair defines a node of the graph. An edge between nodes A and B exists if and only if both the time and the velocity at node B are greater than at A.
    2. The edge weight is the sum of semblance values along the segment connecting the two nodes.

Then a path with the maximal semblance sum between any pair of starting and ending nodes is found with the Dijkstra algorithm, and the velocities along it are taken as the stacking velocities.
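For reference, here is a rough numpy sketch of the semblance panel using one common definition of semblance (squared stacked amplitude over the stacked squared amplitudes, smoothed along time); the window length and all names are illustrative assumptions rather than our production code:

```python
import numpy as np

def semblance_panel(gather, offsets, times, velocities, win=11):
    """Semblance S(t0, v) of a CMP gather over a grid of trial velocities.

    gather: (n_times, n_traces), offsets: (n_traces,) in m,
    times: (n_times,) in s, velocities: (n_vels,) in m/s.
    Returns an (n_times, n_vels) panel like the one in Figure 4 (right).
    """
    n_t, n_tr = gather.shape
    panel = np.zeros((n_t, len(velocities)))
    kernel = np.ones(win) / win  # short smoothing window along time
    for k, v in enumerate(velocities):
        # Flatten the gather with the trial velocity v.
        flat = np.zeros((n_t, n_tr))
        for j in range(n_tr):
            t_x = np.sqrt(times**2 + (offsets[j] / v) ** 2)
            flat[:, j] = np.interp(t_x, times, gather[:, j], left=0.0, right=0.0)
        num = flat.sum(axis=1) ** 2            # (sum of amplitudes)^2
        den = n_tr * (flat**2).sum(axis=1)     # N * (sum of squared amplitudes)
        panel[:, k] = (np.convolve(num, kernel, "same")
                       / (np.convolve(den, kernel, "same") + 1e-12))
    return panel
```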

To reduce the amount of noise, it is also beneficial to aggregate several neighboring CMP gathers into supergathers and run the algorithm on them.

In Figure 4, you can see the original CMP supergather on the left, its NMO-corrected version in the center, and the semblance colormap on the right, with the velocities picked for each time on the grid (yellow).

Now, for each CMP, we can average the NMO-corrected gather across the offset axis into a single trace and see our NMO stack in Figure 6:

Figure 6: NMO stack

Well, that resembles the earth’s underlying structure; however, the horizons seem a bit deformed: we can see abrupt breaks in their continuity in various places. Can we do better?

Static correction

Let’s look closer at the interior of the earth. The first layer, called the weathering layer, is never flat. Surveys might contain rivers, mountains, hills, or who knows what else, so sources and receivers sit at different heights above sea level. Yet we are trying to stack the flattest 3D cube without any mountains, and our traces don’t contain explicit information about the landscape. Moreover, elevation changes are often accompanied by sudden velocity changes. Naturally, we want to shift sources and receivers as if they were located on a flat plane with nearly constant velocity underneath.

Next, we have to understand how the traces should be modified to achieve such a correction. Notice that the deeper the reflector is, the steeper the wave must travel to reflect from it. In general, we are more interested in deep horizons than shallow ones, as the latter are much harder to align on the stack anyway. This means we can use a vertical approximation of the raypaths in the upper layers, which leads us to a static trace correction: we shift each trace by the vertical traveltime from the source and the receiver to the reference datum. This traveltime depends on the velocities and thicknesses of the layers above. See the schema of the static correction in Figure 7:

Figure 7: Static correction schema. Source: https://utheses.univie.ac.at/detail/3353#
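In code, such a shift boils down to summing vertical traveltimes through the near-surface layers. A minimal sketch; the layer numbers below are purely hypothetical:

```python
def datum_delay(thicknesses, velocities):
    """One-way vertical traveltime (s) from the surface down to the reference
    datum through the near-surface layers (thicknesses in m, velocities in m/s,
    both ordered top-down)."""
    return sum(h / v for h, v in zip(thicknesses, velocities))

# Total static for a trace: the delay under the source plus the delay under
# the receiver. Subtracting it from the trace times acts as if both were
# sitting right on the flat datum.
src_shift = datum_delay(thicknesses=[30.0, 120.0], velocities=[800.0, 1800.0])
rcv_shift = datum_delay(thicknesses=[20.0, 110.0], velocities=[750.0, 1850.0])
total_static = src_shift + rcv_shift  # seconds to subtract from this trace
```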

So, how can we calculate these velocities and thicknesses? To do so, one might recall that apart from reflected waves there are refracted ones. They travel faster than reflections and overtake them beyond a certain offset, which means that past this offset the first useful signal always comes from the refractors. In general, all prominent refractors on a seismogram can be counted on the fingers of one hand, and their velocities normally change rather smoothly. That makes refractors great candidates for deriving our trace shifts.

To introduce such a correction for one of the refractors, we need the first-break times, the velocity of that refractor, and, finally, the velocities and thicknesses of the layers above it.

As for the first breaks, that is a great task for a UNet. Check out our other blog post about first break picking.

Also, we have recently switched to a brand new approach utilizing only one model and linear moveout correction. Subscribe in order not to miss the upcoming post about it!

The next step is to get the refractor velocities. With the first-break times in hand, it’s not that hard: unlike in NMO, now we know exactly which hodographs to straighten up, and refractor hodographs are linear as a function of offset. As a result, we fit a piecewise-linear regression with coefficients corresponding to the velocities of the refractors, see Figure 8:

Figure 8: Colormap with first refractor velocity and fitted refractor velocity model for one shot.
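For a single refractor, the fit is ordinary least squares on the linear hodograph t(x) = t0 + x/v. The picks and offsets below are made up for illustration; the full model fits one such line per refractor, with breakpoints at the crossover offsets:

```python
import numpy as np

# Hypothetical first-break picks (s) and offsets (m) for one shot, restricted
# to the offset range where a single refractor gives the first arrival.
offsets = np.array([400.0, 600.0, 800.0, 1000.0, 1200.0])
first_breaks = np.array([0.31, 0.41, 0.51, 0.62, 0.71])

# Linear hodograph t(x) = t0 + x / v: least-squares fit of slope and intercept.
slope, intercept = np.polyfit(offsets, first_breaks, deg=1)
refractor_velocity = 1.0 / slope  # the slope is the reciprocal velocity, m/s
intercept_time = intercept        # t0, used below to estimate thicknesses
print(f"v = {refractor_velocity:.0f} m/s, t0 = {intercept_time:.3f} s")
```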

Now we utilize these velocities to fit a layered model and calculate the layers’ thicknesses. The fitted model contains the thickness and velocity of each layer (see Figure 9), which means we can use it to predict the first breaks (yes, again).

Figure 9: On the left, a colormap of layer 1 thickness, with the black line denoting the profile shown on the right. The profile shows the layers’ boundaries and velocities according to the fitted layered model.
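To give a flavor of the thickness estimation, here is the classic two-layer intercept-time formula in code. Our layered model handles more layers, so treat this as a sketch of the idea, with assumed numbers:

```python
import numpy as np

def first_layer_thickness(intercept_time, v1, v2):
    """Thickness z1 of the top layer via the intercept-time method.

    For a two-layer model, the refractor hodograph t(x) = x / v2 + t_i has
    intercept t_i = 2 * z1 * sqrt(1 / v1**2 - 1 / v2**2); we invert it for z1.
    v1 is the velocity above the refractor, v2 the refractor velocity.
    """
    return intercept_time / (2.0 * np.sqrt(1.0 / v1**2 - 1.0 / v2**2))

# Assumed numbers: intercept from the fit above, a slow weathering layer on top.
z1 = first_layer_thickness(intercept_time=0.11, v1=800.0, v2=2000.0)
print(f"z1 = {z1:.1f} m")  # ~48 m for these values
```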

If we then compare these first breaks with the earlier ones from the UNet, they should be nearly identical. However, for some shots we may encounter a “sinusoidal shift” of one picking relative to the other: it lags at the receivers on one side and leads on the other. An example is shown in Figure 10 below: for traces on the left we observe a lag of the model picking (orange) compared to the UNet one (blue), and vice versa on the right. For this example we used the open Stratton 3D survey.

Figure 10: Colormap with the geometry error metric on the left and, on the right, the gather corresponding to the red dot, with scattered first-break picks.

A pattern like this may indicate that the actual location of the shot differed from the registered one. Thus we can identify geometry errors and recover the true shot locations by exploiting the structure of the error.
Geometry errors are infrequent; nevertheless, their automatic detection is a killer feature, as conventionally geologists find them by scrutinizing every shot gather, which takes weeks of laborious work.
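To sketch the idea (this is an illustration of the principle, not our exact method): if the registered shot position is off by (dx, dy), the first-break residual at a receiver varies roughly sinusoidally with the shot-to-receiver azimuth, so the mislocation can be recovered with linear least squares:

```python
import numpy as np

def estimate_shot_shift(residuals, azimuths, refractor_velocity):
    """Least-squares estimate of the shot mislocation (dx, dy) in meters.

    residuals: model picks minus UNet picks (s), one per receiver;
    azimuths: shot-to-receiver azimuths (rad). If the registered shot is off
    by (dx, dy), the residual behaves roughly like
    (dx * cos(a) + dy * sin(a)) / refractor_velocity (the sinusoidal pattern
    from Figure 10), so (dx, dy) is a linear least-squares problem.
    """
    design = np.column_stack([np.cos(azimuths),
                              np.sin(azimuths)]) / refractor_velocity
    (dx, dy), *_ = np.linalg.lstsq(design, residuals, rcond=None)
    return dx, dy
```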

Coming back to our pipeline, we can finally perform the vertical statics correction, apply NMO, and get our new stack (Figure 11):

Figure 11: Stack with static correction applied.

Follow this link and compare these two stacks with a slider! Or just enjoy the GIF in Figure 12. Can you spot the difference? Write in the comments which one you like the most!

Figure 12: Stacks comparison. Left: NMO stack; right: stack with static correction applied.

Summary

Static correction is essential for seismic processing, and as you can see, it consists of many smaller steps. First-break picking alone would take a geologist several weeks for a large survey. In contrast, our approach takes only a few hours to go through all the steps and produce a stacked cube.

Of course, this is not nearly the end of the seismic processing pipeline. There are also the essential migration, deconvolution, and denoising steps, such as multiple and ground-roll attenuation.

For more information about noise attenuation on seismic data with ML please check out our previous articles here and there.

Most of these steps are still done with nearly retired software, either with a lot of interpolation or with arduous manual effort. Augmenting the geology field with computer science methods has great potential to speed up and enhance the quality of every seismic exploration step, and for now, we are at the very beginning of this route.
