How to despeckle a SAR image based on Total Variation?

Make a cleaner SAR image based on TV regularization.

Jian Kang
Feb 26, 2023

Due to the coherent processing of Synthetic Aperture Radar (SAR) images, they are often characterized by strong fluctuations in pixel amplitude, which leads to the speckle effect. Within one SAR pixel, there are often plenty of elementary scatterers, as illustrated below:

Within one SAR pixel, the transmitted electromagnetic waves interact with many elementary scatterers, and the reflected waves are the coherent sum of them.

This scattering phenomenon makes the phases of these small scatterers interfere with one another destructively or constructively, which leads to the speckle effect. Here is an example SAR image:

An acquired SAR image (Pudong Airport, Shanghai, China; Sentinel-1)

It can be observed from the image that the speckle effect makes the amplitudes fluctuate strongly. How can we mitigate this effect and make the image look cleaner? Below, I introduce an optimization-based method to achieve this.

Total variation denoising

One of the most commonly exploited denoising methods for natural images is based on total variation (TV) regularization¹. It regularizes the observed image with the prior knowledge of piecewise smoothness. A frequently used anisotropic version of TV is defined as:

Anisotropic TV norm

where A denotes a matrix, and the anisotropic TV norm sums the absolute differences between every two successive rows and columns.
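In formula form, a sketch of this anisotropic TV norm under the convention just described:

\|A\|_{\mathrm{TV}} = \sum_{i,j} \left( |A_{i+1,j} - A_{i,j}| + |A_{i,j+1} - A_{i,j}| \right)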

SAR pixel statistics and optimization model for despeckling

The above TV norm serves as a prior term that constrains the underlying properties of the image we want to recover. Besides it, a likelihood term is also needed, which characterizes how likely the observed image is given the underlying clean image. Based on these two terms, we can formulate a Maximum A Posteriori (MAP) estimation and transform it into a minimization problem to achieve speckle reduction:

TV regularized image restoration model

where -log p(·) denotes the negative log-likelihood, G is the observed image, Z is the underlying clean image we want to recover, and \lambda is the penalty parameter that balances the two terms.
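As a sketch in the notation just introduced, the model can be written as:

\hat{Z} = \arg\min_{Z} \; -\log p(G \mid Z) + \lambda \, \|Z\|_{\mathrm{TV}}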

Next, we have to determine the log-likelihood term. According to Goodman’s model², the observed intensities of SAR images follow a gamma distribution given by:

Gamma distribution of SAR intensities

where I denotes the intensity value in one pixel, R is the underlying reflectivity to be recovered, L is the number of looks, and $\Gamma(\cdot)$ represents the Gamma function. Based on this model, the speckle noise n is multiplicative, and I can be decomposed as follows, along with the expectation and variance of the noise:

Multiplicative noise model for SAR intensities
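For reference, a sketch of Goodman's gamma model and the resulting multiplicative decomposition, written with the symbols defined above:

p(I \mid R) = \frac{L^{L} I^{L-1}}{\Gamma(L) \, R^{L}} \exp\!\left(-\frac{L I}{R}\right), \qquad I = R \, n, \quad \mathbb{E}[n] = 1, \quad \mathrm{Var}[n] = \frac{1}{L}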

With such a multiplicative noise model, it is hard to directly restore the clean SAR image. One common approach is to transform it into an additive model by taking logarithms³. Letting g=\log(I) and z=\log(R), we obtain:

SAR intensity statistics in the logarithm domain
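A sketch of these log-domain statistics (standard results for gamma-distributed speckle; \psi and \psi_1 denote the digamma and trigamma functions, which is also where log_int_std in the code below comes from):

g = z + \log n, \qquad \mathbb{E}[\log n] = \psi(L) - \log L, \qquad \mathrm{Var}[\log n] = \psi_1(L)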

Without loss of generality, we let L=1, which corresponds to a single-look complex (SLC) SAR image. The negative log-likelihood term in the logarithm domain then becomes:

The log-likelihood term for SAR image in the logarithm domain

where C is a constant value. Putting the two terms together, we arrive at the following optimization problem for mitigating the speckle noise:

The optimization model for despeckling SAR images through TV regularization in the logarithm domain
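As a sketch, for L=1 the pixel-wise negative log-likelihood reduces to z_i + I_i e^{-z_i} up to the constant C, so the despeckling model can be written as:

\hat{Z} = \arg\min_{Z} \; \sum_{i} \left( z_i + I_i e^{-z_i} \right) + \lambda \, \|Z\|_{\mathrm{TV}}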

Variable splitting and alternating direction method of multipliers (ADMM)

It is not easy to solve the above unconstrained optimization problem directly. One common way is to transform it into a constrained optimization problem and then apply ADMM⁴. First, we introduce the difference operator D_2 and two auxiliary variables, X and Y⁵.
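A sketch of this splitting (consistent with the implementation below), where D_2 stacks the vertical and horizontal finite differences so that \|Z\|_{\mathrm{TV}} = \|D_2 Z\|_1:

X = Z, \qquad Y = D_2 X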

Then, we can get the following constrained optimization problem:
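(a sketch in my notation, with the data term written for L=1 as above)

\min_{Z, X, Y} \; \sum_{i} \left( z_i + I_i e^{-z_i} \right) + \lambda \|Y\|_1 \quad \text{s.t.} \quad X = Z, \;\; Y = D_2 X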

The associated augmented Lagrangian is:
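(a sketch in the scaled form, consistent with the multiplier updates used in the implementation below)

\mathcal{L}_{\mu}(Z, X, Y, T_1, T_2) = \sum_{i} \left( z_i + I_i e^{-z_i} \right) + \lambda \|Y\|_1 + \frac{\mu}{2} \left\| Z - X + \frac{T_1}{\mu} \right\|_2^2 + \frac{\mu}{2} \left\| D_2 X - Y + \frac{T_2}{\mu} \right\|_2^2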

where T_1 and T_2 are the introduced Lagrangian multipliers and \mu is the associated penalty parameter. Using ADMM, we solve each subproblem in turn, starting with the subproblem of Z.
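A sketch of this subproblem (with the remaining variables held at their current iterates):

Z^{k+1} = \arg\min_{Z} \; \sum_{i} \left( z_i + I_i e^{-z_i} \right) + \frac{\mu}{2} \left\| Z - X^{k} + \frac{T_1^{k}}{\mu} \right\|_2^2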

It can be solved by using Newton’s method:
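(a sketch of the per-pixel Newton update, matching prox_fisher_tippett in the implementation below; written here for general L)

z \leftarrow z - \frac{\mu \left( z - x + t_1/\mu \right) + L \left( 1 - I e^{-z} \right)}{\mu + L \, I e^{-z}}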

For the subproblem of X:
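(a sketch, with the other variables fixed at their current iterates)

X^{k+1} = \arg\min_{X} \; \frac{\mu}{2} \left\| Z^{k+1} - X + \frac{T_1^{k}}{\mu} \right\|_2^2 + \frac{\mu}{2} \left\| D_2 X - Y^{k} + \frac{T_2^{k}}{\mu} \right\|_2^2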

The minimization can be achieved by setting its gradient to 0, and we obtain the following linear system:
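(a sketch, consistent with the right-hand side H_x formed in the code below)

\mu X + \mu D_2^{\top} D_2 X = \mu Z^{k+1} + T_1^{k} + D_2^{\top} \left( \mu Y^{k} - T_2^{k} \right)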

Since D_2 is a circulant (Toeplitz-structured) matrix under periodic boundary conditions⁶, it can be diagonalized in the Fourier domain, and we obtain⁵:

Thus, the solution can easily be obtained in the Fourier domain:
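(a sketch, where \mathcal{F} denotes the 2D FFT and d_h, d_v are the horizontal and vertical difference kernels; the denominator is T_z in the code below)

X^{k+1} = \mathcal{F}^{-1}\!\left( \frac{\mathcal{F}\!\left( \mu Z^{k+1} + T_1^{k} + D_2^{\top}(\mu Y^{k} - T_2^{k}) \right)}{\mu + \mu \left( |\mathcal{F}(d_h)|^2 + |\mathcal{F}(d_v)|^2 \right)} \right)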

For the subproblem of Y:

It can be solved by element-wise soft-thresholding⁷:
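(a sketch of the subproblem and its closed-form solution)

Y^{k+1} = \arg\min_{Y} \; \lambda \|Y\|_1 + \frac{\mu}{2} \left\| D_2 X^{k+1} - Y + \frac{T_2^{k}}{\mu} \right\|_2^2 = \mathrm{soft}\!\left( D_2 X^{k+1} + \frac{T_2^{k}}{\mu}, \; \frac{\lambda}{\mu} \right)

with \mathrm{soft}(a, \tau) = \mathrm{sign}(a) \max(|a| - \tau, 0) applied element-wise.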

Finally, we update the Lagrangian multipliers by:
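(a sketch, matching the updates in the implementation below)

T_1^{k+1} = T_1^{k} + \mu \left( Z^{k+1} - X^{k+1} \right), \qquad T_2^{k+1} = T_2^{k} + \mu \left( D_2 X^{k+1} - Y^{k+1} \right)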

Based on the solutions of these subproblems, we iterate until the algorithm converges, and the final solution is transformed back to the original domain through the exponential function.

Method implementation

Below is the Python implementation:

import numpy as np

from scipy.special import polygamma
from scipy.linalg import norm
from scipy.ndimage import convolve1d

def log_int_std(L):
    """standard deviation of the log-intensity speckle (square root of the trigamma function of L)"""
    return np.sqrt(polygamma(1, L))

def prox_fisher_tippett(x, z, t, mu, L, I, K=6):
    """
    function for solving the Z subproblem (Fisher-Tippett data term in the log domain)
    """
    for k in range(K):
        ediff = I * np.exp(-x)
        # Newton step: gradient / Hessian of L*(x + I*exp(-x)) + (mu/2)*(x - z + t/mu)^2
        x = x - (mu * (x - z + t / mu) + L * (1 - ediff)) / abs(mu + L * ediff)
    return x

def forward(arr, ax):
    """forward difference along axis ax (periodic boundary)"""
    return convolve1d(arr, np.array([1, -1]), axis=ax, mode='wrap')

def backward(arr, ax):
    """backward difference along axis ax (periodic boundary)"""
    return convolve1d(arr, np.array([-1, 1]), axis=ax, origin=-1, mode='wrap')

def divergence(arr, axes=None):
    """divergence of a stack of difference images (used as the transpose of the forward-difference operator)"""
    if axes is None:
        axes = range(arr.ndim)
    return sum(backward(db, ax) for db, ax in zip(arr, axes))

# https://github.com/aboucaud/pypher/blob/master/pypher/pypher.py
def psf2otf(psf, shape):
    """
    Convert point-spread function to optical transfer function.

    Compute the Fast Fourier Transform (FFT) of the point-spread
    function (PSF) array and creates the optical transfer function (OTF)
    array that is not influenced by the PSF off-centering.
    By default, the OTF array is the same size as the PSF array.
    To ensure that the OTF is not altered due to PSF off-centering, PSF2OTF
    post-pads the PSF array (down or to the right) with zeros to match
    dimensions specified in OUTSIZE, then circularly shifts the values of
    the PSF array up (or to the left) until the central pixel reaches (1,1)
    position.

    Parameters
    ----------
    psf : `numpy.ndarray`
        PSF array
    shape : tuple of int
        Output shape of the OTF array

    Returns
    -------
    otf : `numpy.ndarray`
        OTF array

    Notes
    -----
    Adapted from MATLAB psf2otf function
    """
    if np.all(psf == 0):
        return np.zeros_like(psf)

    inshape = psf.shape
    # Pad the PSF to outsize
    psf = np.pad(psf,
                 tuple(zip((0,) * psf.ndim, tuple(W - w for w, W in zip(psf.shape, shape)))),
                 constant_values=0)

    # Circularly shift OTF so that the 'center' of the PSF is
    # [0,0] element of the array
    for axis, axis_size in enumerate(inshape):
        psf = np.roll(psf, -int(axis_size / 2), axis=axis)

    # Compute the OTF
    otf = np.fft.fftn(psf)

    # Estimate the rough number of operations involved in the FFT
    # and discard the PSF imaginary part if within roundoff error
    # roundoff error = machine epsilon = sys.float_info.epsilon
    # or np.finfo().eps
    n_ops = np.sum(psf.size * np.log2(psf.shape))
    otf = np.real_if_close(otf, tol=n_ops)

    return otf

def soft_thresh(arr, thresh):
    """element-wise soft thresholding"""
    shrink = np.abs(arr) - thresh
    shrink[shrink < 0] = 0
    return np.sign(arr) * shrink

def TV_ADMM(I, L=1, alpha=1, max_iter=30, opt_tol=1e-4):
    """
    TV regularization for SAR image despeckling (ADMM in the log domain)

    Parameters
    ----------
    I: SAR intensity image
    L: number of looks
    alpha: TV penalty parameter (lambda in the text)
    """
    eps = np.finfo(float).eps        # small constant to avoid division by zero
    sigma = log_int_std(L)
    mu = 1 / (sigma**2)              # ADMM penalty parameter, set from the log-speckle variance

    g = np.log(I)
    z = g.copy()

    x = np.zeros_like(z)
    t1 = np.zeros_like(z)
    y = np.zeros((g.ndim, *g.shape)).astype(g.dtype)
    t2 = np.zeros((g.ndim, *g.shape)).astype(g.dtype)

    def diffT2(arr_t):
        # transpose of the finite-difference operator D_2
        return divergence(arr_t)

    def diff2(arr_t):
        # finite-difference operator D_2: stack of forward differences along each axis
        diff_arr = [forward(arr_t, ax) for ax in range(arr_t.ndim)]
        return np.stack(diff_arr)

    # eigenvalues of D_2^T D_2 in the Fourier domain (squared magnitudes of the difference-kernel OTFs)
    diff_k = np.array([+1, -1])
    diff_x = diff_k.reshape((len(diff_k), 1))
    diff_y = diff_k.reshape((1, len(diff_k)))
    Eny_x = (np.abs(psf2otf(diff_x, g.shape))) ** 2
    Eny_y = (np.abs(psf2otf(diff_y, g.shape))) ** 2
    T_z = Eny_x + Eny_y

    for _ in range(max_iter):

        ### z subproblem (data fidelity, solved with a few Newton steps)
        z = prox_fisher_tippett(z, x, t1, mu, L, I)

        ### x subproblem (quadratic, solved in the Fourier domain)
        xp = x.copy()
        H_x = mu * z + mu * diffT2(y) - diffT2(t2) + t1
        x = np.real(np.fft.ifftn(np.fft.fftn(H_x) / (mu + mu * T_z)))

        ### y subproblem (element-wise soft thresholding)
        diff2_x = diff2(x)
        y = soft_thresh(diff2_x + t2 / mu, alpha / mu)

        ### update the Lagrangian multipliers
        t1 = t1 + mu * (z - x)
        t2 = t2 + mu * (diff2_x - y)

        ### check the stopping criterion
        pres = norm(x - g) / (norm(g) + eps)
        xres = norm(x - xp) / (norm(xp) + eps)

        if max(pres, xres) < opt_tol:
            break

    return np.exp((x + z) / 2)
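As a quick usage sketch (the file names and the choice of alpha here are hypothetical, not from the article; it assumes the functions above are defined in the same script):

# hypothetical usage sketch of TV_ADMM
import numpy as np

I = np.load("sar_intensity.npy")        # hypothetical file holding a SAR intensity image
I = np.maximum(I, 1e-10)                # guard against log(0)
R_hat = TV_ADMM(I, L=1, alpha=0.3)      # alpha corresponds to lambda in the text
np.save("sar_despeckled.npy", R_hat)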

Despeckled results

(Left) Noisy SAR image⁸ (Mount Fuji, Japan, Sentinel-1); (Middle) despeckled image with \lambda=0.3; (Right) despeckled image with \lambda=0.5

From the above results, the piecewise smooth property can be observed in the despeckled images. As the penalty parameter \lambda increases, more emphasis is placed on the TV term, which leads to a smoother result but may also lose some textures.

References

[1] Rudin, Leonid I., Stanley Osher, and Emad Fatemi. “Nonlinear total variation based noise removal algorithms.” Physica D: nonlinear phenomena 60.1–4 (1992): 259–268.
[2] Goodman, Joseph W. “Some fundamental properties of speckle.” JOSA 66.11 (1976): 1145–1150.
[3] Bioucas-Dias, José M., and Mário AT Figueiredo. “Multiplicative noise removal using variable splitting and constrained optimization.” IEEE Transactions on Image Processing 19.7 (2010): 1720–1730.
[4] Boyd, Stephen, et al. “Distributed optimization and statistical learning via the alternating direction method of multipliers.” Foundations and Trends® in Machine learning 3.1 (2011): 1–122.
[5] https://nikopj.github.io/blog/julia_tvd/
[6] https://math.mit.edu/icg/resources/teaching/18.085-spring2015/toeplitz.pdf
[7] https://sparse-plex.readthedocs.io/en/latest/book/opt/soft_thresholding.html
[8] Baier, Gerald, Wei He, and Naoto Yokoya. “Robust nonlocal low-rank SAR time series despeckling considering speckle correlation by total variation regularization.” IEEE Transactions on Geoscience and Remote Sensing 58.11 (2020): 7942–7954.

Jian Kang

Associate Professor at Soochow University, interested in signal/image processing for remote sensing. Homepage: https://jiankang1991.github.io/