How to use a deep network on Sentinel-1 SAR images for change detection

This article is a quick tutorial on implementing a land-cover mapping system for SAR images using deep-learning-based classification.

Application of Sentinel-1 in Disaster Management

Remote sensing makes it possible to measure the impact of human activity on the environment. Thanks to remote sensing, it is possible to determine the composition of soils, to categorize the types of vegetation on a territory, and to map the consequences of human activities such as urbanization. To do this, it is necessary to analyze the images efficiently in order to produce, for example, land-use maps that represent a mapping of homogeneous types of environments (urban areas, agricultural areas, forests, …) over emerged land surfaces.

Radio Detection And Ranging (radar): Sentinel-1

Radar instruments, because of the carrier frequency range of the signals they use, have the advantage of making observations independently of illumination and cloud-cover conditions. Moreover, these instruments open up perspectives for observing more complex phenomena such as oil spills, landslides, etc. But despite the various Earth-observation missions that have accumulated a large amount of important information over time, radar images remain comparatively rare, and those that are available present a real challenge of semantic interpretation. This is due in particular to the structural complexity of these images, which results from their acquisition principles, and to the high cost of access to the data. Access to the data continues to improve thanks to the Copernicus programme.

Synthetic Aperture Radar (SAR)

SARs are electronic devices that can image the electromagnetic reflectivity of objects or environments with high spatial accuracy. To overcome the limitations of conventional radar, the principle of aperture synthesis was introduced. It is based on analyzing the phase variation of the signal reflected by an object as the measurement position changes. Signal-processing techniques make it possible to focus the response of an object in a two-dimensional plane with an increased resolution, whose order of magnitude went from decametric at first to metric and decimetric today. SARs are widely used in remote sensing of natural environments and in the monitoring and mapping of territories and activities.
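
To give an order of magnitude, here is a back-of-the-envelope sketch using textbook approximations (not figures from this article): the azimuth resolution of a real-aperture radar is roughly λR/D for wavelength λ, range R, and antenna length D, while aperture synthesis brings it down to about D/2, independently of range. The slant range below is an assumed, typical low-Earth-orbit value.

wavelength = 0.0555     # C-band wavelength in metres (Sentinel-1 is C-band)
slant_range = 700e3     # assumed slant range in metres (typical LEO distance)
antenna_length = 12.3   # Sentinel-1 antenna length in metres

# Real-aperture azimuth resolution degrades with range...
real_aperture_res = wavelength * slant_range / antenna_length  # ~3.2 km
# ...while the synthetic aperture is limited only by the antenna length.
sar_azimuth_res = antenna_length / 2.0                         # ~6 m
print(real_aperture_res, sar_azimuth_res)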

Change detection with SAR images

Producing a change-detection map from radar images remains a long and arduous task. Although generating such maps from optical images has seen breakthroughs thanks to big data, radar-based systems have not seen the same progress, probably because of the difficulties related to the structural complexity of these images. Above all, this represents an opportunity to develop and experiment with new advanced processing chains, taking advantage of techniques that have already proven themselves, in order to exploit effectively this new large data flow combined with the ever-growing computing power available for image processing.

Statistical tool for design and analysis: SAR-DNNClassifier

Multilayer perceptrons are able to approximate complex nonlinear functions and to process large amounts of data. In our work, the approach is to present a vectorized image, that is to say an image in the form of a vector whose dimension is equal to its number of pixels, at the input of the deep network.
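
As a minimal sketch of this vectorization (illustrative only; the actual pipeline below feeds per-pixel features exported from Earth Engine):

import numpy as np

# Illustration: a 256x256 single-band image becomes a vector whose
# dimension equals the number of pixels, ready for the network input.
image = np.random.randn(256, 256).astype(np.float32)  # stand-in for a SAR band
x = image.reshape(-1)  # flatten: vector of dimension 256 * 256 = 65536
print(x.shape)         # (65536,)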

Adam optimization function. Choosing a good optimization algorithm for a model can make the difference between getting good results in minutes, hours, or days. The Adam optimization algorithm (adaptive moment estimation) is an extension of stochastic gradient descent for deep-learning applications. It iteratively updates the weights of the connections within the network according to the training data and, in addition to keeping an exponentially decaying average of the previous squared gradients, it also keeps an exponentially decaying average of the previous gradients (similar to momentum).
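
For intuition, here is a minimal NumPy sketch of one Adam update step (an illustrative re-implementation, not the article's code; the tutorial itself simply uses tf.train.AdamOptimizer, as shown later):

import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
  # Exponentially decaying average of past gradients (the momentum-like term).
  m = beta1 * m + (1 - beta1) * grad
  # Exponentially decaying average of past squared gradients.
  v = beta2 * v + (1 - beta2) * grad ** 2
  # Bias correction, important during the first iterations (t starts at 1).
  m_hat = m / (1 - beta1 ** t)
  v_hat = v / (1 - beta2 ** t)
  # Per-parameter update, scaled by the gradient history.
  w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
  return w, m, v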

Since our problem is a non-convex optimization problem, applying Adam to it has the following advantages:

  • Simple to implement
  • Computationally efficient
  • Low memory requirements
  • Invariant to diagonal rescaling of the gradients
  • Well suited to problems that are large in terms of data and/or parameters
  • Suitable for non-stationary objectives
  • Suitable for problems with very noisy or very sparse gradients
  • Hyper-parameters have an intuitive interpretation and generally require little tuning

Software and libraries used in the implementation

Cloud processing sequence

Google Earth Engine

The Colaboratory platform

The TensorFlow library

[Google Earth Engine:] How to export SAR train/test data and image data?

Google Earth Engine is a global cloud-based geospatial analytics platform that provides access to Google's massive computational capabilities for studying a variety of high-impact societal issues, including deforestation, drought, natural disasters, food security, water management, climate monitoring, and environmental protection.

First, sign in to the Google Earth Engine platform.

1. Select images from the Copernicus project. To create a homogeneous subset of Sentinel-1 data, it is usually necessary to filter the collection using metadata properties.

var imgVV = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
  .filter(ee.Filter.eq('instrumentMode', 'IW'))
  .map(function(image) {
    // Mask out low-backscatter edge pixels (below -30 dB).
    var edge = image.lt(-30.0);
    var maskedImage = image.mask().and(edge.not());
    return image.updateMask(maskedImage);
  });

2. Orbit properties

var desc = imgVV.filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'));

3. Filter the collection to one period (one year) of data and take the mean

var period = ee.Filter.date('2018-01-01', '2018-12-31');
var descChange = ee.Image.cat(desc.filter(period).mean());

4. To display the result

Map.setCenter(9.69, 4.07, 12); // geographic coordinates of the area and the zoom level
Map.addLayer(descChange, {bands: ['VV', 'VH', 'angle'], min: -25, max: 5}, 'MY SAR Mean Desc', true);

For training, a description of the labels corresponding to the ground truth must be defined, for example:

Label | Description    | Color
0     | vegetation     | green
1     | ground         | yellow
2     | water          | sky blue
3     | vase (mud)     | dark blue
4     | mangrove_swamp | purple
a) [Export region] b) [Label] c) [Export region + Label]
Video showing how to add labels.

5. Merge features

var newfc = vegetation.merge(ground).merge(water).merge(vase).merge(mangrove_swamp);

6. Train/test data

var bands = ['VV', 'VH', 'angle'];
var training = descChange.select(bands).sampleRegions({
  collection: newfc,
  properties: ['landcover'],
  scale: 30
}).randomColumn();
//PARTITION OF TRAINING DATA
// Approximately 70% of our training data
var trainingPartition = training.filter(ee.Filter.lt('random', 0.7));
// Approximately 30% of our training data
var testingPartition = training.filter(ee.Filter.gte('random', 0.7));

7. Export the training and testing data to TFRecord format.

// Add the label to the band list to get the exported feature names.
var outputFeatures = bands.slice();
outputFeatures.push('landcover');
var link = '9a26cef21ab34f6257d0a250882124fc'; // unique ID
var train_desc = 'tf_trainFinal_' + link;
var test_desc = 'tf_testFinal_' + link;
// Choose a) OR b).
// a) Export to Cloud Storage.
Export.table.toCloudStorage({
  collection: trainingPartition,
  description: train_desc,
  bucket: 'mydir-training-temp',
  fileFormat: 'TFRecord',
  selectors: outputFeatures
});
Export.table.toCloudStorage({
  collection: testingPartition,
  description: test_desc,
  bucket: 'mydir-training-temp',
  fileFormat: 'TFRecord',
  selectors: outputFeatures
});
// b) Export to Drive.
Export.table.toDrive({
  collection: trainingPartition,
  description: train_desc,
  fileFormat: 'TFRecord',
  selectors: outputFeatures
});
Export.table.toDrive({
  collection: testingPartition,
  description: test_desc,
  fileFormat: 'TFRecord',
  selectors: outputFeatures
});

8. Evaluation data

var evaluation = descChange.select(bands);
var image_desc = 'tf_image_' + link;
// exportRegion is a geometry you draw or define in the Code Editor.
// Choose a) OR b).
// a) Export to Cloud Storage.
Export.image.toCloudStorage({
  image: evaluation,
  description: image_desc,
  scale: 30,
  fileFormat: 'TFRecord',
  bucket: 'mydir-training-temp',
  region: exportRegion,
  formatOptions: {
    patchDimensions: [256, 256],
    maxFileSize: 104857600,
    compressed: true
  }
});
// b) Export to Drive.
Export.image.toDrive({
  image: evaluation,
  description: image_desc,
  scale: 30,
  fileFormat: 'TFRecord',
  region: exportRegion,
  formatOptions: {
    patchDimensions: [256, 256],
    maxFileSize: 104857600,
    compressed: true
  }
});

[Colaboratory Platform:] How to apply a deep network to our data?

1. Configure the environment

Earth Engine,

#@title Install Earth Engine
!pip install earthengine-api
#@title Authenticate to Earth Engine for connection
import sys
import ee
try:
  ee.Initialize()
  print('The Earth Engine package initialized successfully!')
except ee.EEException:
  print('The Earth Engine package failed to initialize!')
  !earthengine authenticate
except:
  print('Unexpected error:', sys.exc_info()[0])
  raise

Tensorflow,

#@title Import the TensorFlow library
import tensorflow as tf
#@title Import the math library
import math
#@title Test that TensorFlow is working
hello = tf.constant('hello world')
with tf.Session() as sess:
  print(sess.run(hello))

Google Drive,

#@title Install the PyDrive library
# This only needs to be done once per notebook.
!pip install -U PyDrive
#@title Import authentication libraries
from google.colab import auth
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from oauth2client.client import GoogleCredentials
#@title Authenticate for connection
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

2. Inspect the input data

#@title Load training/testing data from Earth Engine exports
# Specify the training file exported from EE.
# If you wish to use your own data, then
# replace the file ID, below, with your own file.
trainFileId = '1Nmmy8yYMoDIv3qah6W1eEZkucoBwhvT-'
trainDownload = drive.CreateFile({'id': trainFileId})
# Create a local file of the specified name.
tfrTrainFile = 'training.tfrecord.gz'
trainDownload.GetContentFile(tfrTrainFile)
print('Successfully downloaded training file?')
print(tf.gfile.Exists(tfrTrainFile))
# Specify the test file.
# If you wish to use your own data, then
# replace the file ID, below, with your own file.
testFileId = '19FuRN6nm_nTU8tbHLdCrSsLShylIklyp'
testDownload = drive.CreateFile({'id': testFileId})
# Creates a local file of the specified name.
tfrTestFile = 'testing.tfrecord.gz'
testDownload.GetContentFile(tfrTestFile)
print('Successfully downloaded testing file?')
print(tf.gfile.Exists(tfrTestFile))
print('Content of the working directory:')
!ls

Inspect the TFRecord dataset

#@title Inspect the TFRecord dataset
driveDataset = tf.data.TFRecordDataset(tfrTrainFile, compression_type='GZIP')
iterator = driveDataset.make_one_shot_iterator()
foo = iterator.get_next()
with tf.Session() as sess:
  print(sess.run([foo]))

3. Define the structure of the training/testing data

#@title Define the structure of the training/testing data
# Names of the features.
bands = ['VV','VH','angle']
label = 'landcover'
featureNames = list(bands)
featureNames.append(label)
print(featureNames)
# Feature columns
columns = [tf.FixedLenFeature(shape=[1], dtype=tf.float32) for k in featureNames]
# Dictionary with names as keys, features as values.
featuresDict = dict(zip(featureNames, columns))
print(featuresDict)

4. Make and test a parsing function

#@title Make and test a parsing function
def parse_tfrecord(example_proto):
  parsed_features = tf.parse_single_example(example_proto, featuresDict)
  labels = parsed_features.pop(label)
  return parsed_features, tf.cast(labels, tf.int32)
# Map the function over the dataset
parsedDataset = driveDataset.map(parse_tfrecord, num_parallel_calls=5)
print(parsedDataset)
iterator = parsedDataset.make_one_shot_iterator()
foo = iterator.get_next()
print(foo)
with tf.Session() as sess:
  print(sess.run([foo]))

5. Feature engineering: make functions to add derived features

#@title Make functions to add derived features
# Each function combines the two polarisation bands into a new feature.
# a) Difference of the two inputs.
def normalizedDifferenceSurfacePolarisation(a, b):
  return a - b
# b) Sum of the two inputs.
def normalizedSumSurfacePolarisation(a, b):
  return a + b
# c) Ratio of the two inputs.
def normalizedDivisionSurfacePolarisation(a, b):
  return a / b
# Add the derived features to the dataset.
def addFeatures(features, label):
  features['x'] = normalizedDifferenceSurfacePolarisation(features['VV'], features['VH'])
  features['y'] = normalizedSumSurfacePolarisation(features['VV'], features['VH'])
  features['z'] = normalizedDivisionSurfacePolarisation(features['VH'], features['VV'])
  return features, label

6. Make an input function for the model

#@title Make an input function
def tfrecord_input_fn(fileName, numEpochs=None, shuffle=True, batchSize=1):
  dataset = tf.data.TFRecordDataset(fileName, compression_type='GZIP')
  # Map the parsing function over the dataset.
  dataset = dataset.map(parse_tfrecord, num_parallel_calls=5)
  # Add the derived features.
  dataset = dataset.map(addFeatures)
  # Shuffle, batch, and repeat.
  if shuffle:
    dataset = dataset.shuffle(buffer_size=batchSize * 10)
  dataset = dataset.batch(batchSize)
  dataset = dataset.repeat(numEpochs)
  # Make a one-shot iterator.
  iterator = dataset.make_one_shot_iterator()
  features, labels = iterator.get_next()
  return features, labels

7. Make and train a classifier

#@title Make and train a classifier
# ['VV','VH','angle','x','y', etc.] specify the features on which
# the classifier should train.
inputColumns = [
  tf.feature_column.numeric_column(k) for k in ['VV', 'VH']
]
# Optimizer function: Adam.
optimizer = tf.train.AdamOptimizer(1e-4)
# Classifier.
# Specify n_classes according to the number of labels.
# model_dir = the directory in which the model will be saved.
# save_summary_steps = the model state is saved every save_summary_steps
# steps for graph visualization with TensorBoard.
# [10, 10] = deep network architecture (two hidden layers of 10 units),
# with the ReLU activation function on the hidden layers.
classifier = tf.estimator.DNNClassifier(
  feature_columns=inputColumns,
  hidden_units=[10, 10],
  n_classes=5,
  model_dir='output_folder',
  optimizer=optimizer,
  activation_fn=tf.nn.relu,
  batch_norm=True,
  config=tf.estimator.RunConfig().replace(save_summary_steps=10))
classifier.train(
  input_fn=lambda: tfrecord_input_fn(fileName=tfrTrainFile, numEpochs=10, batchSize=1),
  steps=10000)
!ls output_folder

TensorBoard for status visualization

#@title TensorBoard for statistics
%load_ext tensorboard.notebook
%tensorboard --logdir output_folder

8. Evaluate the classifier

#@title Evaluate the classifier
accuracy_score = classifier.evaluate(
  input_fn=lambda: tfrecord_input_fn(
    fileName=tfrTestFile, numEpochs=1, batchSize=1, shuffle=False))['accuracy']
print('Accuracy: {}'.format(accuracy_score))

9. Make predictions on the test data

#@title Make predictions on the test data
import itertools
# Do the prediction from the trained classifier.
checkPredictions = classifier.predict(input_fn=lambda: tfrecord_input_fn(fileName=tfrTestFile, numEpochs=1, batchSize=1, shuffle=False))
# Make a couple iterators.
iterator1, iterator2 = itertools.tee(checkPredictions, 2)
# Iterate over the predictions, printing the class_ids and posteriors.
for pred_dict in iterator1:
  class_id = pred_dict['class_ids']
  probability = pred_dict['probabilities']
  print(class_id, probability)

10. Find the exported image and JSON files in Drive

#@title Find the exported image and JSON files in Drive
# tf_image_9a26cef21ab34f6257d0a250882124fc is the name of the
# evaluation data (Earth Engine step 8) that was exported. You will
# find it in your Google Drive (yours will be different).
file_list = drive.ListFile({
  # You have to know this base filename from wherever you did the export.
  'q': 'title contains "tf_image_9a26cef21ab34f6257d0a250882124fc"'}).GetList()
fileNames = []
jsonFile = None
for gDriveFile in file_list:
  title = gDriveFile['title']
  # Download to the notebook server VM.
  gDriveFile.GetContentFile(title)
  # If the filename contains .gz, it's part of the image.
  if title.find('gz') > 0:
    fileNames.append(title)
  if title.find('json') > 0:
    jsonFile = title
# Make sure the files are in the right order.
fileNames.sort()
# Check the list of filenames to ensure there's nothing
# unintentional in there.
print(fileNames)

11. Make an input function for prediction

#@title Make an input function for exported image data
# You have to know the following from your export.
PATCH_WIDTH = 256
PATCH_HEIGHT = 256
PATCH_DIMENSIONS_FLAT = [PATCH_WIDTH * PATCH_HEIGHT, 1]
# Make sure this matches the exported bands.
bands = ['VV', 'VH', 'angle']
# Note that the tensors are in the shape of a patch,
# one patch for each band.
columns = [tf.FixedLenFeature(shape=PATCH_DIMENSIONS_FLAT, dtype=tf.float32) for k in bands]
featuresDict = dict(zip(bands, columns))
# This function adds the derived features to a sample that doesn't have a label.
def addServerFeatures(features):
  return addFeatures(features, None)[0]
# This input function reads in the TFRecord files exported from an image.
# Note that because the pixels are arranged in patches, we need some
# additional code to reshape the tensors.
def predict_input_fn(fileNames):
  # Note that you can make one dataset from many files by specifying a list.
  dataset = tf.data.TFRecordDataset(fileNames, compression_type='GZIP')
  def parse_image(example_proto):
    return tf.parse_single_example(example_proto, featuresDict)
  dataset = dataset.map(parse_image, num_parallel_calls=5)
  # Break our long tensors into many smaller ones.
  dataset = dataset.flat_map(
    lambda features: tf.data.Dataset.from_tensor_slices(features))
  # Add the derived features.
  dataset = dataset.map(addServerFeatures)
  # Read in batches corresponding to the patch size.
  dataset = dataset.batch(PATCH_WIDTH * PATCH_HEIGHT)
  # Make a one-shot iterator.
  iterator = dataset.make_one_shot_iterator()
  return iterator.get_next()

# Do the prediction from the trained classifier.
predictions = classifier.predict(input_fn=lambda: predict_input_fn(fileNames))

12. Define the output names

#@title Define output names
# INSERT YOUR USERNAME HERE:
username = 'username'
baseName = 'gs://folder_name/' + username
outputImageFile = baseName + '_predictions.TFRecord'
outputJsonFile = baseName + '_predictions.json'
print('Writing to: ' + outputImageFile)

13. Apply the model to our evaluation image dataset

#@title Make predictions on the image data and write them to a file
iter1, iter2 = itertools.tee(predictions, 2)
# Iterate over the predictions, printing the class_ids and posteriors.
# This is just to examine the first prediction.
for pred_dict in iter1:
  print(pred_dict)
  break  # only examine the first prediction
# Instantiate the writer.
writer = tf.python_io.TFRecordWriter(outputImageFile)
# For every patch-worth of predictions we dump an example into the output
# file, with features that hold our predictions. Since the predictions
# are already in the order of the exported data, the patches we create
# here will also be in the right order.
# Make sure patch has 1 (class_ids) + number of labels (one probability
# per class) entries; here 1 + 5 = 6.
patch = [[], [], [], [], [], []]
for pred_dict in iter2:
  patch[0].append(pred_dict['class_ids'])
  patch[1].append(pred_dict['probabilities'][0])
  patch[2].append(pred_dict['probabilities'][1])
  patch[3].append(pred_dict['probabilities'][2])
  patch[4].append(pred_dict['probabilities'][3])
  patch[5].append(pred_dict['probabilities'][4])
  # Once we've seen a patch-worth of class_ids...
  if len(patch[0]) == PATCH_WIDTH * PATCH_HEIGHT:
    # Create an example.
    example = tf.train.Example(
      features=tf.train.Features(feature={
        'prediction': tf.train.Feature(
          int64_list=tf.train.Int64List(value=patch[0])),
        'vegProb': tf.train.Feature(
          float_list=tf.train.FloatList(value=patch[1])),
        'groundProb': tf.train.Feature(
          float_list=tf.train.FloatList(value=patch[2])),
        'waterProb': tf.train.Feature(
          float_list=tf.train.FloatList(value=patch[3])),
        'vase_waterProb': tf.train.Feature(
          float_list=tf.train.FloatList(value=patch[4])),
        'mangroveProb': tf.train.Feature(
          float_list=tf.train.FloatList(value=patch[5])),
      })
    )
    # Write the example to the file and clear our patch array
    # so it's ready for another batch of class ids.
    writer.write(example.SerializeToString())
    patch = [[], [], [], [], [], []]
writer.close()

14. Export the result (JSON) to cloud storage

#@title Copy the JSON file to a cloud storage bucket
# Copy the JSON file so it has the same base name as the image.
!gsutil cp {jsonFile} {outputJsonFile}
!gsutil ls gs://folder_name

[Colaboratory-to-Earth Engine] Export the predictions to Earth Engine for visualization

#@title Install the Earth Engine API
!pip install earthengine-api
!earthengine authenticate --quiet
#@title Authentication for Earth Engine
!earthengine authenticate --authorization-code=replace_with_generated_key_here
#@title Get earthengine upload help
!earthengine upload image -h
#@title Upload the classified image to Earth Engine
# Change the filenames to match your personal user folder
# in Earth Engine.
# Replace folder_name with your own.
outputAssetID = 'users/folder_name'
!earthengine upload image --asset_id={outputAssetID} {outputImageFile} {outputJsonFile}

Check the status of the asset ingestion

#@title Check the status of the asset ingestion
import ee
ee.Initialize()
tasks = ee.batch.Task.list()
print(tasks)
Example of an Earth Engine result according to the labels described above

Conclusions

Hope you liked this post. Give it a ❤️ if you did. Hopefully, you can now train your own SAR-DNNClassifier for change detection. Follow me here on Medium @ghomsiDev or on Twitter @ghomsiDev to stay up to date with my work. Enjoy the radar imagery!

Thank you for your time!

:)
