Plan4all (Dmitrii Kozhukh)

Last week the Inspire Kampala Hackathon came to an end. One of the teams at this hackathon worked on extending the Open Land Use database to the African continent. The team's work went in three directions:

1) Research available data sources on land use / land cover in Africa
2) Create a table of the complete administrative division of African countries (this was needed because the Open Land Use database is kept in tables structured by the hierarchy of administrative units, with data inherited from lower to higher levels)
3) Use satellite imagery to derive information about land use / land cover

A complete overview of all three tasks can be found in this video (from 26:30):

However, here we will describe a way to derive information about land cover from satellite imagery.

The whole process was divided into several steps.

Firstly, it was necessary to decide, in general terms, which algorithm would be used to detect different land cover objects in a satellite image.

Because pixel-based classification approaches (approaches that take into account just the color value of each pixel) had already been tried at previous hackathons, this time it was decided to try something else. The choice was to use an image segmentation technique to detect land cover objects and then to use histograms of the pixel values that compose each object to find a land cover class for it.

So this was the general idea.

Moving on to practicalities, a test area was chosen first. The criteria were: an area in Africa with diverse land cover (urban, rural and natural); an area where clear imagery is easy to find, so not in the rainforests where it rains a lot; and, since this is just an experiment, an area that is not too big (roughly 1 000 x 1 000 pixels at most), so that experiments could run without long waits even on an average notebook.

Areas from several African countries were chosen, but in the concrete example presented here we look at a suburban area near Johannesburg, South Africa.

A Sentinel-2 image from 1 December 2019 (it is summer in the southern hemisphere, and the image is cloud-free) was downloaded for this area from the Copernicus Open Access Hub.

It is a false-color representation of the image, where the red channel is the near-infrared band, the green channel the green band, and the blue channel the blue band.
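As a sketch of how such a composite is assembled (assuming the three Sentinel-2 bands have already been loaded as equal-sized numeric matrices; the band loading itself is not shown), the near-infrared, green and blue bands are simply stacked as the R, G and B channels:

```r
# Build a false-color composite: NIR goes into the red channel,
# green into the green channel, blue into the blue channel.
false_color_composite <- function(nir, green, blue) {
  # stretch each band to [0, 1] so the channels are comparable
  stretch <- function(band) (band - min(band)) / (max(band) - min(band))
  array(c(stretch(nir), stretch(green), stretch(blue)),
        dim = c(dim(nir), 3))
}

# toy 2x2 "bands" standing in for real Sentinel-2 data
nir   <- matrix(c(0.9, 0.8, 0.2, 0.1), nrow = 2)
green <- matrix(c(0.3, 0.4, 0.2, 0.1), nrow = 2)
blue  <- matrix(c(0.1, 0.2, 0.3, 0.4), nrow = 2)
img <- false_color_composite(nir, green, blue)
```

In this representation vegetation, which reflects strongly in the near infrared, shows up in red tones, which makes it easier to tell apart visually.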

After this step, the superpixel and affinity propagation algorithms were run to detect segments. The superpixel algorithm clusters the image into segments (superpixels) based on the distance between pixel values in CIELAB color space (lightness axis, green-red axis, blue-yellow axis) and the Euclidean distance between pixel positions. The formula for this combined distance is:

D = sqrt(d_lab^2 + (d_xy / S)^2 * m^2)

where d_lab is the color distance between pixel values, d_xy the Euclidean distance between pixel positions, S the grid step size (the distance between the initial superpixel centroids) and m the so-called compactness parameter, which basically sets the weight of the spatial distance component. In the original version of the algorithm (simple linear iterative clustering, SLIC), the compactness parameter is constant for all clusters and predefined by the user. In some cases, where part of the image is very fragmented and part very smooth, it is difficult to set a constant compactness value; for this reason the SLICO algorithm, which uses an adaptive compactness value for each cluster, was developed. Both of these algorithms (SLIC and SLICO) are implemented in the OpenImageR package for R.
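The combined distance can be written out directly. The function below is only a minimal illustration of the formula (it is not taken from the OpenImageR source); it shows how a larger compactness m makes the spatial term dominate, which produces more compact, grid-like superpixels:

```r
# Combined SLIC distance: CIELAB color distance plus spatial distance,
# with the spatial part normalized by grid step S and weighted by compactness m.
slic_distance <- function(d_lab, d_xy, S, m) {
  sqrt(d_lab^2 + (d_xy / S)^2 * m^2)
}

# Same pixel pair, two compactness settings: with large m the spatial
# component contributes much more to the total distance.
d_loose   <- slic_distance(d_lab = 10, d_xy = 5, S = 10, m = 1)
d_compact <- slic_distance(d_lab = 10, d_xy = 5, S = 10, m = 40)
```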

When the superpixels have been computed, it is then possible to use the affinity propagation algorithm to join similar superpixels into the final segments. The description of the algorithm is provided here: The advantage of affinity propagation is that it finds clusters without the number of clusters being predefined, and it is also easy to implement.
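To make the idea concrete, here is a compact (and deliberately unoptimized) base-R sketch of affinity propagation on a toy 1-D dataset. It is only an illustration of the principle; the package actually used for the map combines affinity propagation with superpixels internally. Points exchange "responsibility" and "availability" messages, the diagonal of the similarity matrix (the "preference") controls how many exemplars emerge, and no cluster count is given up front:

```r
# Minimal affinity propagation (Frey & Dueck style message passing).
# S is an n x n similarity matrix; diag(S) holds the preferences.
affinity_propagation <- function(S, damping = 0.9, iterations = 200) {
  n <- nrow(S)
  R <- matrix(0, n, n)  # responsibilities r(i,k)
  A <- matrix(0, n, n)  # availabilities  a(i,k)
  for (t in seq_len(iterations)) {
    # responsibility: r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
    AS <- A + S
    Rnew <- matrix(0, n, n)
    for (i in seq_len(n))
      for (k in seq_len(n))
        Rnew[i, k] <- S[i, k] - max(AS[i, -k])
    R <- damping * R + (1 - damping) * Rnew
    # availability: a(i,k) = min(0, r(k,k) + sum_{i' != i,k} max(0, r(i',k)))
    Rp <- pmax(R, 0); diag(Rp) <- diag(R)
    Anew <- sweep(-Rp, 2, colSums(Rp), "+")
    dA <- diag(Anew)            # a(k,k) = sum_{i' != k} max(0, r(i',k))
    Anew <- pmin(Anew, 0)
    diag(Anew) <- dA
    A <- damping * A + (1 - damping) * Anew
  }
  which(diag(R + A) > 0)  # indices of the chosen exemplars
}

# Toy data: two well-separated 1-D clusters.
x <- c(0, 0.1, 0.2, 10, 10.1, 10.2)
S <- -outer(x, x, function(a, b) (a - b)^2)  # negative squared distance
diag(S) <- median(S[row(S) != col(S)])       # common choice of preference
exemplars <- affinity_propagation(S)
```

With this preference the algorithm settles on one exemplar per cluster; raising the preference toward zero would split the data into more clusters.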

In the R SuperpixelImageSegmentation package these two algorithms (superpixels and affinity propagation) can be combined, and code like:
library(SuperpixelImageSegmentation)

init = Image_Segmentation$new()
segments = init$spixel_segmentation(input_image = false_color_image,
                                    superpixel = 600,
                                    AP_data = TRUE,
                                    use_median = TRUE,
                                    sim_wL = 3,
                                    sim_wA = 10,
                                    sim_wB = 10,
                                    sim_color_radius = 10,
                                    verbose = TRUE)

will give the final image segments. The superpixel parameter is the initial number of superpixels (which determines the grid size); sim_wL, sim_wA and sim_wB are custom weights for the L, A and B color components.

The result we get can be compared with the initial image:

It can be said that the most visible land cover objects were detected. In the next step, analyzing the histograms of pixel values in each segment should tell us which object corresponds to which land cover type. This will be the topic of the next blog post.
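As a small preview of that step, one simple way to compare a segment against reference land cover classes is histogram intersection: bin the segment's pixel values, bin the values of known reference areas the same way, and assign the class whose histogram overlaps most. The reference classes, value ranges and bins below are made-up placeholders, not values from the actual project:

```r
# Histogram intersection: sum of element-wise minima of two normalized histograms.
hist_intersection <- function(h1, h2) sum(pmin(h1, h2))

# Normalized histogram of pixel values over fixed bins.
pixel_hist <- function(values, breaks) {
  counts <- hist(values, breaks = breaks, plot = FALSE)$counts
  counts / sum(counts)
}

# Assign the class whose reference histogram overlaps the segment's most.
classify_segment <- function(segment_values, reference, breaks) {
  seg <- pixel_hist(segment_values, breaks)
  scores <- sapply(reference,
                   function(ref) hist_intersection(seg, pixel_hist(ref, breaks)))
  names(which.max(scores))
}

set.seed(1)
breaks <- seq(0, 1, by = 0.1)
# made-up reference pixel samples for two hypothetical classes
reference <- list(
  water      = runif(200, 0.0, 0.2),
  vegetation = runif(200, 0.6, 0.9)
)
segment <- runif(50, 0.65, 0.85)  # a segment that should look like vegetation
class_of_segment <- classify_segment(segment, reference, breaks)
```

In practice the histograms would of course be built per band (or per band combination), not from a single value range as in this toy sketch.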

References to read: