II.3 Complementarity of tools


II - WHAT INSTRUMENTS ARE USED FOR OBSERVATION?

 


3 - COMPLEMENTARITY OF PLATFORMS AND TOOLS

The different platforms used to carry sensors over the target area, such as satellites, aircraft and drones, each have their advantages and disadvantages. They are not competitors, however. On the contrary, their differing characteristics are an asset, because each platform generates specific data products whose combined use offers numerous benefits.

For instance, satellites are ideally suited to observing large areas at regular intervals. Changes detected in this way can then be examined in more detail, for example with hyperspectral sensors on board aircraft or drones. Drones can also be used to view objects from different angles, producing even more detailed images from which 3D models can be derived.

Applications such as the Global Human Settlement Layer and the World Settlement Footprint, which map human settlements on a global scale, combine radar (Sentinel-1) and optical (Sentinel-2) imagery in their methodology.

Besides images from different platforms, we can also combine data from different types of sensors: radar or lidar with optical, high with low resolution, panchromatic with multispectral, and so on.
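As a purely illustrative sketch of such a combination, the Python snippet below stacks co-registered optical and SAR bands into a single multi-sensor feature array that could feed a classifier or clustering algorithm. The file names and the rasterio-based workflow are assumptions for the example, not a prescribed method.

```python
import numpy as np
import rasterio

# Hypothetical, co-registered input files (same grid, extent and resolution):
# an optical multispectral image and a SAR backscatter image of the same area.
with rasterio.open("optical_multispectral.tif") as src:
    optical = src.read()          # shape: (n_optical_bands, rows, cols)
with rasterio.open("sar_backscatter.tif") as src:
    sar = src.read()              # shape: (n_sar_bands, rows, cols)

# Stack all bands into one multi-sensor feature cube.
features = np.concatenate([optical.astype("float32"),
                           sar.astype("float32")], axis=0)

# Reshape to (pixels, features) so each pixel becomes one sample
# for a subsequent classification or clustering step.
samples = features.reshape(features.shape[0], -1).T
print(samples.shape)              # (rows * cols, n_optical_bands + n_sar_bands)
```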

For instance, images from optical sensors are usually very well suited to automatically deriving land cover maps using image classification algorithms. Clouds, however, often spoil the picture because they hide the Earth's surface from the sensor. As a result, many areas cannot be recorded regularly and completely.
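To illustrate, the sketch below trains a simple pixel-based classifier and then masks out cloudy pixels, where no reliable land cover label can be assigned. All arrays are random stand-ins, and the random forest is just one possible choice of classification algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: a (bands, rows, cols) optical image, a boolean cloud mask
# (True = cloudy) and labelled training pixels (spectra + land cover class).
rng = np.random.default_rng(0)
image = rng.random((4, 100, 100)).astype("float32")      # stand-in for real bands
cloud_mask = rng.random((100, 100)) > 0.8                 # stand-in cloud mask
train_spectra = rng.random((200, 4)).astype("float32")    # stand-in training data
train_labels = rng.integers(0, 3, 200)                    # 3 land cover classes

# Train a simple pixel-based classifier on the labelled spectra.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_spectra, train_labels)

# Classify every pixel, then flag cloudy pixels as "no data" (-1):
# the clouds hide the surface, so no land cover label can be assigned there.
pixels = image.reshape(image.shape[0], -1).T
land_cover = clf.predict(pixels).reshape(100, 100)
land_cover[cloud_mask] = -1
```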

Radar (SAR) sensors, on the other hand, emit their own microwave radiation, which passes through clouds largely unaffected. They can therefore acquire images regardless of the weather and of seasonal or daily illumination conditions. This is very useful, for example, for mapping vast areas continuously.
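A minimal sketch of how the two sensor types complement each other: an optical land cover map with cloud gaps is completed with a SAR-based map of the same area. Both maps are random stand-ins, used for illustration only.

```python
import numpy as np

# Hypothetical inputs: an optical land cover map with cloud gaps (-1 = no data)
# and a SAR-based land cover map of the same area, which has no cloud gaps.
rng = np.random.default_rng(1)
optical_map = rng.integers(0, 3, (100, 100))
optical_map[rng.random((100, 100)) > 0.8] = -1      # simulate cloud gaps
sar_map = rng.integers(0, 3, (100, 100))            # stand-in SAR classification

# Keep the optical result where available and fall back to the SAR result
# under the clouds, so the combined map covers the whole area.
combined = np.where(optical_map == -1, sar_map, optical_map)
```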


Raster representation of a digital surface model derived from a lidar point cloud (Kiuic, Mexico)

Lidar data allow us, for example, to create spatially detailed elevation models, either of the bare ground surface (digital terrain model) or of the ground surface including the objects on it, such as trees or houses (digital surface model). Lidar point clouds themselves, however, are difficult to visualise and interpret.
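As an illustration, the sketch below turns a synthetic lidar point cloud into a simple digital surface model by keeping the highest return in each grid cell; a digital terrain model would instead use only the returns classified as ground. The point cloud and cell size are assumptions for the example.

```python
import numpy as np

# Hypothetical input: a lidar point cloud as an (N, 3) array of x, y, z values
# (random stand-ins here; a real cloud would be read from a .las/.laz file).
rng = np.random.default_rng(2)
n = 50_000
points = np.column_stack([rng.uniform(0, 500, n),    # x (m)
                          rng.uniform(0, 500, n),    # y (m)
                          rng.uniform(20, 60, n)])   # z (m)

cell_size = 1.0                                      # 1 m raster cells
cols = ((points[:, 0] - points[:, 0].min()) // cell_size).astype(int)
rows = ((points[:, 1] - points[:, 1].min()) // cell_size).astype(int)

# Digital surface model: keep the highest return per cell (ground plus the
# objects on it). Cells without any lidar return remain NaN.
dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
np.fmax.at(dsm, (rows, cols), points[:, 2])
```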

We can turn them into a grid with a colour scale representing elevation, but it becomes even more spectacular if we "drape" a high-resolution image (e.g. a true colour composite, see True and false colour composites) over them. We can then view the terrain from all sides, as if we were flying around it in a helicopter ourselves.
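A minimal sketch of such draping with matplotlib: the RGB values of a stand-in image are used as the face colours of a 3D surface built from a stand-in elevation model, so the terrain can be rotated and viewed from any angle. Both inputs are synthetic; real data would be a co-registered image and elevation grid.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: a small elevation model and a co-registered RGB image
# (values between 0 and 1), both on the same 50 x 50 grid.
rng = np.random.default_rng(3)
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
elevation = np.sin(3 * x) * np.cos(3 * y) * 10        # stand-in terrain
rgb = rng.random((50, 50, 3))                          # stand-in true colour composite

# "Drape" the image over the terrain: the RGB values colour each facet of the
# 3D surface, which can then be viewed from any direction.
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, elevation, facecolors=rgb,
                rstride=1, cstride=1, linewidth=0, antialiased=False)
ax.set_box_aspect((1, 1, 0.3))                         # limit vertical exaggeration
plt.show()
```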

Pléiades image (left) and terrain visualisation (right), draped over an elevation model derived from lidar data of the Puuc area in Yucatan, Mexico. Remains of Maya structures are clearly visible in the terrain visualisation. See also The lost world of the Mayans revealed by satellites (STEREO project LIMAMAL).

Finally, remote sensing images still need to be combined with so-called in situ data, i.e. observations made in the field. These are needed, on the one hand, to calibrate scientific models or algorithms and, on the other hand, to check the results obtained and adjust them where necessary.
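A minimal sketch of the second use, checking results against field observations: the land cover class predicted from imagery at a set of sample locations is compared with the class observed in situ, yielding a confusion matrix and an overall accuracy. The class values below are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical inputs: classes predicted from satellite imagery at sample
# locations, and the classes observed in the field at those same locations.
predicted = np.array([0, 0, 1, 2, 1, 0, 2, 2, 1, 0])
observed  = np.array([0, 0, 1, 2, 2, 0, 2, 1, 1, 0])

# The confusion matrix shows which classes are mixed up, and the overall
# accuracy summarises how well the map matches the field observations.
print(confusion_matrix(observed, predicted))
print("Overall accuracy:", accuracy_score(observed, predicted))
```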


In situ measurements with field spectrometers for the STEREO project BELAIR.