The sensors


Data acquisition


Standing up to the radiometer’s intense gaze

A spectro-radiometer is used to analyse the details of an electromagnetic spectrum across all of its frequencies. Other instruments, by contrast, measure the intensity of radiation in just a few frequency ‘windows’.

These instruments normally work by means of a sensitive element, or detector, that modulates the current passing through it according to the electromagnetic energy it receives. Different types of detector are used for different wavelengths. Each instrument is usually equipped with a single detector and thus takes readings over a certain wavelength interval. The result is a graph of the type shown earlier.

Several sensitive elements can be placed side by side to create a matrix of sensors. Each individual sensor acts like a spectro-radiometer, but if the (numerical) readings of the individual sensors are treated as values associated with the pixels of an image, the result is an imaging spectro-radiometer. For example, a spectro-radiometer composed of cells that are sensitive to thermal infrared waves will record higher values in hotter areas. If the pixel coding convention is ‘0 = black, 255 = white’, these hot spots will correspond to the palest areas in the image.
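The pixel coding convention described above can be sketched as follows. This is a minimal illustration, not any particular instrument's scheme: the sensor readings and the scaling range are hypothetical, and the only assumption taken from the text is that higher (hotter) readings map to paler grey levels on the 0-255 scale.

```python
# Map sensor readings onto the '0 = black, 255 = white' convention:
# hotter areas (higher readings) become paler pixels.
# The readings and the min/max range are hypothetical values.

def reading_to_grey(reading, min_val=0.0, max_val=100.0):
    """Linearly map a sensor reading to the 0-255 grey scale."""
    clipped = max(min_val, min(max_val, reading))
    return round(255 * (clipped - min_val) / (max_val - min_val))

# One line of hypothetical thermal readings: the hot spot (90.0)
# yields the palest pixel in the line.
readings = [12.0, 35.5, 90.0, 47.2]
pixels = [reading_to_grey(r) for r in readings]
print(pixels)  # the 90.0 reading gives the highest grey value
```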


or the limits of the human eye

The wavelengths that are visible to the human eye, namely 0.4-0.7 µm, make up a relatively narrow slice of the electromagnetic spectrum. An imaging radiometer operating in this part of the spectrum thus has special properties, since the recorded signal (light intensity) is reproduced in the display system by proportionate light intensities. This is the principle on which television cameras operate.

A colour television camera is actually composed of three imaging radiometers that filter and record the red, green and blue frequencies, respectively. Three primary images are captured and processed individually before being merged by the display system, which works on the principle of red, green and blue additive synthesis.
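The merging step described above can be sketched in a few lines. This is a simplified illustration of red, green and blue additive synthesis, assuming the three primary images have already been captured and are stored as grids of 0-255 intensities; the tiny 2x2 images are hypothetical.

```python
# Three hypothetical single-band primary images (2x2 pixels each),
# as captured by the red, green and blue imaging radiometers.
red   = [[255, 0], [0, 0]]
green = [[0, 255], [0, 0]]
blue  = [[0, 0], [255, 0]]

def merge_rgb(r, g, b):
    """Additive synthesis: reassemble each pixel as an (R, G, B)
    triplet from the three primary images."""
    return [
        [(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
        for i in range(len(r))
    ]

image = merge_rgb(red, green, blue)
print(image[0][0])  # (255, 0, 0): a pure red pixel
```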

There are two types of imaging sensor that operate in the visible light range: cathode-ray-tube (‘vidicon’) video cameras, which sweep the sensitive zone, and newer models, which use arrays of sensitive elements called charge-coupled devices (CCDs).

For a very long time the most effective way to produce high-quality digital images was to record them on photographic film and then digitise the frames with scanners.


In remote sensing

Aerial photography and the birth of remote sensing

The history of aviation started at practically the same time as the history of photography. The two quickly merged to give rise to aerial photography, the ancestor of modern-day remote sensing.

Remote sensing relied on aerial photographs for years; no one spoke of digital images. A plane equipped with special cameras would fly a photographic mission, and the films would be developed and analysed once the flight had ended. It was even possible to enlarge the field of analysis by using special films and filters to include frequency ranges normally invisible to the naked eye. This was the birth of infrared photography. All of the topographic maps produced since 1940 are based on aerial photographs.

From the first manned space flights onwards, scientists tried to use the extraordinary vantage point offered by orbiting satellites to take photographs of the Earth, and the idea of having satellites take pictures of the Earth around the clock quickly took hold. Photography was a completely satisfactory technique, but a practical problem soon cropped up: how to recover the pictures taken in space rapidly. Some of the first spy satellites used in the Cold War took photographs and ejected the exposed rolls of film with parachutes to break their fall to Earth. The ‘parcels’ were normally recovered by a plane before they hit the ground. However, this procedure was far from practical, and a large number of films were never recovered.

Digital images have the considerable advantage that they can be radioed back to Earth as binary information and then reconverted into images on the ground.
Digital sensors also make it possible to analyse parts of the electromagnetic spectrum that are out of bounds for photographic films. By multiplying the number of sensors you can also multiply the number of parts of the spectrum that are analysed, whereas colour photography is limited to analysing three spectral bands.
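The contrast between a three-band colour photograph and a multi-band digital sensor can be made concrete with a small sketch. The band names and values below are hypothetical; the point, taken from the text, is simply that each pixel carries one reading per spectral band and that nothing limits a digital sensor to three bands.

```python
# A hypothetical multispectral pixel: one reading per spectral band.
# Colour photography is limited to three bands (red, green, blue);
# a digital sensor can record as many bands as it has detectors.
bands = ["blue", "green", "red", "near_ir", "thermal_ir"]

pixel = {
    "blue": 61,
    "green": 74,
    "red": 58,
    "near_ir": 180,     # invisible to the naked eye
    "thermal_ir": 122,  # invisible to the naked eye
}

# More bands than any colour photograph can capture:
print(f"{len(bands)} spectral bands recorded for this pixel")
```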

To improve the sensors’ resolution (the number of pixels per image), the engineers who developed the remote sensing satellites’ sensors made use of the fact that the satellites travel at a steady pace along their orbits. Rather than recording a complete image (a square) every few seconds, as one does from a plane when taking aerial photographs, certain satellites’ sensors record only a single line of pixels (usually by means of a linear array of CCD sensors) at right angles to their axis of travel. The system is timed so that, over the few microseconds it takes to process and record this line of pixels, the satellite moves the necessary distance to cover the next line. In this way, the satellite’s sensor sweeps the entire area to be covered and creates the image line by line.
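The line-by-line acquisition just described can be sketched as a tiny simulation. This is an illustrative model only: the ground strip is a hypothetical grid of readings, and the satellite's steady motion is represented simply by iterating over that grid one line per readout.

```python
# A hypothetical ground strip the satellite flies over: each inner
# list is one line of readings at right angles to the axis of travel.
scene = [
    [10, 20, 30],
    [40, 50, 60],
    [70, 80, 90],
]

def acquire_push_broom(ground_strip):
    """Build the image one line at a time: between readouts, the
    satellite advances by exactly the distance of one line."""
    image = []
    for line in ground_strip:     # one readout per step along the orbit
        image.append(list(line))  # record a single line of pixels
    return image

image = acquire_push_broom(scene)
assert image == scene  # the swept lines reassemble the full image
```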