For some remote sensing instruments, the distance between the target being imaged and the platform plays a large role in determining the detail of information obtained and the total area imaged by the sensor. Sensors onboard platforms far away from their targets typically view a larger area, but cannot provide great detail. Compare what an astronaut onboard the space shuttle sees of the Earth to what you can see from an airplane. The astronaut might see your whole province or country in one glance, but couldn't distinguish individual houses. Flying over a city or town, you would be able to see individual buildings and cars, but you would be viewing a much smaller area than the astronaut. There is a similar difference between satellite images and airphotos.
The detail discernible in an image is dependent on the spatial resolution of the sensor and refers to the size of the smallest possible feature that can be detected. Spatial resolution of passive sensors (we will look at the special case of active microwave sensors later) depends primarily on their Instantaneous Field of View (IFOV). The IFOV is the angular cone of visibility of the sensor (A) and determines the area on the Earth's surface which is "seen" from a given altitude at one particular moment in time (B). The size of the area viewed is determined by multiplying the IFOV by the distance from the ground to the sensor (C). This area on the ground is called the resolution cell and determines a sensor's maximum spatial resolution. For a homogeneous feature to be detected, its size generally has to be equal to or larger than the resolution cell. If the feature is smaller than this, it may not be detectable, as the average brightness of all features in that resolution cell will be recorded. However, smaller features may sometimes be detectable if their reflectance dominates within a particular resolution cell, allowing sub-pixel or resolution cell detection.
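The relationship described above (resolution cell width = IFOV multiplied by the sensor-to-ground distance) can be sketched as a short calculation. The IFOV and altitude values below are purely illustrative, not the specifications of any particular sensor:

```python
def resolution_cell_width(ifov_mrad: float, altitude_m: float) -> float:
    """Ground width of the resolution cell, using the small-angle
    approximation: width = IFOV (in radians) x altitude."""
    return (ifov_mrad / 1000.0) * altitude_m

# Hypothetical example: a 2.5 milliradian IFOV viewed from 705 km altitude
width = resolution_cell_width(2.5, 705_000)  # 1762.5 m on the ground
```

A feature smaller than this width would normally have its brightness averaged with everything else inside the cell, which is why it may go undetected.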
As we mentioned in Chapter 1, most remote sensing images are composed of a matrix of picture elements, or pixels, which are the smallest units of an image. Image pixels are normally square and represent a certain area on an image. It is important to distinguish between pixel size and spatial resolution - they are not interchangeable. If a sensor has a spatial resolution of 20 metres and an image from that sensor is displayed at full resolution, each pixel represents an area of 20m x 20m on the ground. In this case the pixel size and resolution are the same. However, it is possible to display an image with a pixel size different than the resolution. Many posters of satellite images of the Earth have their pixels averaged to represent larger areas, although the original spatial resolution of the sensor that collected the imagery remains the same.
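The pixel-averaging mentioned above (displaying an image at a coarser pixel size than the sensor's resolution) can be illustrated with a simple block average. The grid of brightness values is invented for the example:

```python
def block_average(grid, factor):
    """Average factor x factor blocks of pixel values so the image can be
    displayed with larger pixels; the sensor's original spatial
    resolution is unchanged by this step."""
    rows = len(grid) // factor
    cols = len(grid[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [grid[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# Hypothetical 4x4 image averaged down to 2x2 display pixels
grid = [[10, 20, 30, 40],
        [10, 20, 30, 40],
        [50, 60, 70, 80],
        [50, 60, 70, 80]]
print(block_average(grid, 2))  # [[15.0, 35.0], [55.0, 75.0]]
```

Each displayed pixel now represents four original resolution cells, which is exactly what happens in a poster-sized print of a satellite scene.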
Images where only large features are visible are said to have coarse or low resolution. In fine or high resolution images, small objects can be detected. Military sensors for example, are designed to view as much detail as possible, and therefore have very fine resolution. Commercial satellites provide imagery with resolutions varying from a few metres to several kilometres. Generally speaking, the finer the resolution, the less total ground area can be seen.
The ratio of distance on an image or map to actual ground distance is referred to as scale. If you had a map with a scale of 1:100,000, an object of 1cm length on the map would actually be an object 100,000cm (1km) long on the ground. Maps or images with small "map-to-ground ratios" are referred to as small scale (e.g. 1:100,000), and those with larger ratios (e.g. 1:5,000) are called large scale.
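The 1:100,000 example above can be checked with a one-line conversion:

```python
def ground_distance_m(map_distance_cm: float, scale_denominator: int) -> float:
    """Convert a measurement on a map at scale 1:D to ground distance.
    Ground distance = map distance x D; divide by 100 to go cm -> m."""
    return map_distance_cm * scale_denominator / 100.0

print(ground_distance_m(1, 100_000))  # 1000.0 m, i.e. 1 km
```

Note that a larger denominator means a smaller ratio, which is why 1:100,000 is the smaller scale of the two examples.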
Did you know?
If the IFOV for all pixels of a scanner stays constant (which is often the case), then the ground area represented by pixels at the nadir will have a larger scale than those pixels which are off-nadir. This means that spatial resolution will vary from the image centre to the swath edge.
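This nadir-to-edge variation can be sketched with the standard geometric approximation for a constant-IFOV scanner, in which the across-track ground cell width grows roughly as 1/cos²(θ) with scan angle θ. The IFOV and altitude values are illustrative only:

```python
import math

def across_track_cell_m(ifov_rad: float, altitude_m: float,
                        scan_angle_deg: float) -> float:
    """Across-track ground cell width for a constant-IFOV scanner.
    At nadir (0 degrees) this reduces to altitude x IFOV; off-nadir the
    cell stretches by a factor of 1/cos^2(scan angle)."""
    theta = math.radians(scan_angle_deg)
    return altitude_m * ifov_rad / math.cos(theta) ** 2

# Hypothetical 1.4 mrad IFOV from 833 km altitude
nadir = across_track_cell_m(1.4e-3, 833_000, 0)   # about 1.17 km
edge = across_track_cell_m(1.4e-3, 833_000, 55)   # roughly 3x larger
```

The growing cell size toward the swath edge is exactly the loss of scale and resolution the note above describes.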
1. Look at the detail apparent in each of these two images. Which of the two images is of a smaller scale? What clues did you use to determine this? Would the imaging platform for the smaller scale image most likely have been a satellite or an aircraft?
The answer is ...
2. If you wanted to monitor the general health of all vegetation cover over the Canadian Prairie provinces for several months, what type of platform and sensor characteristics (spatial, spectral, and temporal resolution) would be best for this and why?
The answer is ...
Whiz quiz - answer
Answer 1: The image on the left is from a satellite while the image on the right is a photograph taken from an aircraft. The area covered in the image on the right is also covered in the image on the left, but this may be difficult to determine because the scales of the two images are very different. We are able to identify relatively small features (i.e. individual buildings) in the image on the right that are not discernible in the image on the left. Only general features such as street patterns, waterways, and bridges can be identified in the left-hand image. Because features appear larger in the image on the right and a particular measurement (e.g. 1 cm) on the image represents a smaller true distance on the ground, this image is at a larger scale. It is an aerial photograph of the Parliament Buildings in Ottawa, Canada. The left-hand image is a satellite image of the city of Ottawa.
Answer 2: A satellite sensor with large area coverage and fairly coarse spatial resolution would be excellent for monitoring the general state of vegetation health over Alberta, Saskatchewan, and Manitoba. The large east-to-west expanse would be best covered by a sensor with a wide swath and broad coverage. This would also imply that the spatial resolution of the sensor would be fairly coarse. However, fine detail would not really be necessary for monitoring a broad class including all vegetation cover. With broad areal coverage the revisit period would be shorter, increasing the opportunity for repeat coverage necessary for monitoring change. The frequent coverage would also allow for areas covered by clouds on one date to be filled in by data collected on another date, reasonably close in time. The sensor would not necessarily require high spectral resolution, but would at a minimum require channels in the visible and near-infrared regions of the spectrum. Vegetation generally has a low reflectance in the visible and a high reflectance in the near-infrared. The contrast in reflectance between these two regions assists in identifying vegetation cover. The magnitude of the reflected infrared energy is also an indication of vegetation health. A sensor on board the U.S. NOAA (National Oceanic and Atmospheric Administration) series of satellites with exactly these types of characteristics is actually used for this type of monitoring over the entire surface of the Earth!
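The visible/near-infrared contrast described in the answer is commonly quantified with a normalized difference index. A minimal sketch, using invented reflectance values for illustration:

```python
def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index: exploits the low visible-red
    and high near-infrared reflectance of healthy vegetation. Values near
    +1 suggest dense, healthy vegetation; values near 0 or below suggest
    bare soil, water, or stressed cover."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances for healthy vegetation: red 0.05, NIR 0.50
print(round(ndvi(0.05, 0.50), 2))  # 0.82
```

An index like this, computed from every resolution cell across a wide swath, is the kind of product used for broad-area vegetation monitoring.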