Data Processing, Interpretation, and Analysis

Remote sensing data acquired from instruments aboard satellites require processing before the data are usable by most researchers and applied science users.

Most raw NASA Earth observation data (Level 0, see data processing levels) are processed at NASA's Science Investigator-led Processing Systems (SIPS) facilities. All data are processed to at least Level 1, and most also have associated Level 2 (derived geophysical variables) and Level 3 (variables mapped on uniform space-time grid scales) products; many have Level 4 products as well. NASA Earth science data are available fully, openly, and without restriction to data users.

Most data are stored in Hierarchical Data Format (HDF) or Network Common Data Form (NetCDF) format. Numerous data tools are available to subset, transform, visualize, and export to various other file formats.
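
As a brief illustration of this kind of workflow, the sketch below uses the open source xarray library to open a granule, subset it to a latitude/longitude bounding box, and export the result. The file name, the variable name ("sst"), and the coordinate names ("lat" and "lon") are placeholders; each data product documents its own.

```python
# Sketch: open a NetCDF granule, subset it spatially, and export.
# The file name, variable name ("sst"), and coordinate names ("lat",
# "lon") are placeholders; real products document their own.
import xarray as xr

ds = xr.open_dataset("granule.nc")  # xarray also reads NetCDF4/HDF5 files
print(ds)                           # inspect available variables and coordinates

# Subset one variable to a latitude/longitude bounding box.
subset = ds["sst"].sel(lat=slice(30, 45), lon=slice(-90, -70))

subset.to_netcdf("subset.nc")                # export as NetCDF
subset.to_dataframe().to_csv("subset.csv")   # or export as CSV
```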

Once data are processed, they can be used in a variety of applications, from agriculture to water resources to health and air quality. A single instrument will not address all research questions within a given application. Users often need to leverage multiple instruments and data products to address their questions, bearing in mind the limitations of data provided by different spectral, spatial, and temporal resolutions.

Creating Satellite Imagery

Many instruments acquire data at different spectral wavelengths. For example, Band 1 of the Operational Land Imager (OLI) aboard Landsat 8 acquires data at 0.433-0.453 micrometers, while Band 1 of the Moderate Resolution Imaging Spectroradiometer (MODIS) acquires data at 0.620-0.670 micrometers. OLI has a total of 9 bands whereas MODIS has 36 bands, all measuring different regions of the electromagnetic spectrum. Bands can be combined to produce imagery that reveals different features in the landscape. Imagery is often used to distinguish the characteristics of a region being studied or to determine an area of study.

True-color images show Earth as it appears to the human eye. For a Landsat 8 OLI true-color (red, green, blue [RGB]) image, sensor Bands 4 (Red), 3 (Green), and 2 (Blue) are combined. Other spectral band combinations can be used for specific science applications, such as flood monitoring, urbanization delineation, and vegetation mapping. For example, a false-color image created from bands M11, I2, and I1 of the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (Suomi NPP) platform is useful for distinguishing burn scars from low vegetation or bare soil, as well as for exposing flooded areas. To see more band combinations from Landsat sensors, check out NASA Scientific Visualization Studio's video Landsat Band Remix or the NASA Earth Observatory article Many Hues of London. For other common band combinations, see NASA Earth Observatory's How to Interpret Common False-Color Images, which provides common band combinations along with insight into interpreting imagery.
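
As a rough sketch of how such a composite is assembled, the following Python snippet stacks Landsat 8 OLI Bands 4, 3, and 2 into a true-color image using the rasterio and NumPy libraries. The file names are hypothetical (Level-1 Landsat scenes ship one GeoTIFF per band), and substituting other bands into the stack yields a false-color composite instead.

```python
# Sketch: build a true-color (RGB) composite from Landsat 8 OLI bands.
# File names are hypothetical; a real scene provides one GeoTIFF per band.
import matplotlib.pyplot as plt
import numpy as np
import rasterio

def read_band(path):
    """Read the first (only) band of a single-band GeoTIFF as float32."""
    with rasterio.open(path) as src:
        return src.read(1).astype(np.float32)

def stretch(band, lo=2, hi=98):
    """Percentile contrast stretch to the 0-1 range for display."""
    pmin, pmax = np.percentile(band, (lo, hi))
    return np.clip((band - pmin) / (pmax - pmin), 0, 1)

# Bands 4, 3, 2 map to red, green, blue; swapping in, say, Bands 7, 5, 4
# would produce a false-color composite that highlights burn scars.
rgb = np.dstack([
    stretch(read_band("LC08_scene_B4.TIF")),
    stretch(read_band("LC08_scene_B3.TIF")),
    stretch(read_band("LC08_scene_B2.TIF")),
])

plt.imshow(rgb)
plt.axis("off")
plt.show()
```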

Image: Fire scars reflect strongly in Landsat's Band 7, which acquires data in the shortwave infrared range. The fire scar is not visible in the left image, a standard true-color image, but stands out clearly in red in the right image, a false-color infrared image.

Image Interpretation

Once data are processed into imagery with varying band combinations, these images can aid in resource management decisions and disaster assessment, but this requires proper interpretation of the imagery. There are a few strategies for getting started (adapted from the NASA Earth Observatory article How to Interpret a Satellite Image: Five Tips and Strategies):

  1. Know the scale. There are different scales based on the spatial resolution of the image and each scale provides different features of importance. For example, when tracking a flood, a detailed, high-resolution view will show individual homes and businesses surrounded by water. The wider landscape view shows the parts of a county or metropolitan area that are flooded and perhaps the source of the water. An even broader view would show the entire region—the flooded river system or the mountain ranges and valleys that control the flow. A hemispheric view would show the movement of weather systems connected to the floods.
  2. Look for patterns, shapes, and textures. Many features are easy to identify based on their pattern or shape. For example, agricultural areas are generally geometric in shape, usually circles or rectangles. Straight lines are typically human-created structures, such as roads or canals.
  3. Define colors. When using color to distinguish features, it's important to know the band combination used in creating the image. True- or natural-color images are created using band combinations that replicate what we would see with our own eyes if looking down from space. Water absorbs light, so it typically appears black or blue in true-color images; sunlight reflecting off the water surface might make it appear gray or silver. Sediment can make water appear more brown, while algae can make it appear more green. Vegetation color varies with the season: in spring and summer it is typically a vivid green; fall may bring orange, yellow, and tan; and winter may have more browns. Bare ground is usually some shade of brown, although this depends on the mineral composition of the sediment. Urban areas are typically gray from the extensive use of concrete. Ice and snow are white in true-color imagery, but so are clouds. When using color to identify objects or features, it's important to also use surrounding features to put things in context.
  4. Consider what you know. Knowledge of the area you are observing aids in identifying features. For example, knowing that an area was recently burned by a wildfire helps explain why vegetation may appear different in a remotely sensed image.

Quantitative Analysis

Different land cover types can be discriminated more readily by using image classification algorithms. Image classification uses the spectral information of individual image pixels. A program using image classification algorithms can automatically group the pixels in what is called an unsupervised classification. The user can also indicate areas of known land cover type to "train" the program to group like pixels; this is called a supervised classification. Maps or imagery can also be integrated into a geographic information system (GIS) and then each pixel can be compared with other GIS data, such as census data. View more information on integrating NASA Earth science data into a GIS.
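
As a minimal sketch of both approaches, the snippet below runs an unsupervised k-means clustering and a supervised random forest classification with scikit-learn. Synthetic pixel values and made-up training labels stand in for real imagery and user-digitized training areas.

```python
# Sketch: unsupervised and supervised classification of image pixels.
# Synthetic data stands in for real imagery and training areas.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
bands = rng.random((100, 100, 4)).astype(np.float32)  # rows x cols x bands
rows, cols, n_bands = bands.shape
pixels = bands.reshape(-1, n_bands)  # one spectral vector per pixel

# Unsupervised: group pixels into five spectral clusters automatically.
cluster_map = KMeans(n_clusters=5, n_init=10).fit_predict(pixels)
cluster_map = cluster_map.reshape(rows, cols)

# Supervised: "train" on pixels of known land cover, then label the scene.
# These 200 randomly chosen pixels and labels are placeholders for
# user-digitized training areas (e.g., water, vegetation, urban).
train_idx = rng.choice(pixels.shape[0], size=200, replace=False)
train_labels = rng.integers(0, 3, size=200)
clf = RandomForestClassifier(n_estimators=100).fit(pixels[train_idx], train_labels)
class_map = clf.predict(pixels).reshape(rows, cols)
```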

Space-based platforms also often carry a variety of instruments measuring biogeophysical parameters, such as sea surface temperature, atmospheric pollutants (for example, nitrogen dioxide), winds, aerosols, and biomass. These parameters can be evaluated through statistical and spectral analysis techniques.
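
As a simple example of such a statistical technique, the sketch below fits a linear trend to a monthly sea surface temperature series with SciPy. The synthetic series is a placeholder for values that would be extracted from a gridded (Level 3) data product.

```python
# Sketch: fit a linear trend to a monthly parameter time series.
# The synthetic series (trend + seasonal cycle + noise) is a placeholder
# for values extracted from a real gridded data product.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
months = np.arange(120)  # ten years of monthly means
sst = (20.0 + 0.002 * months                     # slow warming trend
       + 1.5 * np.sin(2 * np.pi * months / 12)   # seasonal cycle
       + rng.normal(0, 0.3, months.size))        # measurement noise

fit = stats.linregress(months, sst)
print(f"Trend: {fit.slope * 12:.3f} deg C/year (p = {fit.pvalue:.3g})")
```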