The fundamental element of a digital image is the pixel, a term derived from the phrase "picture element". When working with digital imagery in GIS and remote sensing, it is important to understand how pixels are created.
Basic format of digital imagery
A digital image is a rectangular array of numbers, and it has both geometric and radiometric properties. Each number in the array represents a pixel. In its basic form, a pixel is a number that describes the brightness and color of a point on the image when it is displayed. This data format is also referred to as a raster or image format. The position of each number in the array defines the image's geometry, while the value of the pixel, and the brightness and color it represents, defines its radiometry. Each pixel captures the interaction of light and the environment at a specific geographic location.
Nature of light
Light behaves as both a wave and a particle. As a particle, light is composed of a stream of photons. These photons originate from the sun and travel down through the atmosphere, where some are absorbed and some are scattered along the atmospheric path as they collide with air molecules, water vapor, and other atmospheric constituents. This flow of photons, both direct from the sun and indirect from skylight, is called downwelling radiation or irradiance. When the photons strike a target object on the surface of the Earth, some are absorbed by the target and some are reflected. The reflected photons travel back up through the atmosphere toward the sensor; along the way, some are transmitted and some are again absorbed or scattered. The percentage that passes through the atmosphere to the sensor is called transmittance.
Light's interaction with the sensor
The photons that reach the sensor are collected by the lens and focused onto a focal plane at the back of the sensor body. The focal plane is a collection of physical cells, such as a CCD array, that are sensitive to photons. These cells are the physical devices that measure the light entering the sensor and produce the numbers that become the pixels of a digital image. They are referred to as wells because they act like buckets that collect photons. The buckets that collect photons have two important properties. The first is the ability to capture photons, called quantum efficiency. If 100 photons hit a bucket and 40 of them are collected, the array has a quantum efficiency of 40 percent. The second is well depth, or the capacity of the bucket. A bucket might have a capacity of 60,000 photons; if more than 60,000 photons arrive, the remainder overflows. Sometimes the excess photons spill over and are lost; sometimes they spill over into adjacent buckets and cause a condition called flare, in which neighboring pixels appear overly bright. Most modern focal planes have mechanisms to mitigate this effect. Because sunlight streams in constantly, the number of photons hitting the physical array is controlled with a device called a shutter. This device opens the lens to collect photons or closes it to block them, and it can be either mechanical or electrical. In either case, it allows the buckets to collect photons for a short period of time, referred to as the integration time or exposure time.
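The interplay of quantum efficiency and well depth described above can be sketched in a few lines of code. This is an illustrative model only, not any real sensor API; the function name and the example values (40 percent efficiency, 60,000-photon capacity) follow the figures in the text.

```python
# Illustrative sketch: how quantum efficiency and well depth determine
# the charge collected by a physical pixel (the "bucket").
def collect_photons(incident_photons, quantum_efficiency, well_depth):
    """Return the number of photons actually stored in the well."""
    captured = int(incident_photons * quantum_efficiency)  # QE limits capture
    return min(captured, well_depth)  # a full well saturates; excess overflows

# 100 photons hit a well with 40% quantum efficiency: 40 are collected.
print(collect_photons(100, 0.40, 60_000))       # 40
# 200,000 incident photons overwhelm a 60,000-photon well: it saturates.
print(collect_photons(200_000, 0.40, 60_000))   # 60000
```

In a real sensor the overflow in the second case would either be drained away or spill into neighboring wells as flare.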
Light as an electromagnetic wave
Visible light is perceived as having color. Color is a property of light's wave behavior: light propagates as an electromagnetic wave. Waves can be described by their periodicity, or frequency, and by their wavelength, and different colors of light have different frequencies and wavelengths. The speed of light, frequency, and wavelength are related by the equation below, where c is the speed of light, f is the frequency, and λ is the wavelength.
c = fλ
For remote sensing and GIS applications, the wavelength of light determines its color. For instance, light with wavelengths of 400-500 nanometers (nm) is blue, 500-600 nm is green, and 600-700 nm is red; together these ranges make up the visible portion of the electromagnetic spectrum. The electromagnetic spectrum is expansive, ranging from high-energy gamma rays to low-energy radio waves. Remote sensing generally uses the visible and microwave portions of the spectrum.
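As a worked example of c = fλ, the frequency of green light at 550 nm can be computed by rearranging the equation. The rounded value of c is an assumption for illustration.

```python
# Worked example of c = f * λ: the frequency of 550 nm (green) light.
c = 3.0e8            # speed of light in m/s (rounded for illustration)
wavelength = 550e-9  # 550 nanometers, expressed in meters
frequency = c / wavelength  # rearranged from c = f * λ
print(f"{frequency:.2e} Hz")  # roughly 5.45e14 Hz
```

The result, on the order of 10^14 Hz, is typical of all visible light.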
Sensors capture the spectrum of light
The buckets, or physical pixels, in a sensor can be made sensitive to different wavelengths, or colors, of light. This is achieved by filtering the light before it is detected by the photosensitive physical pixel: a filter built into the sensor segregates incoming light according to wavelength, separating the colors.
An image is composed of a single band or multiple bands of data. A single-band image captures one wavelength range; if that range spans a wide portion of the visible spectrum, the image is called panchromatic. An image with multiple bands, typically three or more, is called multispectral; one with many bands, such as 100 or more, is called hyperspectral. Multispectral and hyperspectral bands are narrower than panchromatic bands and isolate particular portions of the spectrum. Each band represents a single portion of the spectral range of the light reflected from the target.
In ArcGIS, multispectral imagery is displayed using the RGB Composite renderer, in which each raster band is mapped to one of three color channels: red, green, and blue. For images with more than three raster bands, any three of the bands can be chosen for display; you can substitute any raster band for each of the channels.
Common natural color imagery consists of three bands, in which the blue band is displayed in the blue channel, the green band in the green channel, and the red band in the red channel. Each pixel thus has three values, one for each color, which combine into a composite color.
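The idea of a composite can be sketched as follows. This is a minimal stand-in, not the ArcGIS renderer itself: the band names and the tiny 2×2 arrays are hypothetical, and a real multispectral image would come from a raster library.

```python
# Minimal sketch of an RGB composite: map three raster bands to the
# red, green, and blue display channels. Values are made up.
red_band   = [[120, 130], [125, 140]]
green_band = [[ 90, 100], [ 95, 105]]
blue_band  = [[ 60,  70], [ 65,  75]]

def composite(r, g, b):
    """Stack per-pixel band values into (R, G, B) display triples."""
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
            for i in range(len(r))]

rgb = composite(red_band, green_band, blue_band)
print(rgb[0][0])  # each pixel now carries three values: (120, 90, 60)
```

Swapping a different band into one of the three arguments is exactly the band substitution the renderer allows, for example placing a near-infrared band in the red channel to produce a false-color composite.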
Pixels represent a location on the ground
In addition to its spectral characteristics, a pixel represents a ground location. The figure below illustrates the relationship between the physical pixel in the sensor and the effective area that pixel represents on the ground. This relationship is a function of the sensor's geometry at the precise moment the image was captured. The size of the area a pixel represents on the ground is known as the ground sample distance (GSD). In the sensor, the boundaries between pixels are fixed and discrete. On the ground, however, pixel boundaries are not clearly defined, because the atmosphere and the optics blur and scatter the light; instead, the pixels tend to overlap on the ground. The mathematical function that describes what ends up in a physical pixel in the sensor is called the point spread function. If the point spread function is large, the resulting imagery is blurry; if it is small, the imagery is well defined with sharp edges.
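For the simple case of a frame camera looking straight down, the GSD follows from similar triangles: the physical pixel pitch scaled by the ratio of flying height to focal length. The sketch below uses that simplified relationship; the pixel pitch, focal length, and altitude are assumed values, and a real sensor model would account for terrain and viewing geometry.

```python
# Hedged sketch of ground sample distance (GSD) for a nadir-looking
# frame camera: GSD = pixel pitch * flying height / focal length.
pixel_pitch = 5e-6    # 5-micron physical detector pixel (assumed)
focal_length = 0.10   # 10 cm lens focal length (assumed)
altitude = 2_000.0    # flying height above ground in meters (assumed)

gsd = pixel_pitch * altitude / focal_length
print(f"GSD = {gsd:.2f} m")  # each pixel covers about 0.10 m on the ground
```

Doubling the flying height in this model doubles the GSD, which is why the same camera yields coarser imagery from higher orbits or altitudes.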
Storing pixel values
Since pixels are digital numbers stored in computer memory, their values are separate and discrete. When photons are sensed, they produce an electrical charge, which is an analog, or continuous, signal. When the pixel is read from the CCD array chip, this charge is converted to a discrete number by an analog-to-digital (A/D) converter. The converted values are typically assigned between 8 and 14 bits of information; the limiting factor is the quality of the electronics. That means an image has from 256 to 16,384 possible values. Typically, modern sensors have 12-bit A/D converters, which yield 4,096 possible gray levels.
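The bit depths above map to gray levels as powers of two, and the conversion itself can be sketched as scaling a continuous value onto that integer range. The `quantize` function below is a toy model, not real readout electronics.

```python
# An n-bit A/D converter produces 2**n discrete gray levels.
for bits in (8, 12, 14):
    print(bits, "bits ->", 2 ** bits, "levels")
# 8 -> 256, 12 -> 4096, 14 -> 16384

# Toy analog-to-digital step: map a continuous signal in [0, 1)
# onto 12-bit integers (a sketch, not real readout electronics).
def quantize(signal, bits=12):
    levels = 2 ** bits
    return min(int(signal * levels), levels - 1)  # clamp at full scale

print(quantize(0.5))  # a mid-scale charge maps to gray level 2048
```

The clamp at `levels - 1` mirrors well saturation: any signal at or above full scale reads out as the maximum gray level.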
A physical pixel on the focal plane of a sensor absorbs photons, which become an electrical charge. That charge is converted to a number and placed into an array, or raster format. Because the position and attitude of the sensor at the exact moment of exposure are precisely known, the pixel's precise location on the ground is also known.
For multispectral and hyperspectral images, the pixel values in each of the bands constitute a spectral profile for that location on the ground. Each type of material on the ground that is imaged—such as the type of vegetation, soil, or building material—has a unique spectral profile, also referred to as a spectral signature. There are many techniques to normalize pixel gray levels across an image to provide consistency and facilitate analysis of ground features and materials based on spectral analysis.
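One simple way to use spectral signatures is to compare a pixel's profile against a library of known materials and pick the nearest match. The sketch below uses Euclidean distance for the comparison; the band order, material names, and all values are invented for illustration, and operational classifiers use more sophisticated measures.

```python
# Illustrative sketch: match a pixel's spectral profile to the nearest
# known spectral signature by Euclidean distance. All values are made up.
import math

signatures = {
    "vegetation": [30, 60, 40, 180],  # hypothetical blue, green, red, NIR
    "water":      [50, 40, 25, 10],
    "bare_soil":  [80, 90, 100, 120],
}

def closest_material(pixel, signatures):
    """Return the material whose signature is nearest to the pixel."""
    return min(signatures,
               key=lambda name: math.dist(pixel, signatures[name]))

print(closest_material([35, 55, 45, 170], signatures))  # vegetation
```

The strong hypothetical near-infrared response in the sample pixel is what pulls it toward the vegetation signature, echoing how healthy vegetation reflects strongly in the near infrared.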
The value of a pixel is a measurement of sensed radiation associated with a specific location on the ground. Remote sensing uses that information to analyze a feature or phenomenon at that location. A digital image is more than a pretty picture; it is a radiometric and photogrammetric measurement. The ability to analyze pixels allows remote sensing analysts to derive significant types of geographic information.
For more information, see the following: