What is pan sharpening?
Pan sharpening fuses a higher-resolution panchromatic image (or raster band) with a lower-resolution multiband raster dataset. The result is a multiband raster dataset with the resolution of the panchromatic raster, where the two rasters fully overlap.
Pan sharpening is a radiometric transformation available through a raster function or from a geoprocessing tool. Several image companies provide low-resolution, multiband images and higher-resolution, panchromatic images of the same scenes. This process is used to increase the spatial resolution and provide a better visualization of a multiband image using the high-resolution, single-band image.
Pan sharpening methods
There are five image fusion methods from which to choose to create the pan sharpened image: the Brovey transformation; the Esri pan sharpening transformation; the Gram-Schmidt spectral sharpening method; the intensity, hue, saturation (IHS) transformation; and the simple mean transformation. Each of these methods uses a different model to improve the spatial resolution while maintaining the color, and some can be adjusted with weights so that a fourth band can be included (such as the near-infrared band available in many multispectral image sources). Adding the weights and enabling the infrared component improves the visual quality of the output colors.
The Brovey transformation is based on spectral modeling and was developed to increase the visual contrast in the high and low ends of the data's histogram. It uses a method that multiplies each resampled, multispectral pixel by the ratio of the corresponding panchromatic pixel intensity to the sum of all the multispectral intensities. It assumes that the spectral range spanned by the panchromatic image is the same as that covered by the multispectral channels.
In the Brovey transformation, the general equation uses red, green, and blue (RGB) and the panchromatic bands as inputs to output new red, green, and blue bands, for example:
Red_out = Red_in * [Pan / (Red_in + Green_in + Blue_in)]
However, by using weights and the near-infrared band (when available), the adjusted equation for each band becomes
DNF = (P - IW * I) / (RW * R + GW * G + BW * B)
Red_out = R * DNF
Green_out = G * DNF
Blue_out = B * DNF
Infrared_out = I * DNF
where the inputs are
P = panchromatic image
R = red band
G = green band
B = blue band
I = near-infrared band
W = weight (IW, RW, GW, and BW are the weights for the corresponding bands)
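The adjusted Brovey equations above can be sketched in NumPy as follows. This is a minimal illustration, not the product implementation: it assumes float bands already resampled to the panchromatic resolution, and the function name and sample values are hypothetical.

```python
import numpy as np

def brovey(pan, r, g, b, i, rw=1.0, gw=1.0, bw=1.0, iw=1.0):
    """Weighted Brovey transformation on co-registered float arrays."""
    # DNF = (P - IW * I) / (RW * R + GW * G + BW * B)
    dnf = (pan - iw * i) / (rw * r + gw * g + bw * b)
    return r * dnf, g * dnf, b * dnf, i * dnf

# Illustrative single-pixel values in the 0..1 range
pan = np.array([[0.8]])
r, g, b, i = (np.array([[v]]) for v in (0.4, 0.3, 0.2, 0.1))
r_out, g_out, b_out, i_out = brovey(pan, r, g, b, i, iw=0.5)
```

Because every band is multiplied by the same DNF ratio, the band ratios (and hence the color) are preserved while the overall brightness follows the panchromatic band.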
The Esri pan sharpening transformation uses a weighted average and the additional near-infrared band (optional) to create its pan sharpened output bands. The result of the weighted average is used to create an adjustment value (ADJ) that is then used in calculating the output values, for example:
WA = weighted average of the multispectral bands
ADJ = Pan - WA
Red_out = R + ADJ
Green_out = G + ADJ
Blue_out = B + ADJ
Near_Infrared_out = I + ADJ
The weights for the multispectral bands depend on the overlap of the spectral sensitivity curves of the multispectral bands with the panchromatic band. The weights are relative and will be normalized when they are used. The multispectral band with the largest overlap with the panchromatic band should get the largest weight. A multispectral band that does not overlap at all with the panchromatic band should get a weight of 0. By changing the near-infrared weight value, the green output can be made more or less vibrant.
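A minimal sketch of this additive adjustment, assuming WA is the normalized weighted average of the multispectral bands as described above (the function name and sample values are illustrative):

```python
import numpy as np

def esri_sharpen(pan, r, g, b, i, rw, gw, bw, iw):
    """Esri-style pan sharpening sketch: add the pan/WA difference to each band."""
    total = rw + gw + bw + iw
    # Normalized weighted average of the multispectral bands
    wa = (rw * r + gw * g + bw * b + iw * i) / total
    adj = pan - wa
    return r + adj, g + adj, b + adj, i + adj

pan = np.array([[0.8]])
r, g, b, i = (np.array([[v]]) for v in (0.4, 0.3, 0.2, 0.1))
r_out, g_out, b_out, i_out = esri_sharpen(pan, r, g, b, i, 1.0, 1.0, 1.0, 0.5)
```

Because the same ADJ is added to every band, the differences between bands are unchanged; only the overall brightness shifts toward the panchromatic band.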
The Gram-Schmidt pan sharpening method is based on a general algorithm for vector orthogonalization—the Gram-Schmidt orthogonalization. This algorithm takes in vectors (for example, three vectors in 3D space) that are not orthogonal, and then rotates them so that they are orthogonal afterward. In the case of images, each band (panchromatic, red, green, blue, and infrared) corresponds to one high-dimensional vector (number of dimensions equals number of pixels).
In the Gram-Schmidt pan sharpening method, the first step is to create a low-resolution pan band by computing a weighted average of the multispectral bands. Next, these bands are decorrelated using the Gram-Schmidt orthogonalization algorithm, treating each band as one multidimensional vector. The simulated low-resolution pan band is used as the first vector, which is not rotated or transformed. The low-resolution pan band is then replaced by the high-resolution pan band, and all bands are back-transformed in high resolution.
Some suggested weights for common sensors are as follows (red, green, blue, and infrared, respectively):
- GeoEye—0.6, 0.85, 0.75, 0.3
- IKONOS—0.85, 0.65, 0.35, 0.9
- QuickBird—0.85, 0.7, 0.35, 1.0
- WorldView-2—0.95, 0.7, 0.5, 1.0
The details for this technique are described in the following patent:
Laben, Craig A., and Bernard V. Brower. Process for Enhancing the Spatial Resolution of Multispectral Imagery using Pan-Sharpening. US Patent 6,011,875, filed April 29, 1998, and issued January 4, 2000.
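The steps described above (simulate a low-resolution pan band, orthogonalize, swap in the real pan band, back-transform) can be sketched as follows. This is an illustrative reading of the algorithm, not the patented implementation: the mean subtraction, the standard-deviation matching of the pan band, and the weight values are assumptions.

```python
import numpy as np

def gram_schmidt_sharpen(pan, bands, weights):
    """Sketch of Gram-Schmidt pan sharpening on co-registered float arrays."""
    # Step 1: simulate a low-resolution pan band as a weighted average.
    sim_pan = sum(w * b for w, b in zip(weights, bands)) / sum(weights)
    # Treat each band as one high-dimensional vector; the simulated pan is first.
    vecs = [sim_pan.ravel()] + [b.ravel() for b in bands]
    means = [v.mean() for v in vecs]

    # Step 2: Gram-Schmidt orthogonalization, keeping the projection
    # coefficients so the transform can be inverted later.
    gs, coeffs = [], []
    for v, m in zip(vecs, means):
        u = v - m
        c = []
        for g in gs:
            phi = (u @ g) / (g @ g)  # projection onto an earlier component
            u = u - phi * g
            c.append(phi)
        gs.append(u)
        coeffs.append(c)

    # Step 3: replace the first component with the real pan band
    # (mean-centered and matched in standard deviation).
    p = pan.ravel() - pan.mean()
    gs[0] = p * (gs[0].std() / p.std())

    # Step 4: back-transform each band from the modified components.
    out = []
    for k in range(1, len(vecs)):
        v = gs[k].copy()
        for phi, g in zip(coeffs[k], gs):
            v = v + phi * g
        out.append((v + means[k]).reshape(bands[k - 1].shape))
    return out

# Self-check: if the real pan band equals the simulated one, the transform
# round-trips and the multispectral bands come back unchanged.
rng = np.random.default_rng(0)
ms = [rng.random((8, 8)) for _ in range(4)]   # R, G, B, NIR (synthetic)
w = [0.85, 0.7, 0.35, 1.0]                    # illustrative weights
sim = sum(wi * bi for wi, bi in zip(w, ms)) / sum(w)
sharpened = gram_schmidt_sharpen(sim, ms, w)
```

The round-trip check reflects the idea that only the first component is altered; the higher the resolution of the real pan band relative to the simulated one, the more spatial detail is injected into the back-transformed bands.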
The IHS pan sharpening method converts the multispectral image from RGB to intensity, hue, and saturation. The low-resolution intensity is replaced with the high-resolution panchromatic image. If the multispectral image contains an infrared band, it is taken into account by subtracting it using a weighting factor. The equation used to derive the altered intensity value is as follows:
Intensity = P - I * IW
Then the image is back-transformed from IHS to RGB in the higher resolution.
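As an illustration only, the intensity replacement can be sketched with the simple intensity = (R + G + B) / 3 model: scaling each band by the ratio of the new intensity to the old one preserves hue and saturation under that model. The function name, the sample values, and the choice of intensity model are assumptions.

```python
import numpy as np

def ihs_sharpen(pan, r, g, b, nir=None, iw=0.0):
    """IHS-style sharpening sketch using mean intensity."""
    old_i = (r + g + b) / 3.0
    # Intensity = P - I * IW when a near-infrared band is available
    new_i = pan - iw * nir if nir is not None else pan
    ratio = new_i / old_i  # rescale preserves hue and saturation
    return r * ratio, g * ratio, b * ratio

pan = np.array([[0.8]])
r, g, b, nir = (np.array([[v]]) for v in (0.4, 0.3, 0.2, 0.1))
r_out, g_out, b_out = ihs_sharpen(pan, r, g, b, nir=nir, iw=0.5)
```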
The simple mean transformation method applies a simple mean averaging equation to each of the output band combinations, for example:
Red_out = 0.5 * (Red_in + Pan_in)
Green_out = 0.5 * (Green_in + Pan_in)
Blue_out = 0.5 * (Blue_in + Pan_in)
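The simple mean equations translate directly to NumPy; the sketch below assumes co-registered float bands at the pan resolution, with illustrative values.

```python
import numpy as np

def simple_mean(pan, r, g, b):
    # Each output band is the average of the input band and the pan band.
    return 0.5 * (r + pan), 0.5 * (g + pan), 0.5 * (b + pan)

pan = np.array([[0.8]])
r, g, b = (np.array([[v]]) for v in (0.4, 0.3, 0.2))
r_out, g_out, b_out = simple_mean(pan, r, g, b)
```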