Adjustment options for ortho mapping aerial imagery

Available with Advanced license.

The parameters used in computing the block adjustment are defined in the Adjust window. The available adjustment options depend on the type of workspace you defined when you set up your ortho mapping project. For example, frame triangulation is performed for aerial images.

The block adjustment parameters for digital aerial imagery are described below. These parameters are used when computing tie points or ground control points (GCPs) and when computing block adjustment.

For information about adjustment options for drone or scanned imagery, see Adjustment options for ortho mapping drone imagery or Adjustment options for ortho mapping scanned imagery, respectively.

Perform Camera Calibration

Automatic camera calibration computes and refines the camera's geometric parameters, including interior orientation and lens distortion, while determining image orientation and image ground coordinates. If the camera has not been calibrated, select the options below to calibrate it during the block adjustment and improve the overall quality and accuracy of the bundle block adjustment. Most high-quality digital cameras have already been calibrated, in which case these options should remain unchecked; this is the default.

  • Focal Length—Refines the focal length of the camera lens
  • Principal Point—Refines the principal point of autocollimation
  • K1,K2,K3—Refines the radial distortion coefficients
  • P1,P2—Refines the tangential distortion coefficients
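
The K and P coefficients correspond to the standard Brown lens distortion model. As an illustrative sketch (not the ArcGIS implementation), applying radial (K1, K2, K3) and tangential (P1, P2) distortion to normalized image coordinates relative to the principal point looks like this:

```python
def apply_distortion(x, y, k1, k2, k3, p1, p2):
    """Apply Brown radial (K1..K3) and tangential (P1, P2) distortion
    to normalized image coordinates (x, y) measured from the principal
    point. Illustrative sketch using an OpenCV-style convention."""
    r2 = x * x + y * y                                # squared radial distance
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)   # tangential term in x
    dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y   # tangential term in y
    return x * radial + dx, y * radial + dy

# With all coefficients zero, coordinates are returned unchanged.
```

Refining these coefficients in the adjustment amounts to estimating the values that best remove this distortion from the observed image measurements.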

For more information on calibration options, see Cameras table schema.

Note:
Higher in-strip and cross-strip aerial image overlap is recommended for better block adjustment and product generation results.

Camera calibration is performed during block adjustment to improve the camera parameter accuracy. For camera calibration, the image collection must have an in-strip overlap of 60 percent or more, and a cross-strip overlap of 30 percent or more.

Advanced Options

The Advanced Options section provides additional settings that can be used to optimize the adjustment process. A description of each setting is given below.

Prior Accuracy Settings

The prior accuracy settings allow you to specify the accuracy of orientation data from a Position and Orientation System (POS), such as Applanix. Different POS devices provide different accuracy information; for example, one may output only a combined positional accuracy, while another provides separate accuracies in X, Y, and Z. Enter only the accuracy information that is available. The default for each setting is null.

Digital airborne platforms measure exterior orientation using a Position Orientation System (POS). You can enter the measured accuracy of these parameters to improve the quality of the adjustment.

Image location accuracy (meters):

  • X—The accuracy of the x-coordinate provided by the airborne POS. The units must match PerspectiveX.
  • Y—The accuracy of the y-coordinate provided by the airborne POS. The units must match PerspectiveY.
  • Z—The accuracy of the z-coordinate provided by the airborne POS. The units must match PerspectiveZ.
  • Planar—The accuracy of the x,y coordinates provided by the airborne POS. The units must match PerspectiveX and PerspectiveY.
  • Position—The accuracy of the x,y,z coordinates provided by the airborne POS. The units must match PerspectiveX, PerspectiveY, and PerspectiveZ.

Attitude accuracy (decimal degrees):

  • Omega—The accuracy of the Omega angle provided by the airborne POS.
  • Phi—The accuracy of the Phi angle provided by the airborne POS.
  • Kappa—The accuracy of the Kappa angle provided by the airborne POS.
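
In a bundle adjustment, prior accuracies such as these are typically converted into observation weights proportional to 1/σ², so that less accurate POS observations constrain the solution less. A minimal sketch of that weighting, assuming a simple diagonal weight model (illustrative only; a null accuracy leaves the observation unconstrained):

```python
def pos_weights(sigmas):
    """Convert prior standard deviations (e.g. X, Y, Z in meters and
    Omega, Phi, Kappa in decimal degrees) into diagonal weights 1/sigma^2.
    A sigma of None (accuracy not available) yields weight 0, meaning the
    observation does not constrain the adjustment. Illustrative sketch."""
    return {name: (0.0 if s is None else 1.0 / (s * s))
            for name, s in sigmas.items()}

weights = pos_weights({"X": 0.1, "Y": 0.1, "Z": 0.2, "Omega": None})
# X and Y (sigma 0.1 m) receive a weight of about 100; Z (sigma 0.2 m)
# about 25; Omega is left unconstrained.
```

This is why entering realistic accuracies matters: an overly optimistic sigma gives the POS observations too much influence over the solution.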

GNSS Setting

Global Navigation Satellite System (GNSS) settings provide options to estimate, as part of the bundle adjustment, the offset between the GPS antenna and the camera, as well as a global shift in the GPS signal. GNSS settings require that GCPs be included in the adjustment.

For some airborne acquisitions, the GPS antenna is located separately from the camera system. The following options allow you to correct offsets in position measurements due to the physical offset between the camera and GPS antenna:

  • Compute Antenna Offset—Corrects errors in sensor position by computing the physical offset between a camera and airborne GPS antenna.
  • Compute Shift—Corrects for instrumental drift in the GPS signal.
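
Conceptually, the antenna offset correction is a lever-arm reduction: the camera perspective center is the antenna position minus the offset vector rotated from the camera body frame into the mapping frame. A sketch assuming an omega-phi-kappa rotation convention (illustrative only; in the software the offset is estimated during the adjustment rather than supplied directly):

```python
import math

def rotation_opk(omega, phi, kappa):
    """3x3 rotation matrix R = R_x(omega) @ R_y(phi) @ R_z(kappa),
    one common photogrammetric convention; angles in radians."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck,                -cp * sk,                sp],
        [co * sk + so * sp * ck,  co * ck - so * sp * sk, -so * cp],
        [so * sk - co * sp * ck,  so * ck + co * sp * sk,  co * cp],
    ]

def camera_center(antenna_xyz, lever_arm_body, omega, phi, kappa):
    """Reduce an antenna position to the camera perspective center by
    subtracting the lever arm rotated into the mapping frame."""
    R = rotation_opk(omega, phi, kappa)
    return [antenna_xyz[i] - sum(R[i][j] * lever_arm_body[j] for j in range(3))
            for i in range(3)]

# With zero rotation and the antenna mounted 1 m above the camera along
# the body z-axis, the perspective center is 1 m below the antenna.
```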

Compute posterior standard deviation for images and solution points

The following options compute the standard deviation of each image's exterior orientation parameters and of the solution point coordinates.

  • Compute Posterior Standard Deviation for Images—Computes the posterior standard deviation of each image's location and orientation after adjustment. The computed standard deviation values are stored in the Solution table.
  • Compute Posterior Standard Deviation for Solution Points—Computes the posterior standard deviation of the solution point coordinates after adjustment. The computed standard deviation values are stored in the Solution Points table.
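
For reference, in least-squares theory the posterior standard deviation of an unknown is the estimated reference standard deviation scaled by the square root of the corresponding diagonal element of the cofactor (inverse normal) matrix. A minimal sketch of that relationship (illustrative only, not the software's internal code):

```python
import math

def posterior_std(sigma0_hat, cofactor_diag):
    """Posterior standard deviation of each unknown:
    sigma_i = sigma0_hat * sqrt(q_ii), where q_ii is the i-th diagonal
    element of the cofactor matrix from the adjustment."""
    return [sigma0_hat * math.sqrt(q) for q in cofactor_diag]
```

Smaller values indicate unknowns (image orientations or solution point coordinates) that are more strongly determined by the adjustment.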

Reproject Tie Points

An optional part of the adjustment process computes and displays each tie point at its correct 2D map location, which supports visual analysis of tie points in the 2D map view. To reproject tie points after an adjustment, use the Reproject Tie Points option in the Manage Tie Points drop-down menu.

Note:

When working with large projects of more than 1,000 images, this step can be skipped to reduce adjustment processing time, without any adverse impact on adjustment quality.

Process a Rig Camera

A rig camera is a standard aerial sensor composed of multiple cameras. When this option is checked, the multiple-camera system is processed as one rigid body during the adjustment. Check this option if the images being processed were collected with this type of aerial sensor.

Tie Point Matching

Tie points represent common objects or locations in the overlap areas between adjacent images and are used to improve geometric accuracy in the block adjustment. The Tie Point Matching category in the Adjust tool includes options that support the automatic computation of tie points from overlapping images. Check the Full Frame Pairwise Matching check box when one or more of the following conditions apply:

  • Topography imaged is highly variable, for example, hilly terrain with large variations in height.
  • Forward and lateral overlap percentages between images are lower than the recommended value.
  • The accuracy of the initial imagery orientation parameters and projection center coordinates are low.
  • Images have high oblique angles.

Full Frame Pairwise Matching

This option uses the scale-invariant feature transform (SIFT) algorithm to improve correlation accuracy when processing imagery with high variability in scale or overlap, or with low-quality initial orientation parameters.

Image Location Accuracy

The inherent positional accuracy of the imagery depends on the sensor viewing geometry, type of sensor, and level of processing. Positional accuracy is typically described as part of the imagery deliverable. Choose the keyword that best describes the accuracy of the imagery.

The values consist of three settings that are used in the tie point calculation algorithm to determine the number of images in the neighborhood to use. For example, when the accuracy is set to High, the algorithm uses a smaller neighborhood to identify matching features in the overlapping images.

  • Low—Images have poor location accuracy and large errors in sensor orientation (rotation of more than 5 degrees). The scale-invariant feature transform (SIFT) algorithm is used, with a large pixel search range to support the point matching computation.
  • Medium—Images have moderate location accuracy and small errors in sensor orientation (rotation of less than 5 degrees). The Harris algorithm is used with a search range of approximately 800 pixels to support the point matching computation. This is the default setting.
  • High—Images have high location accuracy and small errors in sensor orientation. This setting is suitable for satellite imagery and for aerial imagery provided with exterior orientation data. The Harris algorithm is used with a small search range to support the point matching computation.

Tie Point Similarity

Choose the tolerance level for matching tie points between image pairs.

  • Low—The similarity tolerance for matching image pairs is low. This setting produces the most matching pairs, but some matches may have a higher level of error.
  • Medium—The similarity tolerance for matching pairs is medium. This is the default setting.
  • High—The similarity tolerance for matching pairs is high. This setting produces the fewest matching pairs, but each matching pair has a lower level of error.

Tie Point Density

Choose the relative number of tie points to be computed between image pairs.

  • Low—The fewest tie points are produced.
  • Medium—An intermediate number of tie points is produced. This is the default setting.
  • High—A high number of tie points is produced.

Tie Point Distribution

Choose whether the output points have a regular or random distribution.

  • Random—Points are generated in a random pattern, which is better suited to overlap areas with irregular shapes. This is the default setting.
  • Regular—Points are generated based on a fixed pattern.
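
The difference between the two distributions can be pictured with a small sketch that generates candidate tie-point locations inside a rectangular overlap (illustrative only; `candidate_points` is a hypothetical helper, not part of the software):

```python
import random

def candidate_points(xmin, ymin, xmax, ymax, n, distribution="Random", seed=0):
    """Generate about n candidate tie-point locations in a rectangular
    overlap. 'Regular' lays points on a fixed grid; 'Random' scatters
    them, which adapts better to irregularly shaped overlap areas."""
    if distribution == "Regular":
        cols = max(1, round(n ** 0.5))       # roughly square grid
        rows = max(1, -(-n // cols))         # ceiling division
        return [(xmin + (i + 0.5) * (xmax - xmin) / cols,
                 ymin + (j + 0.5) * (ymax - ymin) / rows)
                for j in range(rows) for i in range(cols)][:n]
    rng = random.Random(seed)                # seeded for reproducibility
    return [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
            for _ in range(n)]
```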

Mask Polygon Features

Use a polygon feature class to exclude areas you do not want used when computing tie points.

In the attribute table of the feature class, the mask field controls the inclusion or exclusion of areas for computing tie points. A value of 1 indicates that the areas defined by the polygons (inside) are excluded from the computation. A value of 2 indicates that the areas defined by the polygons (inside) are included in the computation and areas outside of the polygons are excluded.
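
The mask-field rule can be summarized as: a value of 1 means exclude the polygon interiors, and a value of 2 means use only the polygon interiors. A sketch of that decision, where the point-in-polygon test itself is assumed to be done elsewhere by the software (illustrative only):

```python
def point_allowed(inside_polygon, mask_value):
    """Decide whether a candidate tie-point location may be used.
    inside_polygon: True if the location falls inside a mask polygon.
    mask_value 1: polygon interiors are excluded from the computation.
    mask_value 2: only polygon interiors are included."""
    if mask_value == 1:
        return not inside_polygon
    if mask_value == 2:
        return inside_polygon
    raise ValueError("mask field value must be 1 or 2")
```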
