Create a Reality mapping workspace for digital aerial imagery

Available with Standard or Advanced license.

Available for an ArcGIS organization with the ArcGIS Reality license.

Create a Reality mapping workspace to manage and process products using digital aerial imagery.

When you have imagery from a camera system with metadata that provides accurate interior and exterior orientation information, you can use this workflow. The photogrammetric solution for an aerial image is determined by its exterior orientation, which represents the transformation from the ground to the camera, and its interior orientation, which represents the transformation from the camera to the image.
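The interior and exterior orientation combine in the collinearity equations, which project a ground point into image coordinates. The following sketch illustrates the idea with a simplified frame-camera model; the rotation order and sign conventions shown are assumptions (they vary by vendor), and principal-point offset and lens distortion are omitted:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Ground-to-image rotation from Omega, Phi, Kappa (radians).

    Uses the common sequential convention M = R3(kappa) R2(phi) R1(omega);
    rotation order and sign conventions differ between sensor vendors, so
    verify against the sensor documentation before relying on this.
    """
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    # Row-major 3x3 matrix
    return [
        [cp * ck,  so * sp * ck + co * sk,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  co * sp * sk + so * ck],
        [sp,       -so * cp,                co * cp],
    ]

def ground_to_image(ground, eo, focal_mm):
    """Collinearity equations: project a ground point into the image plane.

    ground = (X, Y, Z); eo = (X0, Y0, Z0, omega, phi, kappa), angles in
    radians; focal_mm is the calibrated focal length. Principal-point
    offset and lens distortion are omitted for brevity.
    """
    X, Y, Z = ground
    X0, Y0, Z0, omega, phi, kappa = eo
    m = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = X - X0, Y - Y0, Z - Z0
    u = m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ
    v = m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ
    w = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    return (-focal_mm * u / w, -focal_mm * v / w)
```

For example, with a level camera 1,000 m above a ground point directly at nadir, the point projects to the image center (0, 0); a point 10 m away on the ground lands about 1 mm off center for a 100 mm focal length.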

Note:

Most aerial camera systems provide imaging platform data in the form of longitude, latitude, and flying height (x,y,z) using airborne GPS data, and orientation data in the form of Omega, Phi, and Kappa using an inertial measurement unit (IMU). This data is provided for each image collected by the airborne sensor and stored either in the header of the image or in a separate metadata file.

The Reality Mapping Workspace wizard guides you through the creation of a Reality mapping workspace for digital aerial imagery. Preprocessed or adjusted aerial imagery can be incorporated using supported raster types such as Applanix, SOCET SET, ISAT, and Match-AT. Unadjusted imagery requires the following information to support the workspace creation process:

  • Cameras table—The camera model and resulting interior orientation
  • Frames table—The initial exterior orientation parameters for each image in the project

You can also create these tables using the Reality Mapping Workspace wizard, as described in the workflow below.

Data requirements

The following data is required to create a workspace for digital aerial imagery:

  • Cameras table—Includes measurements of sensor characteristics, such as focal length, size and shape of the imaging plane, pixel size, and lens distortion parameters. In photogrammetry, the measurement of these parameters is called interior orientation (IO), and they are encapsulated in a camera model file. High-precision aerial mapping cameras are analyzed to provide camera calibration information in a report that is used to compute a camera model. Consumer-grade cameras are calibrated by the camera manufacturer or the camera operator, or they can be calibrated during the adjustment process. See the Build Frames And Cameras Tables tool, Frames table schema, and Cameras table schema for more information.
  • Frames table—Describes the position of the sensor at the instant of image capture in coordinates such as latitude, longitude, and height (x,y,z), as well as the attitude of the sensor, expressed as Omega, Phi, and Kappa (pitch, roll, heading). The measurement of these parameters is referred to as exterior orientation (EO) and should be provided with the imagery.
  • DEM—Provides an initial height reference for computing the block adjustment. The global DEM is used by default. For relatively flat terrain, you can specify an average elevation or z-value.
  • Positioning and Orientation System (POS) file (optional)—The POS file contains imaging platform GPS and IMU metadata for each image, including—but not limited to—image name, longitude, latitude, flying height, Omega, Phi, and Kappa parameters. This information can be parsed by the workspace creation process to extract the frames information needed to support image use. Since the POS file does not provide the required camera information, the vendor-supplied camera calibration report is required to support the creation of the camera table.
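As a rough illustration of the parsing the workspace creation process performs, the sketch below reads a comma-delimited POS file whose columns are image name, longitude, latitude, flying height, Omega, Phi, and Kappa. The column order, delimiter, and comment convention here are assumptions; real vendor formats differ, which is why the wizard provides a field-mapping page.

```python
import csv

# Assumed column order for this illustration; adjust to match the
# vendor's actual POS format.
POS_FIELDS = ["image", "longitude", "latitude", "height", "omega", "phi", "kappa"]

def read_pos_file(path):
    """Parse a comma-delimited POS file into per-image frame records."""
    frames = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue  # skip blank lines and comment headers
            rec = dict(zip(POS_FIELDS, row))
            for key in POS_FIELDS[1:]:
                rec[key] = float(rec[key])  # numeric EO values
            frames.append(rec)
    return frames
```

Note that a record like this supplies only the exterior orientation; the interior orientation still has to come from the camera calibration report, as described above.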

Create a Reality mapping workspace

You can create a digital aerial imagery workspace for a project using the workflow wizard.

  1. On the Imagery tab, click New Workspace.
  2. On the Workspace Configuration page, type a name for the workspace.
  3. Ensure that Workspace Type is set to Reality Mapping.
  4. From the Sensor Data Type drop-down list, select Aerial - Digital. The Scenario Type and overlap information are automatically updated by the system.
  5. Set the Scenario Type to Oblique if working with oblique imagery or a combination of oblique and nadir imagery.

    • The most common type is Nadir.
    • To create a DSM, True Ortho, or DSM Mesh, use nadir imagery with Scenario Type set to Nadir.
    • To create a Point Cloud and 3D Mesh, use oblique imagery, or a combination of oblique and nadir imagery, with Scenario Type set to Oblique.

  6. Adjust the overlap percentages if required, or accept the default values.
  7. Optionally, check the Allow adjustment reset check box so you can revert the workspace to a previous state later.
  8. Accept all the other default values and click Next.
  9. On the Image Collection page, select Generic Frame Camera for Sensor Type.
    • If you have Match-AT, ISAT, Applanix, Purview MOD, DVP PAR, or SOCET SET project files managing aerial data, choose the corresponding Sensor Type setting.
  10. To specify the Exterior Orientation File/Esri Frames Table setting, click the Browse button and select the frames table that is associated with the project.

    The frames table provides the parameters used to compute the exterior orientation of the imagery. It is a .csv file generated by the Build Frames And Cameras Tables tool.

    If you input an exterior orientation file that is not an Esri frames table, such as a POS file, the Frames page appears so you can input field mapping information.

  11. The Spatial Reference parameter is automatically set from the spatial reference of the perspective points defined in the Esri frames table. If the Spatial Reference parameter is not set, click the Spatial Reference button and set it to the same coordinate system as that of the perspective points.
  12. Specify the Cameras table file. This is the .csv file that contains the camera configuration information, generated using the Build Frames And Cameras Tables tool.

    If you use the Add button to add a camera, or use the Import button to import a camera file that does not conform to the cameras table schema generated by the Build Frames And Cameras Tables tool, the Add New Camera page appears. On the Calibration tab, enter the camera information, which is typically available from the manufacturer.

    Use the Distortion tab to enter the camera distortion information, if available. This type of information is often provided in the camera calibration report when the mapping camera is calibrated.

    You can use the Export button to store the camera calibration parameters as an Esri cameras table for future use.
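As background on what the Distortion tab captures, the sketch below applies a radial (Brown-model) correction of the kind derived from a camera calibration report. The coefficient names (k1, k2, k3), their units, and the sign convention are assumptions that must be checked against the actual report; tangential (decentering) terms are omitted.

```python
def remove_radial_distortion(x_mm, y_mm, k1, k2, k3):
    """Correct measured image coordinates for radial lens distortion.

    x_mm, y_mm are coordinates relative to the principal point; k1..k3
    are radial coefficients as typically listed in a calibration report.
    Sign and unit conventions differ between vendors, so verify against
    the report before use.
    """
    r2 = x_mm * x_mm + y_mm * y_mm
    # Fractional radial shift: dr/r = k1*r^2 + k2*r^4 + k3*r^6
    scale = k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return (x_mm - x_mm * scale, y_mm - y_mm * scale)
```

With all coefficients zero the coordinates are returned unchanged, which is a quick sanity check when wiring in values from a report.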

  13. When complete, click Next.
  14. Define the output workspace characteristics on the Data Loader Options page.
    1. Choose an Elevation Source value. Creating a Reality mapping workspace from aerial imagery requires elevation data. By default, the wizard provides an elevation service with 90-meter resolution for the DEM parameter; however, this allows only coarse orthorectification. You can browse to a different DEM service or file instead.
      • If you have access to the internet, use the default elevation service for the DEM parameter, and choose Average Elevation from DEM for the Elevation Source option.
      • If you do not have access to the internet, provide a DEM file covering the project area, and choose Average Elevation for the Elevation Source option.
    2. Check the Estimate Statistics check box to estimate the statistics for the output workspace.
    3. Optionally, edit the Band Combination parameters to reorder the band combination from the default order.
    4. Choose a Pre-processing option, Calculate Statistics or Build Pyramids, to perform on the data before you create the workspace.
  15. Click Finish to create the workspace.

Once the Reality mapping workspace is created, the image collection is loaded in the workspace and displayed on the map. You can now perform adjustments and generate ortho products.

Related topics