Available with Standard or Advanced license.
Available for an ArcGIS organization with the ArcGIS Reality license.
Create a Reality mapping workspace to manage and process products using digital aerial imagery.
When you have imagery from a camera system with metadata that provides accurate interior and exterior orientation information, you can use this workflow. The photogrammetric solution for an aerial image is computed from its exterior orientation, which represents the transformation from the ground to the camera, and its interior orientation, which represents the transformation from the camera to the image.
Note:
Most aerial camera systems provide imaging platform data in the form of longitude, latitude, and flying height (x,y,z) using airborne GPS data, and orientation data in the form of Omega, Phi, and Kappa using an inertial measurement unit (IMU). This data is provided for each image collected by the airborne sensor and stored either in the header of the image or in a separate metadata file.
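For readers who want to see how these angles relate to the ground-to-image transformation described above, the following Python sketch builds a rotation matrix from Omega, Phi, and Kappa using one common photogrammetric convention (sequential rotations about the x, y, and z axes). Conventions differ between sensors and software, so treat this as a conceptual illustration only, not the exact formulation used during adjustment.

```python
import numpy as np

def rotation_from_opk(omega_deg, phi_deg, kappa_deg):
    """Build a 3x3 rotation matrix from Omega, Phi, Kappa (degrees).

    Uses the common convention R = Rx(omega) @ Ry(phi) @ Rz(kappa);
    other axis orders and sign conventions exist, so this is illustrative only.
    """
    o, p, k = np.radians([omega_deg, phi_deg, kappa_deg])

    rx = np.array([[1, 0, 0],
                   [0, np.cos(o), -np.sin(o)],
                   [0, np.sin(o),  np.cos(o)]])
    ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k),  np.cos(k), 0],
                   [0, 0, 1]])
    return rx @ ry @ rz

# Example: a near-nadir frame with a small heading rotation
print(rotation_from_opk(0.5, -0.3, 92.0))
```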
The Reality Mapping Workspace wizard guides you through the creation of a Reality mapping workspace for digital aerial imagery. Preprocessed or adjusted aerial imagery can be incorporated using supported raster types such as Applanix, SOCET SET, ISAT, and Match-AT. Unadjusted imagery requires the following information to support the workspace creation process:
- Cameras table—The camera model and resulting interior orientation
- Frames table—The initial exterior orientation parameters for each image in the project
Data requirements
The following data is required to create a workspace for digital aerial imagery:
- Cameras table—Includes measurements of sensor characteristics, such as focal length, size and shape of the imaging plane, pixel size, and lens distortion parameters. In photogrammetry, the measurement of these parameters is called interior orientation (IO), and they are encapsulated in a camera model file. High-precision aerial mapping cameras are analyzed to provide camera calibration information in a report that is used to compute a camera model. Consumer-grade cameras are calibrated by the camera manufacturer or by the camera operator, or they can be calibrated during the adjustment process. See Use the Build Frames & Cameras Tables tool, Frames table schema, and Cameras table schema for more information.
- Frames table—Describes the position of the sensor at the instant of image capture in coordinates such as latitude, longitude, and height (x,y,z), as well as the attitude of the sensor, expressed as Omega, Phi, and Kappa (roll, pitch, heading). The measurement of these parameters is referred to as exterior orientation (EO) and should be provided with the imagery.
- DEM—Provides an initial height reference for computing the block adjustment. The global digital elevation model (DEM) is used by default. For relatively flat terrain, you can specify an average elevation or z-value.
- Positioning and Orientation System (POS) file (optional)—The POS file contains imaging platform GPS and IMU metadata for each image, including, but not limited to, image name, longitude, latitude, flying height, Omega, Phi, and Kappa. The workspace creation process can parse this information to extract the frames information needed to use the imagery. Because the POS file does not provide the required camera information, the vendor-supplied camera calibration report is also needed to create the cameras table.
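To make the relationship between a POS file and the frames information concrete, the following Python sketch converts a simple comma-delimited POS file into a frames-style CSV. The input layout, file names, and output column names are assumptions for illustration only; the exact fields the workspace requires are documented in the Frames table schema topic.

```python
import csv

pos_path = "flight_01_pos.csv"        # hypothetical POS file: name, lon, lat, height, omega, phi, kappa
frames_path = "flight_01_frames.csv"  # hypothetical output frames-style table

with open(pos_path, newline="") as src, open(frames_path, "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    # Illustrative column names; see the Frames table schema for the required fields.
    writer.writerow(["Raster", "PerspectiveX", "PerspectiveY", "PerspectiveZ",
                     "Omega", "Phi", "Kappa"])
    for row in reader:
        # Assumes one image per line with the seven values above, in that order;
        # remap or skip columns here to match your vendor's POS format.
        name, lon, lat, height, omega, phi, kappa = row[:7]
        writer.writerow([name, lon, lat, height, omega, phi, kappa])
```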
Create a Reality mapping workspace
To create a digital aerial imagery workspace for a project using the workflow wizard, complete the following steps:
- On the Imagery tab, click New Workspace.
- On the Workspace Configuration page, provide a name for the workspace.
- Ensure that Workspace Type is set to Reality Mapping.
- From the Sensor Data Type drop-down list, choose Aerial - Digital.
The Scenario Type and overlap settings are updated automatically by the system.
- Set the Scenario Type option to Oblique if you're working with oblique imagery or a combination of oblique and nadir imagery. The most common type is Nadir.
- To create a digital surface model (DSM), digital terrain model (DTM), True Ortho, or DSM Mesh, use nadir imagery with Scenario Type set to Nadir.
- To create a Point Cloud or 3D Mesh, use oblique imagery, or a combination of oblique and nadir imagery, with Scenario Type set to Oblique.
- Adjust the overlap percentages if required, or accept the default values.
- Optionally, set the Parallel Processing Factor value of the workspace. The default value of 50% means that half of the total CPU cores will be used to support Reality mapping processing.
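As a rough illustration of what that percentage means, a 50 percent factor on a 16-core machine corresponds to about 8 cores. The sketch below mirrors that arithmetic; the exact rounding ArcGIS Pro applies internally may differ.

```python
import os

factor = 0.5  # Parallel Processing Factor of 50%
cores_available = os.cpu_count() or 1
# Illustrative arithmetic only; the rounding ArcGIS Pro applies internally may differ.
cores_used = max(1, int(cores_available * factor))
print(f"Using about {cores_used} of {cores_available} cores")
```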
- Optionally, check the Track adjustment restore points check box to be able to revert your workspace to a previous state.
- Optionally, check the Import and use existing image collection check box to import and use an existing mosaic dataset. See Create a Reality mapping workspace from a mosaic dataset for more information.
- Accept all the other default values and click Next.
- On the Image Collection page, select Generic Frame Camera for Sensor Type.
- To specify the Exterior Orientation File/Esri Frames Table value, click the Browse button, and browse to and select the frames table that is associated with the project.
The frames table specifies the parameters used to compute the exterior orientation of the imagery. It is a .csv file generated by the Build Frames & Cameras Tables tool.
If you input an exterior orientation file that is not an Esri frames table, such as a POS file, the Frames page appears so you can input field mapping information.
The Spatial Reference parameter value is automatically set to the spatial reference of the imagery perspective center coordinates defined in the Esri frames table.
- If the Spatial Reference parameter is not specified, click Spatial Reference and set the spatial reference to the same coordinate system as that of the imagery perspective center coordinates.
The spatial reference of the imagery perspective center coordinates is usually supplied by the imagery provider.
- Specify the Cameras table file.
This is the .csv file that contains the camera configuration information, generated using the Build Frames & Cameras Tables tool.
If you use the Add button to add a camera, or the Import button to import a camera file that does not conform to the camera table schema generated by the Build Frames & Cameras Tables tool, the Add New Camera page appears, where you can enter the camera information. Click the Calibration tab on the Add New Camera page to provide the camera information, which is typically available from the manufacturer.
- On the Distortion tab, provide the camera distortion information if available.
This type of information is often provided in the camera calibration report when the mapping camera is calibrated. You can use the Export button to store the camera calibration parameters as an Esri cameras table for future use.
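If you prefer to assemble a cameras table outside the wizard, a sketch like the following writes a single-camera CSV from values taken out of a hypothetical calibration report. The field names and values shown are placeholders for illustration; the required fields and units are documented in the Cameras table schema topic.

```python
import csv

cameras_path = "project_cameras.csv"  # hypothetical output file

# Placeholder calibration values standing in for a vendor calibration report;
# check the Cameras table schema for the exact field names and expected units.
camera = {
    "CameraID": "CAM_1",
    "FocalLength": 100.5,   # from the calibration report
    "PixelSize": 4.6,
    "PrincipalX": 0.012,    # principal point offset
    "PrincipalY": -0.008,
    "K1": 1.2e-5,           # radial distortion coefficients
    "K2": -3.4e-9,
    "K3": 0.0,
}

with open(cameras_path, "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=list(camera))
    writer.writeheader()
    writer.writerow(camera)
```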
- When you're finished, click Next.
- On the Data Loader Options page, define the output workspace characteristics.
- Choose an Elevation Source value. Creating a Reality mapping workspace from aerial imagery requires elevation data.
The wizard provides an elevation service with a 90-meter resolution as the default DEM value; however, this only allows for coarse orthorectification. You can browse to a different DEM service or file.
- If you have internet access, use the default elevation service for the DEM parameter, and choose Average Elevation from DEM for the Elevation Source value.
- If you do not have internet access, provide a DEM file covering the project area, and choose Average Elevation for the Elevation Source value.
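If you choose Average Elevation and want a quick estimate of a reasonable z-value for the project area, a small offline sketch like the following computes the mean elevation of a local DEM file. It uses the open-source rasterio package, which is not part of the wizard, and the file path is hypothetical.

```python
import rasterio  # open-source raster library; not part of the Reality mapping wizard

dem_path = "project_area_dem.tif"  # hypothetical DEM file covering the project area

with rasterio.open(dem_path) as src:
    dem = src.read(1, masked=True)  # read band 1 with NoData cells masked out

print(f"Average elevation: {dem.mean():.1f} (in the DEM's vertical units)")
```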
- In the Advanced Options section, check the Estimate Statistics check box to estimate the statistics for the output workspace.
- Optionally, edit the Band Combination parameters to reorder the band combination from the default order.
- Choose a Pre-processing option, Calculate Statistics or Build Pyramids, to perform on the data before you create the workspace.
- Click Finish to create the workspace.
Once the Reality mapping workspace is created, the image collection is loaded in the workspace and displayed on the map. You can now perform adjustments and generate Reality mapping products.
Related topics
- Reality mapping in ArcGIS Pro
- Add ground control points to a Reality mapping workspace
- Manage tie points in a Reality mapping workspace
- Perform a Reality mapping block adjustment
- Generate multiple products using ArcGIS Reality for ArcGIS Pro
- Introduction to the ArcGIS Reality for ArcGIS Pro extension
- Frequently asked questions