Reality mapping in ArcGIS Pro

Available with Standard or Advanced license.

Available for an ArcGIS organization with the ArcGIS Reality license.

ArcGIS Reality for ArcGIS Pro is an ArcGIS Pro extension that expands ortho mapping capabilities with high-fidelity product generation. Using drone and digital aerial imagery, you can create full-resolution DSMs, True Orthos, 2D DSM meshes, dense 3D point clouds, and photo-realistic 3D meshes. From satellite imagery, you can generate a DSM and 2D DSM meshes. Wizards guide you through the photogrammetric workflow: creating a workspace for your imagery type, performing a block adjustment, and generating products.

In ArcGIS Pro, you can photogrammetrically correct imagery to remove geometric distortion induced by the sensor, the platform, and terrain displacement; edge match; and color balance the resulting orthoimagery. By creating a Reality mapping workspace from drone or digital aerial sensor data, the following derived products can be created:

  • True Ortho stored in a file format such as .tif or .crf
  • Digital surface model (DSM) stored as .tif or .crf files
  • 2D DSM mesh stored as .slpk or .obj files, or as 3D tiles
  • 3D point cloud stored as .las files
  • 3D mesh stored as .slpk or .obj files, or as 3D tiles

By creating a Reality mapping workspace from satellite imagery, the following derived products can be created:

  • Digital surface models (DSM) stored as .tif or .crf files
  • 2D DSM mesh stored as .slpk or .obj files, or as 3D tiles

Reality mapping products
Products generated in ArcGIS Reality for ArcGIS Pro. Imagery courtesy of LeadAir, Inc.

In addition to creating derived products, you can use the orthorectified mosaic dataset to support additional processes or share it as a dynamic image service or a cached image service.

Get started with ArcGIS Reality for ArcGIS Pro

Reality mapping with ArcGIS Pro requires three main steps:

  • Create a Reality mapping workspace.
  • Perform a block adjustment to correct geometric distortions in the imagery.
  • Generate Reality mapping products.

Reality mapping project overview

Create a Reality mapping workspace

First, create a project. Then create a Reality mapping workspace, which is a subproject in the ArcGIS Pro project. The workspace manages all the Reality mapping resources and opens a map view with a Reality Mapping tab and a Reality Mapping view in the Contents pane for the Reality mapping workflow. The Reality Mapping tab provides tools and wizards for bundled block adjustment and the generation of Reality mapping products. The Reality Mapping view in the Contents pane manages and visualizes the layers of the related data in the Reality mapping workspace.

The Reality mapping workspace, and the data stored in it, depend on the source data. You can create a Reality mapping workspace for digital aerial, drone, and satellite imagery. In the Reality Mapping wizard, you can provide a folder containing images, or you can use an existing mosaic dataset.

When the Reality mapping workspace is created, a Reality Mapping category is created in the Catalog pane. Expand this to see the Reality mapping workspace you created, which includes an Imagery folder containing the source imagery and a Products folder where the Reality mapping products are stored.

The Contents pane contains a variety of layers associated with the Reality mapping workspace, such as Control Points, Solution Points, and Data Products. These layers are empty until you initiate the proper steps in the Reality mapping workflow. For example, the Control Points layer is populated after you perform a block adjustment, and the Data Products layer is populated after you complete the Reality mapping product generation steps.

Perform a block adjustment

After creating the Reality mapping workspace, you must perform a block adjustment using the tools in the Adjust and Refine groups. A block adjustment is a technique used in photogrammetry in which a transformation is computed for an area (a block) based on the photogrammetric relationship between overlapping images, ground control points (GCPs), a camera model, and elevation data.

Applying a block adjustment is an important step in the orthorectification process, and the quality of the Reality mapping products depends on the accuracy of the tie points and GCPs used in the adjustment. Overlapping imagery is required, with the best results produced by 80 percent forward overlap within a strip of imagery and 80 percent lateral overlap between strips. The list below gives recommended forward and lateral overlap percentages for various terrain types, followed by a short example of the exposure spacing these percentages imply.

  • 60/30 overlap (forward/lateral): the minimum overlap required. Suitable for flat or open terrain.
  • 80/30 overlap: the recommended minimum overlap. Suitable for flat or open terrain.
  • 80/60 overlap: reduces occlusion between strips. Suitable for built-up areas.
  • 80/80 overlap: eliminates occlusion between strips. Suitable for built-up areas with high-rise structures.
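
The relationship between overlap percentages and flight planning is simple arithmetic: the spacing between exposures along a strip and the spacing between adjacent strips both follow from the image ground footprint. The sketch below is an illustrative Python calculation only; the footprint values are hypothetical, and this is general photogrammetric flight-planning math rather than a tool or setting in ArcGIS Pro.

```python
# Illustrative exposure-spacing arithmetic for a forward/lateral overlap spec.
# Footprint values are hypothetical; substitute your sensor's ground footprint.

def exposure_spacing(footprint_along_m, footprint_across_m,
                     forward_overlap_pct, lateral_overlap_pct):
    """Return (photo base along the strip, spacing between strips) in meters."""
    base = footprint_along_m * (1 - forward_overlap_pct / 100.0)
    strip_spacing = footprint_across_m * (1 - lateral_overlap_pct / 100.0)
    return base, strip_spacing

# 80/80 overlap with a hypothetical 450 m x 300 m ground footprint:
base, spacing = exposure_spacing(450.0, 300.0, 80, 80)
print(f"Photo base: {base:.0f} m, strip spacing: {spacing:.0f} m")  # 90 m, 60 m
```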

The block adjustment is computed and applied to imagery using the following data and techniques (a scripted sketch of an equivalent geoprocessing sequence follows this list):

  • Tie points—Minimize the misalignment between images by tying overlapping images to each other based on coincident image features. These features, or tie points, are derived using automatic image matching techniques.
  • GCPs—Georeference the images to the ground using ground reference data. GCPs are often collected using ground survey techniques, and these survey points must be visible in the source imagery. Alternatively, secondary GCPs can be derived from an existing orthoimage basemap.
  • Triangulation—Compute the image-to-map projection transformation by minimizing and distributing the errors between control points and images.
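
The wizards drive the adjustment interactively, but a comparable sequence can also be scripted against a mosaic dataset with the Ortho Mapping geoprocessing tools in the Data Management toolbox. The following is a minimal sketch under that assumption; the paths are hypothetical, and the Frame transformation is shown only as a typical choice for frame cameras such as drone sensors.

```python
import arcpy

# Hypothetical paths; substitute your own mosaic dataset and output tables.
mosaic = r"C:\data\project.gdb\drone_imagery"
tie_points = r"C:\data\project.gdb\tie_points"
solution = r"C:\data\project.gdb\solution_table"

# Derive tie points from coincident features in overlapping images.
arcpy.management.ComputeTiePoints(mosaic, tie_points)

# Compute the adjustment. "Frame" is a typical transformation for frame
# cameras; choose the transformation that matches your sensor model.
arcpy.management.ComputeBlockAdjustment(mosaic, tie_points, "Frame", solution)

# Apply the computed solution back to the mosaic dataset.
arcpy.management.ApplyBlockAdjustment(mosaic, "ADJUST", solution)
```

In the Reality mapping workspace, GCPs measured with the GCP Manager play the role that control points appended to the tie point table would play in a scripted form such as this.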

The block adjustment tools are available in the Adjust and Refine groups on the Reality Mapping tab. You can modify the adjustment options, run the Adjust tool, add GCPs, and edit tie points. To import, modify, or delete GCPs, use the Manage GCPs tools. To edit or add tie points, use the Manage Tie Points tools.

Block adjustment tools

Note:

The control points, solution points, block adjustment results, and other adjustment data are stored in the workspace for the project. This information is accessed by the relevant tools as you progress through the workflow to produce Reality mapping products. For example, the adjustment transform associated with each image is used when generating the True Ortho output.

Accuracy assessment

Once the block adjustment is performed, you can assess accuracy by reviewing the GCP residuals in the GCP Manager table. GCP residual information is listed in the dX, dY, and dZ fields, which represent deviations of the measured positions from their true ground coordinates in x, y, and z directions. You can sort residuals in ascending or descending order by clicking the field title. Higher-than-expected residual values typically indicate an error in the surveyed ground coordinate, the recorded coordinate, or the measured image position. It is recommended that you review, remeasure, and readjust the measured positions of GCPs with high residuals to achieve acceptable accuracy. If no improvement in the residual is observed, you can change the point status by right-clicking the GCP label and clicking Check Point. Rerun the adjustment after changing the point status to incorporate the change in the adjustment process.

Like GCPs, check points have known ground coordinates and are measured on features that are visible in multiple overlapping images. However, check points are used to measure the accuracy of the adjustment results rather than to control the block adjustment process. For each check point, the distance between its known ground location and the location of the corresponding pixel after adjustment is used to calculate the RMS error. To further refine adjustment accuracy, add or remove GCPs or modify the adjustment options. Once the adjustment results are within the project accuracy requirements, derived products can be generated. The overall accuracy of the adjusted block is provided in the GCP Manager table, and additional accuracy statistics are provided in the adjustment report.
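
For reference, the RMS error reported for check points is the root mean square of the individual residual components. The snippet below shows that arithmetic on a few hypothetical dX, dY, and dZ values; it is purely illustrative and does not read the GCP Manager table.

```python
import math

# Hypothetical check point residuals (dX, dY, dZ) in meters.
residuals = [
    (0.04, -0.03, 0.06),
    (-0.02, 0.05, -0.04),
    (0.03, 0.01, 0.07),
]

def rmse(values):
    """Root mean square of a sequence of residual components."""
    return math.sqrt(sum(v * v for v in values) / len(values))

rmse_x = rmse([dx for dx, _, _ in residuals])
rmse_y = rmse([dy for _, dy, _ in residuals])
rmse_z = rmse([dz for _, _, dz in residuals])
rmse_xy = math.sqrt(rmse_x ** 2 + rmse_y ** 2)  # horizontal RMS error

print(f"RMSE x: {rmse_x:.3f} m  y: {rmse_y:.3f} m  z: {rmse_z:.3f} m  xy: {rmse_xy:.3f} m")
```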

Generate Reality mapping products

Once the imagery is adjusted, you can generate Reality mapping products using the tools in the Product group. You can create a DSM, True Ortho, DSM Mesh, Point Cloud, or 3D Mesh product. Each product generation tool opens a wizard that guides you through the process of creating the specified Reality mapping product.

Product generation tools

  • Multiple Products—The Multiple Products wizard guides you through the workflow to create one or more Reality mapping products in a single process. Available products include DSM, True Ortho, DSM Mesh, Point Cloud, and high-fidelity 3D Mesh. You can adjust parameters such as format and output type for each product. If 3D products such as Point Cloud or 3D Mesh are required, it is recommended that you do not adjust the quality and pixel size settings, as changing them may adversely affect product quality. All derived products are stored in the appropriate folders under the Reality Mapping category in the Catalog pane.
  • DSM—A DSM is a first surface elevation product that includes the elevation of objects on the surface such as trees and buildings. Do not use it for image orthorectification unless the source imagery either is nadir-looking or does not contain above-ground features. A DSM is derived from overlapping pairs of airborne or satellite imagery using photogrammetric methods.

    The DSM wizard guides you through the elevation generation process. Parameters such as quality and pixel size are set in the Shared Advanced Settings section of the wizard and are automatically defined by the system based on the sensor type being processed and other parameters. It is recommended that you do not change default pixel size and quality settings as it may adversely impact processing duration or the quality of the 3D products generated. Following product generation, the DSM product is stored in the DEMs folder under the Reality Mapping category in the Catalog pane.

  • True Ortho—A True Ortho is an orthorectified image in which surface and above-surface elements are orthogonally projected, so it does not contain building or feature lean. In the True Ortho wizard, you set parameters that define how the True Ortho product will be created. To create a True Ortho image, a DSM derived from the adjusted block of images is required. As a result, a DSM is generated as part of the True Ortho process even if a DSM was not previously selected as a product. The output True Ortho image is stored in the Orthos folder under the Reality Mapping category in the Catalog pane.
  • DSM Mesh—A DSM mesh is a 2.5D textured model of the project area in which the adjusted images are draped on a triangulated irregular network (TIN) version of the DSM extracted from the overlapping images in the adjusted block. The DSM Mesh wizard simplifies the creation of the DSM Mesh product by providing a streamlined workflow with preconfigured parameters for creating an acceptable output. The DSM Mesh product is stored in the Meshes folder under the Reality Mapping category in the Catalog pane.
  • Point Cloud—A point cloud is a model of the project area defined by high-density, RGB-colorized 3D points that are extracted from the overlapping images in the block. The Point Cloud wizard enables the generation of a high-density point cloud for the project area (see the scripted sketch after this list). The generated point cloud is stored in the Point Clouds folder under the Reality Mapping category in the Catalog pane.
  • 3D Mesh—A 3D mesh is a 3D textured model of the project area in which the ground and above-ground feature facades are densely and accurately reconstructed. The 3D mesh can be viewed from any angle to get a realistic, accurate depiction of the project area. The 3D Mesh wizard simplifies the 3D mesh workflow with predefined, sensor-based parameters for high-quality mesh generation. The generated 3D mesh is stored in the Meshes folder under the Reality Mapping category in the Catalog pane.
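
The wizards are the supported Reality mapping path for these products, but ArcGIS Pro also ships scriptable Ortho Mapping geoprocessing tools that generate comparable point cloud and DSM outputs from an adjusted mosaic dataset. The sketch below is a rough scripted analogue under that assumption; the paths, matching method, cell size, and interpolation keyword are illustrative choices, not settings taken from the Reality Mapping wizards.

```python
import arcpy

# Hypothetical inputs and outputs; the mosaic dataset is assumed to be
# block adjusted already.
mosaic = r"C:\data\project.gdb\drone_imagery"
pc_folder = r"C:\data\products\point_cloud"
dsm = r"C:\data\products\dsm.crf"

# Generate a dense, colorized point cloud from the overlapping images.
# "SGM" (semi-global matching) is one of the tool's matching methods.
arcpy.management.GeneratePointCloud(mosaic, "SGM", pc_folder, "project_pc")

# Interpolate a DSM raster from the point cloud. The 0.1 m cell size and
# the "TRIANGULATION" method keyword are assumptions; confirm the valid
# keywords in the tool reference before use.
arcpy.management.InterpolateFromPointCloud(pc_folder, dsm, 0.1, "TRIANGULATION")
```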

Default parameters for the creation of each derived product fall into two categories: global settings and local, product-specific settings.

  • Global settings—These settings apply to all Reality mapping products. To access global settings, click the Shared Advanced Settings option in the various product wizards.
  • Local settings—Local settings are accessed in the product wizards and are product specific, such as Output Type, Format, and Resampling.

Shared Advanced Settings option

An important aspect of the Reality mapping product generation process is parameter definition. Various parameters can be set to define the extent, density, and resolution of the product to be generated. The Shared Advanced Settings option, accessed from the product wizards, displays the Advanced Product Settings window, which contains parameters that affect the quality of all products generated in a Reality mapping session. The Advanced Product Settings window is divided into two sections: General and 2D Products.

General

The General settings are where you define the quality, project, and product characteristics. These settings impact both 2D and 3D product generation. Each setting, including recommendations, is described below.

  • Quality—Relates to the resolution of the image that will be used by the app to create the derived products. The resolution of the input image impacts the density and resolution of the point cloud and mesh products to be created. The quality setting is automatically set by the Reality mapping app using information such as sensor type, scenario setting, and percentage overlap. The Quality setting options are listed below.
    • Ultra—Produces products with the highest density and maximum resolution. This setting is primarily used when processing standard digital aerial projects. Processing imagery using this option is the most resource-intensive and time consuming. As a result, it is recommended that you use a workstation dedicated to image processing, not a shared resource or one used for other tasks.
    • High—Produces products down-sampled to a pixel size of two times the source resolution. Compared to the Ultra option, the density of the generated point cloud decreases by approximately a factor of four for digital aerial imagery and a factor of 2.5 for drone imagery. Processing duration and storage requirements for derived products are also lower than with the Ultra option.
  • Scenario—Sets the flight configuration of the images to be processed. The Scenario setting options are listed below.
    • Drone—Recommended for drone imagery projects, which may contain a combination of nadir and oblique imagery.
    • Nadir—Recommended for images captured with the optical axis of the sensor being perpendicular to the ground. The Nadir option is recommended when processing standard digital aerial images captured using a nadir flight configuration. If the aerial images to be processed are composed of both nadir and oblique imagery, it is recommended that you process the nadir images separately when generating derived products. This ensures optimum processing times.
    • Oblique—Recommended for processing imagery captured with an inclined optical axis. It is used to support the generation of 3D products such as point clouds and 3D meshes.
  • Pixel Size—Relates to the cell size of raster products to be generated such as True Ortho and DSM. This setting does not impact the density or resolution of the point cloud or mesh products. Derived product pixel size can be manually set or automatically determined by the application.

    Auto determination is the default setting. The auto-determined product pixel size is based on factors such as the source image resolution and the quality and scenario settings, and it is set to achieve optimal performance and product quality. It is recommended that you change the pixel size value only when a specific product pixel size is needed to satisfy project requirements. To manually set the output product pixel size, complete the following steps:

    • Uncheck the Auto pixel size check box.
    • Choose Meter from the Pixel Size drop-down list.
    • Type the required pixel size value in the input box.

    Note:

    To create a lower-resolution product, use the Quality setting rather than the Pixel Size setting. Each quality option below Ultra automatically down-samples the imagery, doubling the pixel size with each step down. This approach lowers the generated point density while linearly scaling the precision in depth, ensuring optimal performance and a faster workflow. A small worked example of the resulting pixel sizes appears after this settings list.

    The tables below show the potential impact of various Quality and Scenario options on creating derived products, at different resolutions, for different types of sensors.

    Product quality and processing performance settings for drone imagery

    Product quality and processing performance settings for digital aerial imagery

    The tables above show relative comparisons of different product settings on quality and performance. The Processing Duration relative comparison will differ from actual processing performance based on your computer resources, system setup, networking, and dataset characteristics such as size, number of bands, storage, and other considerations.

  • Product Boundary—Inputs a shapefile to define the extent of the output product and reduce processing duration. It is recommended that you provide a product boundary before creating derived products.
  • Correction Features—Inputs a shapefile used to correct errors in the DSM, such as regularizing building shapes. This improves the output quality of the True Ortho and mesh products.
  • Waterbody Features—Inputs 3D shapefiles used to hydrologically constrain features, such as lakes and wide rivers, that fall within the project area. If water features exist within the project area, add water body features to improve the quality of derived products.

    In the absence of a water body features shapefile, lake surfaces may appear highly irregular and adversely impact product quality. It is important that the height of water body features closely matches the terrain height of the features being constrained in the project. Discrepancies in height will introduce distortions such as image stretching in the output product. Accurate water body feature polygons can be derived using the Stereo Mapping app in ArcGIS Image Analyst.
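
Returning to the Pixel Size note above, the doubling per quality step is easy to tabulate. The snippet below assumes, as that note states, that each quality step down from Ultra doubles the product pixel size; the 5 cm source ground sample distance is a hypothetical value used only for illustration.

```python
# Effective product pixel size per quality step, assuming each step down
# from Ultra doubles the pixel size (per the Pixel Size note above).
source_gsd_m = 0.05  # hypothetical 5 cm source ground sample distance

for steps_down in range(3):
    pixel_size_cm = source_gsd_m * (2 ** steps_down) * 100
    print(f"{steps_down} step(s) down from Ultra: {pixel_size_cm:.0f} cm pixel size")
```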

2D Products

The 2D Products section lists various DSM quality assurance layers that can be exported as .tif rasters along with the DSM product.

  • Export Binary Mask Image for Non-Interpolated Pixels—In the mask, NoData and interpolated pixels are represented in a dark color while all other pixels with height measurements are depicted in a light color.
  • Export Distance Map to Next Non-Interpolated Pixels—Generate and export a raster in which the Euclidean distance to the closest pixel containing a height value is assigned to pixels that do not contain a measurement.
  • Export Map with Stereo Model Count of Final Point—Generate and export a raster that stores the number of stereo models from which the DSM was derived.
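
To inspect layers like these outside ArcGIS Pro, the two simpler ones can be reproduced from a DSM array with NumPy and SciPy, as in the minimal sketch below. It mirrors the layer definitions above on a hypothetical array; it is not the tool's internal implementation.

```python
import numpy as np
from scipy import ndimage

# Hypothetical DSM tile: NaN marks NoData or interpolated cells that carry
# no direct height measurement.
dsm = np.array([
    [12.1, 12.3, np.nan, 12.8],
    [12.0, np.nan, np.nan, 13.0],
    [11.9, 12.2, 12.6, 13.1],
])

# Binary mask: True where a height measurement exists.
measured = ~np.isnan(dsm)

# Distance map: for each non-measured cell, the Euclidean distance (in
# cells) to the nearest measured cell. distance_transform_edt returns the
# distance to the nearest zero-valued cell, so pass the inverted mask.
distance_to_measured = ndimage.distance_transform_edt(~measured)

print(measured.astype(int))
print(np.round(distance_to_measured, 2))
```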
