An oriented imagery dataset is created in a geodatabase to manage a collection of oriented images. The dataset defines both collection-wide properties, such as the elevation source, and image-specific metadata, such as the camera location and orientation.
When added to a map, it is visualized as an oriented imagery layer.
Create and publish an oriented imagery dataset
Use the following geoprocessing tools in the Oriented Imagery toolbox to author an oriented imagery dataset:
- Create an Oriented Imagery Dataset creates an empty oriented imagery dataset in a geodatabase.
- Add Images to Oriented Imagery Dataset populates the oriented imagery dataset with images and corresponding metadata. Input sources can be a file, folder, table, list of image paths, or a point feature layer. If the input source is a file, folder, or list of image paths, the tool reads image metadata directly from the EXIF and XMP metadata in .jpeg files.
- Build Oriented Imagery Footprint is an optional tool that generates an additional reference feature layer showing the areas on the map that appear in the images in the oriented imagery dataset.
You can then publish an oriented imagery layer (and optionally the oriented imagery footprint) to ArcGIS Online or ArcGIS Enterprise 11.2 using the standard sharing workflow. To include the oriented imagery footprint layer when you publish, select both the oriented imagery footprint layer and oriented imagery layer before selecting Share As Web Layer.
Image formats and storage
The oriented imagery dataset stores the image location path in its attribute table. The images can be in local storage, network storage, or publicly accessible cloud storage. The oriented imagery dataset supports the JPG, JPEG, and TIF image formats. If the images are in cloud storage, the MRF image format is also supported.
If you plan to publish an oriented imagery dataset to ArcGIS Online or ArcGIS Enterprise, the images must be in publicly accessible cloud storage.
Camera position and orientation
The Shape field in the attribute table defines the location of the camera in the dataset coordinate system. The camera orientation is described in terms of Camera Heading, Camera Pitch, and Camera Roll. These angles describe the camera orientation relative to a local projected coordinate system and refer to the ray running from the camera position through the center of the image.
The camera orientations are as follows:
- The initial camera orientation is with the lens aimed at nadir (negative z-axis), with the top of the camera (columns of pixels) pointed north and rows of pixels in the sensor aligned with the x-axis of the coordinate system.
- The first rotation (Camera Heading) is around the z-axis (optical axis of the lens), positive rotations clockwise (left-hand rule) from north.
- The second rotation (Camera Pitch) is around the x-axis of the camera (rows of pixels), positive counterclockwise (right-hand rule) starting at nadir.
- The final rotation (Camera Roll) is a second rotation around the z-axis of the camera, positive clockwise (left-hand rule).
Assuming you are standing at the camera location looking north, rotate clockwise (heading), tilt the camera up (pitch), and then turn it about the lens axis (roll) to point in the specified direction.
See the following example orientations:
- The camera pointing down with the rows of pixels going from west to east would have the orientation 0,0,0.
- Rotating the camera 90 degrees so that the pixels go from north to south would be 90,0,0.
- Rotating the camera up to the horizon would give the orientation 90,90,0.
- Rotating the camera counterclockwise by 20 degrees would result in an orientation of 90,90,20.
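The example orientations above can be checked with a short sketch. Assuming x = east, y = north, z = up (the text fixes only north and nadir, so the axis labels here are an assumption), the viewing direction follows from heading and pitch alone; roll spins the image about the lens axis and does not move the ray:

```python
import math

def view_direction(heading_deg, pitch_deg):
    """Unit viewing direction of the camera, assuming x = east, y = north,
    z = up. Heading is measured clockwise from north; pitch is measured
    from nadir (0 = straight down, 90 = horizon, 180 = straight up)."""
    h = math.radians(heading_deg)
    p = math.radians(pitch_deg)
    return (math.sin(h) * math.sin(p),   # east component
            math.cos(h) * math.sin(p),   # north component
            -math.cos(p))                # up component (-1 = straight down)

# Orientation 0,0,0 looks straight down; 90,90,0 looks east at the horizon.
```

Note that heading alone (for example, 90,0,0 versus 0,0,0) does not change the ray either; it only changes which way the rows of pixels run while the lens still points at nadir.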
In most applications, the roll angle is 0. The roll angle is used to indicate that the camera body is rotated around the lens axis and is required to determine the correct pixel-to-image relationship.
In some cases, the image is rotated with respect to the camera. Consider taking a picture with most digital cameras or mobile phones: even if you rotate the camera, the resulting image is still displayed with up at the top. This additional rotation with respect to the camera is recorded in the Image Rotation field. The horizontal field of view (HFOV) and vertical field of view (VFOV) should be those of the camera and should not change based on the roll angle.
Oriented imagery categories
The imagery category specifies the type of images that are added to the dataset and defines the default oriented imagery properties of the dataset. These properties can be changed later using the Update Oriented Imagery Dataset Properties tool. The categories and their associated properties are as follows:
- Horizontal—Images where the exposure is parallel to the ground and looking to the horizon.
- Oblique—Images where the exposure is at an angle to the ground, typically at about 45 degrees so sides of objects can be seen.
- Nadir—Images where the exposure is perpendicular to the ground and looking straight down. Only the top of the objects can be seen.
- 360—Images taken using specialized cameras that provide 360-degree spherical surround views.
- Inspection—Close-up imagery of assets (less than 5 meters from camera location).
Each category sets default values for the following properties: camera pitch (degrees), camera roll (degrees), camera height (m), near distance (m), far distance (m), and maximum distance (m).
The oriented imagery viewer in ArcGIS Pro 3.2 does not support visualizing 360-degree imagery.
Oriented imagery attribute table
An attribute table is created when you create an oriented imagery dataset; some fields always appear by default. The fields are populated when images are added, and more fields can be added to store additional metadata. The metadata enables efficient search, allowing you to quickly find and display images that cover a site of interest, and therefore includes a number of approximations. An optional CameraOrientation field improves the accuracy of image-to-ground and ground-to-image transformations and also supports image orientations defined using omega, phi, kappa; yaw, pitch, roll; or a local tangent plane.
The attribute table supports the following fields:
- ObjectID—The ObjectID field is maintained by ArcGIS and provides a unique ID for each row in a table.
- Shape—The location of the camera.
- Name (optional)—An alias name to identify the image.
- ImagePath—The path to the image file. This can be a local path or a web-accessible URL. Images can be in JPEG, JPG, or TIF format. For images stored in the cloud, the MRF format is also supported.
- AcquisitionDate (optional)—The date when the image was collected. The time of the image collection can also be included.
- CameraHeading (optional)—The camera orientation of the first rotation around the z-axis of the camera. The value is in degrees. The heading values are measured in the positive clockwise direction where north is defined as 0 degrees. -999 is used when the orientation is unknown.
- CameraPitch (optional)—The camera orientation of the second rotation, around the x-axis of the camera, in the positive counterclockwise direction. The value is in degrees. The pitch is 0 degrees when the camera faces straight down to the ground. Valid values range from 0 to 180 degrees, where 90 degrees is a camera facing the horizon and 180 degrees is a camera facing straight up.
- CameraRoll (optional)—The camera orientation of the final rotation, around the z-axis of the camera, in the positive clockwise direction. The value is in degrees. Valid values range from -90 to 90.
- CameraHeight (optional)—The height of the camera above the ground (elevation source). The units are in meters. Camera height is used to determine the visible extent of the image; large values will result in a greater view extent. Values should not be less than 0.
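As a hedged illustration of why camera height drives the view extent: for a camera at height h looking at pitch p (measured up from nadir), the central ray meets flat ground at a horizontal distance of about h·tan(p), which grows quickly as the pitch approaches the horizon. A minimal sketch, assuming flat ground:

```python
import math

def ground_distance(camera_height_m, pitch_deg):
    """Horizontal distance at which the central ray meets flat ground.
    Assumes 0 <= pitch < 90 (ray pointing below the horizon)."""
    if not 0 <= pitch_deg < 90:
        raise ValueError("central ray only reaches the ground for 0 <= pitch < 90")
    return camera_height_m * math.tan(math.radians(pitch_deg))

# A camera 10 m up, pitched 45 degrees from nadir, sees its image center
# 10 m away; at 80 degrees the same camera sees roughly 56.7 m away.
```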
- HorizontalFieldOfView (optional)—The camera’s scope in a horizontal direction. The units are in degrees and valid values range from 0 to 360.
- VerticalFieldOfView (optional)—The camera’s scope in the vertical direction. The units are in degrees and valid values range from 0 to 180.
- NearDistance (optional)—The nearest usable distance of the imagery from the camera position. The units are in meters.
- FarDistance (optional)—The farthest usable distance of the imagery from the camera position. FarDistance is used to determine the extent of the image footprint, which is used to determine whether an image is returned when you click the map, and for creating optional footprint features. The units are in meters. The far distance must always be greater than 0.
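The role of NearDistance and FarDistance can be sketched as a simplified 2D footprint: a wedge centered on the camera heading, spanning the horizontal field of view between the near and far distances. This is an illustration of the idea only, not the geometry produced by the Build Oriented Imagery Footprint tool:

```python
import math

def footprint_corners(x, y, heading_deg, hfov_deg, near_m, far_m):
    """Four corner points (x, y) of a simplified wedge-shaped footprint,
    assuming x = east and y = north. Illustrative sketch only."""
    if far_m <= 0 or far_m <= near_m:
        raise ValueError("far distance must be positive and beyond near distance")
    corners = []
    for dist in (near_m, far_m):
        for edge in (-hfov_deg / 2, hfov_deg / 2):
            a = math.radians(heading_deg + edge)  # compass angle of the wedge edge
            corners.append((x + dist * math.sin(a), y + dist * math.cos(a)))
    return corners  # [near-left, near-right, far-left, far-right]

# A camera at the origin facing north with a 60-degree HFOV,
# usable from 2 m to 50 m in front of the lens:
wedge = footprint_corners(0, 0, 0, 60, 2, 50)
```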
- OrientedImageryType (optional)—Defines the imagery type: Horizontal, Oblique, Nadir, 360, or Inspection.
- ImageRotation (optional)—The orientation of the camera in degrees relative to the scene when the image was captured. The rotation is added in addition to CameraRoll. The value can range from -360 to 360.
- CameraOrientation (optional)—Stores detailed camera orientation parameters as a pipe-separated string. The field provides support for more accurate image-to-ground and ground-to-image transformations.
- ElevationSource (optional)—The elevation source, as a JSON string, used to compute ground-to-image transformations. The elevation source can be a digital elevation model (DEM) or a constant value. A dynamic image service or a tile image service can be used as the DEM. The VerticalMeasurementUnit value is used as the unit for a constant elevation.
For example, if a DEM is used, the following is the elevation source:
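The JSON example itself appears to be missing here. As a hedged sketch only — the URL is a placeholder and the exact key names are assumptions, not the documented schema — a DEM-based elevation source might look like:

```json
{
  "url": "https://myserver.example.com/arcgis/rest/services/MyDEM/ImageServer"
}
```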
A rasterFunction can be provided if the DEM is a dynamic image service, and a level of detail if the DEM is a tile image service.
If constant elevation is used, the following is the elevation source:
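The constant-elevation example also appears to be missing. Again as a hedged sketch (the key name is an assumption), it might take a shape like the following, with the value interpreted in the VerticalMeasurementUnit:

```json
{
  "constantElevation": 150
}
```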