Available with Advanced license.
Available with Image Analyst license.
Interactive object detection is a tool that uses a trained deep learning model to recognize objects displayed in a map or scene. Each detected object is represented by a point feature with a location in the coordinate system of the map. Interactive object detection also stores attributes describing the orientation and extent of the object, as well as a confidence value indicating how likely the object is a match. The tool is designed for on-demand detection of objects in the current map or scene.
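Because the output is a standard point feature layer, you can inspect the detections with ordinary arcpy tools. The following minimal sketch prints the location and confidence of each detection; the layer name and Confidence field are assumptions, and the actual names depend on the Output parameters you choose.

```python
import arcpy

# Hypothetical layer and field names; substitute the names shown in the
# tool's Output parameters and in the output layer's attribute table.
detections = "Detected Objects"
fields = ["SHAPE@XY", "Confidence"]

with arcpy.da.SearchCursor(detections, fields) as cursor:
    for (x, y), confidence in cursor:
        print(f"Detection at ({x:.2f}, {y:.2f}), confidence {confidence}")
```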
To detect objects, on the Analysis tab, in the Workflows group, expand the Exploratory 3D Analysis gallery, and click Object Detection to open the Exploratory Analysis pane and activate the Object Detection tool. Choose a model to define the detection parameters, review the parameters, and select a creation method to use the tool. This topic explains how to use each creation method.
The Deep Learning Libraries must be installed before this tool can be run.
License:
The interactive object detection tool requires either an ArcGIS Pro Advanced license or the ArcGIS Image Analyst extension.
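If you want to confirm the prerequisites before opening the tool, a quick check in the ArcGIS Pro Python window might look like the following sketch. The Image Analyst check applies only if you are not licensed at the Advanced level, and the torch import assumes the Deep Learning Libraries installer added PyTorch to the active Python environment.

```python
import arcpy

# "Available" means the Image Analyst extension can be used on this machine
# (not needed if you are licensed at the Advanced level).
print(arcpy.CheckExtension("ImageAnalyst"))

# The Deep Learning Libraries installer adds deep learning frameworks such as
# PyTorch to the ArcGIS Pro Python environment; an ImportError here suggests
# the libraries are not installed.
import torch
print(torch.__version__)
```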
The Esri Windows and Doors model relies on camera position for setting the view to detect objects. The two creation methods are as follows:
- Current Camera—Detect windows and doors using the current camera position. Ideally, you have navigated and positioned the camera perspective as precisely as you can.
- Reposition Camera (3D only)—Detect windows and doors in the current scene by adjusting the camera horizontally or vertically toward a clicked viewpoint. Ideally, you have navigated and set up the view as well as you can, and the camera repositions for the final refinement.
The Esri Generic Object model relies on clicking a location in the view to detect objects. Its creation method is Interactive Detection, which detects an object at the clicked location in the map or scene. Ideally, you have navigated and set up the view as well as possible so that the objects are displayed clearly.
Detect using the current camera position
Current Camera is the default creation method for the Esri Windows and Doors model. It detects objects using the current camera position, based on the additional parameters defined in the Exploratory Analysis pane.
Use the following steps to detect objects using the current camera position:
- Navigate using the Explore tool to set the perspective of the scene on the region of interest, where objects are to be detected.
- In the Exploratory Analysis pane, on the Create tab, select Current Camera from the Creation Method options.
- Review or optionally update the Deep Learning Model and Output parameters.
- Click Apply and allow the object detection tool to perform detection and display the results.
The Current Camera method remains active so you can continue detecting objects. You can navigate to a different area and detect objects again. Because the model does not need to be reloaded, the results are returned faster. If you switch to a different deep learning package (.dlpk) model, it is reloaded.
Reposition the camera
You can also detect windows and doors in the current scene by setting a viewpoint and repositioning the camera to look at that viewpoint. Window and door objects are detected based on the parameters defined in the Exploratory Analysis pane.
This method allows you to set the camera view direction before performing the object detection. For example, set a horizontal view direction if you click a building facade where you want to detect windows. A vertical view direction is useful for a top-down camera angle, such as detecting swimming pools. The camera automatically adjusts.
Tip:
This method is not intended to move the view closer so that faraway objects of interest are more easily detectable. You should still manually navigate close to the object of interest; the camera then orients horizontally or vertically on the clicked target to detect objects.
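For context, the two view directions correspond roughly to the pitch of the scene camera. The following sketch uses the arcpy.mp Camera properties to illustrate the idea; the pitch values are illustrative only, and the Object Detection tool adjusts the camera for you when you click.

```python
import arcpy

# Illustrative only: the Object Detection tool repositions the camera itself.
aprx = arcpy.mp.ArcGISProject("CURRENT")
view = aprx.activeView          # assumes the active view is a scene
camera = view.camera

camera.pitch = 0                # roughly horizontal, as when facing a building facade
# camera.pitch = -90            # roughly straight down, as when detecting swimming pools
view.camera = camera
```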
Use the following steps to detect objects by repositioning the camera to look at the viewpoint:
- In the Exploratory Analysis pane, on the Create tab, select Reposition Camera from the Creation Method options.
- Optionally, change View Direction to either Horizontal or Vertical.
- Review or optionally update the Deep Learning Model and Output parameters.
- Click in the scene.
The camera moves so that it looks horizontally or vertically at the clicked point. Make sure you are reasonably close to the object of interest rather than relying on the camera to bring a faraway object up close.
The object detection runs and the detected objects are added to the output feature layer.
The Reposition Camera method remains active to continue detecting objects. Click to define another viewpoint and detect objects again.
Interactive detection of individual objects
This method is available only when the deep learning model is set to the Esri Generic Object model. It relies on individually clicking objects in a map or scene.
Use the following steps to detect objects by clicking interactively:
- In the Exploratory Analysis pane for the Object Detection tool, ensure Model is set to Esri Generic Object to enable the Interactive Detection creation method.
- Review the Output parameters for Feature Layer, Description, and Symbology.
Note:
In a map, symbology is limited to Location Point, which marks the clicked location with an X; the Symbology parameter is therefore disabled in a map because it cannot be configured to anything else. In a scene, bounding boxes are available as additional symbology options.
- Use the Explore tool to navigate and position the view on the area where you want to detect objects. If the Object Detection tool is active, press the C keyboard shortcut to temporarily access the Explore tool to navigate.
- Click once over the object. For example, if viewing a row of parked aircraft at a hangar, click once over each aircraft.
If the view is far away, the model may group a row of aircraft into a single detection. If the view is close enough, you can detect each aircraft individually by clicking it.
The detected objects are added to the output feature layer.
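Because the results are written to a feature layer, you can thin them afterward, for example by applying a definition query based on confidence. This is a sketch only; the layer name and Confidence field are assumptions that depend on your Output parameters.

```python
import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")
layer = aprx.activeMap.listLayers("Detected Objects")[0]  # hypothetical layer name

# Show only detections with a confidence of at least 0.8 (hypothetical field name).
layer.definitionQuery = "Confidence >= 0.8"
```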
The image below illustrates various object detection results returned by clicking in the view.