Label | Explanation | Data Type |
Input Point Cloud | The point cloud that will be used to create the training data for object detection. | LAS Dataset Layer; File |
Input Training Features | The multipatch features that will identify the objects that will be used for training the model. | Feature Layer |
Input Validation Features | The multipatch features that will identify the objects that will be used for validating the model during the training process. | Feature Layer |
Block Size | The diameter of each block of training data that will be created from the input point cloud. As a general rule, the block size should be large enough to capture the objects of interest and their surrounding context. | Linear Unit |
Output Training Data | The location and name of the output training data (a *.pcotd file). | File |
Training Boundary Features (Optional) | The polygon features that will delineate the subset of points from the input point cloud that will be used for training the model. This parameter is required when the Validation Point Cloud parameter value is not provided. | Feature Layer |
Training Code Field (Optional) | The field that identifies the unique ID for each type of object in the training multipatch features. When no field is defined, the objects are assigned an ID of 0. | Field |
Validation Point Cloud (Optional) | The point cloud that will be used to validate the deep learning model during the training process. This dataset must reference a different set of points than the input point cloud to ensure the quality of the trained model. If a validation point cloud is not provided, the input point cloud can be used to define the training and validation datasets by providing polygon feature classes for the Training Boundary Features and Validation Boundary Features parameters. | LAS Dataset Layer; File |
Validation Boundary Features (Optional) | The polygon features that will delineate the subset of points to be used for validating the model during the training process. If a validation point cloud is not provided, the points will be sourced from the input point cloud and a polygon will be required for the Training Boundary Features parameter. | Feature Layer |
Validation Code Field (Optional) | The field that identifies the unique ID for each type of object in the validation multipatch features. When no field is defined, the objects are assigned an ID of 0. | Field |
Block Point Limit (Optional) | The maximum number of points that can be stored in each block of the training data. When a block contains points in excess of this value, multiple blocks will be created for the same location to ensure that all of the points are used when training. The default is 500,000. | Long |
Reference Height Surface (Optional) | The raster surface that will be used to provide relative height values for each point in the point cloud data. Points that do not overlap with the raster will be omitted from the analysis. | Raster Layer |
Excluded Class Codes (Optional) | The class codes that will be excluded from the training data. Any value in the range of 0 to 255 can be specified. | Long |
Only export training blocks that contain objects (Optional) | Specifies whether the training data will only include blocks that contain objects or if blocks that do not contain objects will also be included. The data used for validation will not be affected by this parameter. | Boolean |
Summary
Creates point cloud training data for object detection models using deep learning.
Usage
The point cloud object detection training data is defined by a directory with a .pcotd extension that contains two subdirectories: one containing data used for training the model and the other containing data used for validating the model throughout the training process. An input point cloud must always be specified along with separate multipatch features representing the bounding boxes of objects for training and validation. A boundary polygon can be provided to limit the data that is exported for training. The validation data can be defined by the following:
- Provide a validation point cloud in addition to the input point cloud. This dataset must reference a different set of points than the input point cloud. A boundary can also be specified to clip the validation point cloud.
- Provide only an input point cloud with a training and validation boundary. This will result in both the training and validation data being sourced from the same input point cloud, so there is no need to specify a dataset for the Validation Point Cloud parameter. Avoid overlaps between the two boundary polygon datasets so you don't use the same point cloud data for training and validation.
Contain each object type that is present in the point cloud within a multipatch bounding box. Unidentified objects in the training or validation data will result in the model being unable to effectively learn how to identify the object. If the point cloud contains unidentified objects, use boundary features to limit the exported training datasets to places where objects were properly contained in a bounding box.
The points representing the objects do not need to be classified to be used in the training dataset for object detection. This simplifies the labeling task: the objects only need to be enclosed in bounding boxes created as multipatch features. Bounding boxes can be generated through interactive 3D editing of a multipatch feature class. Alternatively, if the objects are represented by classified points, bounding boxes for those points can be created with the Extract Objects From Point Cloud tool.
The input point cloud should have a fairly consistent point density. Evaluate the point cloud to determine if it contains locations with a higher density of points, such as areas collected by overlapping flight line surveys or idling terrestrial scanners. For airborne lidar with overlapping flight lines, the Classify LAS Overlap tool can be used to flag the overlapping points and achieve a more consistent point distribution. Other types of point clouds with oversampled hot spots can be thinned to a regular distribution using the Thin LAS tool.
Points in the point cloud can be excluded from the training data by their class codes, which can improve training performance by reducing the number of points that must be processed. Excluded points should belong to classes that can be reliably classified and that do not provide meaningful context for the objects the model is being trained to detect. Consider filtering out points classified as overlap or noise. Ground classified points can also be filtered out if height from ground is computed during the generation of training data.
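As an illustration of this kind of filtering (a conceptual sketch, not the tool's internal implementation), the snippet below uses the standard ASPRS LAS class codes for ground, low noise, and overlap, and represents each point as a hypothetical (x, y, z, class_code) tuple:

```python
# Standard ASPRS LAS class codes (overlap is class 12 in LAS 1.1-1.3;
# LAS 1.4 also supports a dedicated overlap flag).
GROUND, LOW_NOISE, OVERLAP = 2, 7, 12

def filter_points(points, excluded_codes):
    """Drop points whose class code is in the excluded set, mirroring
    the effect of the Excluded Class Codes parameter. Each point is a
    hypothetical (x, y, z, class_code) tuple."""
    excluded = set(excluded_codes)
    return [p for p in points if p[3] not in excluded]

pts = [(0.0, 0.0, 1.2, 5),        # high vegetation, kept
       (1.0, 0.5, 0.0, GROUND),   # excluded below
       (2.0, 1.0, 9.9, LOW_NOISE)]
print(len(filter_points(pts, [GROUND, LOW_NOISE])))  # 1
```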
When possible, specify a block size that sufficiently captures the objects for which the model will be trained. While each block may not always contain the entire object, the overlapping blocks that are created in the training data will capture a sufficient amount of varied representations of the object for training a successful model.
The block point limit should reflect the block size and the average point spacing of the data. The number of points in a given block can be approximated with the LAS Point Statistics As Raster tool, using the Method parameter's Point Count option and the desired block size as the output raster's cell size. A histogram of this raster illustrates the distribution of points per block across the dataset. If the histogram shows a large number of blocks with wide variance, it may indicate irregularly sampled data containing hot spots of dense point collections. If a block contains more points than the block point limit, that block will be created multiple times to ensure all of its points are represented in the training data. For example, if the point limit is 10,000 and a given block contains 22,000 points, three blocks of 10,000 points will be created to ensure uniform sampling in each block. Also avoid a block point limit that is significantly higher than the number of points in most blocks, because some architectures up-sample the data to meet the point limit. For these reasons, use a block size and block point limit that are close to the anticipated point count of most blocks in the training data. Once the training data is created, a histogram is displayed in the tool's message window, and an image of it is stored in the folder containing the training and validation data. Review this histogram to determine whether an appropriate block size and point limit combination was specified. If the values indicate a suboptimal point limit, rerun the tool with a more appropriate value for the Block Point Limit parameter.
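The block-count arithmetic in the example above can be sketched in plain Python. The point-spacing figure is an illustrative assumption, and a square block footprint is assumed for simplicity:

```python
import math

def expected_points_per_block(block_size, point_spacing):
    """Rough point count for a square block of the given size (same
    linear units as the spacing) in a uniformly spaced point cloud."""
    return round((block_size / point_spacing) ** 2)

def blocks_created(point_count, block_point_limit):
    """Number of blocks created for one location when the point
    count exceeds the block point limit."""
    return math.ceil(point_count / block_point_limit)

# A 12-unit block over points spaced ~0.1 units apart holds roughly
# 14,400 points.
print(expected_points_per_block(12, 0.1))  # 14400
# The example from the text: 22,000 points with a 10,000-point limit
# yields three blocks.
print(blocks_created(22000, 10000))        # 3
```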
The block point limit should also account for the dedicated GPU memory capacity on the computer that will be used for training. Memory allocation during training depends on the number of points per block, the attributes that are used, and the total number of blocks that are processed simultaneously in a given batch. If a larger block size and point limit are needed to effectively train the model, reduce the batch size in the training step to ensure that more points can be processed.
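A back-of-the-envelope sketch of that trade-off is below. The per-point attribute count and the 32-bit float size are assumptions for illustration, and model weights and activations are not counted, so this is only the raw point-data term of the real memory footprint:

```python
def batch_memory_gib(block_point_limit, batch_size,
                     attributes_per_point=4, bytes_per_value=4):
    """Rough memory footprint (GiB) of the raw point data in one
    training batch: points per block x blocks per batch x values per
    point x bytes per value. Assumes 32-bit floats and x, y, z plus
    one extra attribute per point (both assumed figures)."""
    values = block_point_limit * batch_size * attributes_per_point
    return values * bytes_per_value / 1024 ** 3

# Halving the batch size halves the point-data footprint, which is
# why a larger block point limit may call for a smaller batch size.
full = batch_memory_gib(500_000, 8)
half = batch_memory_gib(500_000, 4)
print(full / half)  # 2.0
```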
Ensure that the output is written to a location with enough disk space to accommodate the training data. This tool creates partially overlapping blocks of uncompressed HDF5 files that replicate each point in four blocks. In blocks that exceed the maximum point limit, some points may be duplicated more than four times. The resulting training data can occupy at least three times more disk space than the source point cloud data.
The message window of the tool displays a quartile ratio for each type of object. This ratio is calculated by dividing the volumes of objects in the third quartile by the first quartile. This metric serves as an indicator of the size variability among the objects. A larger quartile ratio suggests greater variability in the volumes of objects, whereas a smaller ratio indicates less variability. If there is a significant variation in volume, you may need to adjust the voxel parameters in the Train Point Cloud Object Detection Model tool's Architecture Settings parameter to obtain an accurate model.
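The quartile ratio described above can be reproduced with Python's standard library; the volume values here are purely illustrative:

```python
from statistics import quantiles

def quartile_ratio(volumes):
    """Third quartile of the object volumes divided by the first
    quartile, as reported in the tool's message window. Values near 1
    indicate uniformly sized objects; larger values indicate greater
    size variability."""
    q1, _median, q3 = quantiles(volumes, n=4)
    return q3 / q1

# Uniformly sized objects give a ratio of 1.0 ...
print(quartile_ratio([2.0, 2.0, 2.0, 2.0]))       # 1.0
# ... while widely varying volumes give a larger ratio.
print(quartile_ratio([1.0, 2.0, 3.0, 4.0, 5.0]))  # 3.0
```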
Parameters
arcpy.ddd.PreparePointCloudObjectDetectionTrainingData(in_point_cloud, in_training_features, in_validation_features, block_size, out_training_data, {training_boundary}, {training_code_field}, {validation_point_cloud}, {validation_boundary}, {validation_code_field}, {block_point_limit}, {reference_height}, {excluded_class_codes}, {blocks_contain_objects})
Name | Explanation | Data Type |
in_point_cloud | The point cloud that will be used to create the training data for object detection. | LAS Dataset Layer; File |
in_training_features | The multipatch features that will identify the objects that will be used for training the model. | Feature Layer |
in_validation_features | The multipatch features that will identify the objects that will be used for validating the model during the training process. | Feature Layer |
block_size | The diameter of each block of training data that will be created from the input point cloud. As a general rule, the block size should be large enough to capture the objects of interest and their surrounding context. | Linear Unit |
out_training_data | The location and name of the output training data (a *.pcotd file). | File |
training_boundary (Optional) | The polygon features that will delineate the subset of points from the input point cloud that will be used for training the model. This parameter is required when the validation_point_cloud parameter value is not provided. | Feature Layer |
training_code_field (Optional) | The field that identifies the unique ID for each type of object in the training multipatch features. When no field is defined, the objects are assigned an ID of 0. | Field |
validation_point_cloud (Optional) | The source of the point cloud that will be used to validate the deep learning model. This dataset must reference a different set of points than the input point cloud to ensure the quality of the trained model. If a validation point cloud is not provided, the input point cloud can be used to define the training and validation datasets by providing polygon feature classes for the training_boundary and validation_boundary parameters. | LAS Dataset Layer; File |
validation_boundary (Optional) | The polygon features that will delineate the subset of points to be used for validating the model during the training process. If a validation point cloud is not provided, the points will be sourced from the input point cloud and a polygon will be required for the training_boundary parameter. | Feature Layer |
validation_code_field (Optional) | The field that identifies the unique ID for each type of object in the validation multipatch features. When no field is defined, the objects are assigned an ID of 0. | Field |
block_point_limit (Optional) | The maximum number of points that can be stored in each block of the training data. When a block contains points in excess of this value, multiple blocks will be created for the same location to ensure that all of the points are used when training. The default is 500,000. | Long |
reference_height (Optional) | The raster surface that will be used to provide relative height values for each point in the point cloud data. Points that do not overlap with the raster will be omitted from the analysis. | Raster Layer |
excluded_class_codes [excluded_class_codes,...] (Optional) | The class codes that will be excluded from the training data. Any value in the range of 0 to 255 can be specified. | Long |
blocks_contain_objects (Optional) | Specifies whether the training data will only include blocks that contain objects or if blocks that do not contain objects will also be included. The data used for validation will not be affected by this parameter. | Boolean |
Code sample
The following sample demonstrates the use of this tool in the Python window.
import arcpy
arcpy.env.workspace = r"C:\GIS_Data"
arcpy.ddd.PreparePointCloudObjectDetectionTrainingData(
    "Training.lasd", r"Objects.fgdb\Training_FCs",
    r"Objects.fgdb\Validation_FCs", "12 Meters",
    "Training_Cars.pcotd", training_code_field="Car_Type",
    validation_code_field="Car_Type", reference_height="DEM.tif",
    excluded_class_codes=[2, 7, 18])
Environments
Licensing information
- Basic: Requires 3D Analyst
- Standard: Requires 3D Analyst
- Advanced: Requires 3D Analyst