arguments [arguments,...] (Optional) | The function arguments are defined in the Python raster function class. This is where you list additional deep learning parameters and arguments for experiments and refinement, such as a confidence threshold for adjusting sensitivity. The names of the arguments are populated from reading the Python module. When you choose SSD as the model_type parameter value, the arguments parameter will be populated with the following arguments:
- grids—The number of grids the image will be divided into for processing. Setting this argument to 4 means the image will be divided into 4 x 4, or 16, grid cells. If no value is specified, the optimal grid value will be calculated based on the input imagery.
- zooms—The number of zoom levels each grid cell will be scaled up or down. Setting this argument to 1 means all the grid cells will remain at the same size or zoom level. A zoom level of 2 means all the grid cells will become twice as large (zoomed in 100 percent). Providing a list of zoom levels means all the grid cells will be scaled using all the numbers in the list. The default is 1.0.
- ratios—The list of aspect ratios to use for the anchor boxes. In object detection, an anchor box represents the ideal location, shape, and size of the object being predicted. Setting this argument to [1.0, 1.0], [1.0, 0.5] means the anchor box is a square (1:1) or a rectangle in which the horizontal side is half the size of the vertical side (1:0.5). The default is [1.0, 1.0].
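As a minimal sketch, the SSD arguments above can be expressed as name-value pairs for the tool's value-table parameter (the pair layout is an assumption for illustration; the names and defaults come from the descriptions above):

```python
# Hypothetical illustration: SSD arguments as name-value pairs for the
# tool's arguments value table. Values are passed as strings.
ssd_arguments = [
    ["grids", "4"],                            # 4 x 4 = 16 grid cells
    ["zooms", "1.0"],                          # keep grid cells at their original size
    ["ratios", "[1.0, 1.0], [1.0, 0.5]"],      # square and half-height anchor boxes
]

# A grids value of n divides the image into n * n cells:
grids = 4
print(grids * grids)  # 16
```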
When you choose any of the pixel classification models, such as PSPNET, UNET, or DEEPLAB, as the model_type parameter value, the arguments parameter will be populated with the following arguments:
- USE_UNET—Specifies whether the U-Net decoder will be used to recover data once the pyramid pooling is complete. The default is True. This argument is specific to the PSPNET model.
- PYRAMID_SIZES—The number and size of convolution layers to be applied to the different subregions. The default is [1,2,3,6]. This argument is specific to the PSPNET model.
- MIXUP—Specifies whether mixup augmentation and mixup loss will be used. The default is False.
- CLASS_BALANCING—Specifies whether the cross-entropy loss inverse will be balanced to the frequency of pixels per class. The default is False.
- FOCAL_LOSS—Specifies whether focal loss will be used. The default is False.
- IGNORE_CLASSES—Contains the list of class values on which the model will not incur loss.
When you choose RETINANET as the model_type parameter value, the arguments parameter will be populated with the following arguments:
- SCALES—The number of scale levels each cell will be scaled up or down. The default is [1, 0.8, 0.63].
- RATIOS—The aspect ratio of the anchor box. The default is [0.5,1,2].
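To see how the two RETINANET defaults interact: each anchor location gets one candidate box per (scale, ratio) combination. A small sketch (the combination rule is standard anchor-box practice, assumed here rather than stated in the tool documentation):

```python
# Hypothetical illustration: combining the documented RETINANET defaults.
scales = [1, 0.8, 0.63]  # default SCALES
ratios = [0.5, 1, 2]     # default RATIOS
anchor_shapes = [(s, r) for s in scales for r in ratios]
print(len(anchor_shapes))  # 9 candidate box shapes per anchor location
```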
When you choose MULTITASK_ROADEXTRACTOR or ConnectNet as the model_type parameter value, the arguments parameter will be populated with the following arguments:
- gaussian_thresh—Sets the Gaussian threshold, which determines the required road width. The valid range is 0.0 to 1.0. The default is 0.76.
- orient_bin_size—Sets the bin size for orientation angles. The default is 20.
- orient_theta—Sets the width of the orientation mask. The default is 8.
- mtl_model—Sets the architecture type that will be used to create the model. Valid choices are linknet and hourglass, for LinkNet-based and hourglass-based neural architectures, respectively. The default is hourglass.
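The road extraction arguments above, with their documented defaults, as value-table name-value pairs (the pair layout is an assumption for illustration):

```python
# Hypothetical illustration: MULTITASK_ROADEXTRACTOR arguments with the
# documented defaults, passed as strings in the value table.
road_arguments = [
    ["gaussian_thresh", "0.76"],  # road-width threshold, valid range 0.0 to 1.0
    ["orient_bin_size", "20"],    # bin size for orientation angles
    ["orient_theta", "8"],        # width of the orientation mask
    ["mtl_model", "hourglass"],   # or "linknet"
]
# Sanity check mirroring the documented valid range for gaussian_thresh:
assert 0.0 <= float(road_arguments[0][1]) <= 1.0
```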
When you choose IMAGECAPTIONER as the model_type parameter value, the arguments parameter will be populated with the following arguments:
- decode_params—A dictionary that controls how the Image Captioner will run. The default value is {'embed_size':100, 'hidden_size':100, 'attention_size':100, 'teacher_forcing':1, 'dropout':0.1, 'pretrained_emb':False}.
- chip_size—Sets the size of the image used to train the model. Images are cropped to the specified chip size. If the image size is less than the chip size, the full image size is used. The default size is 224 pixels.
The decode_params argument consists of the following six parameters:
- embed_size—Sets the size of the embedding layer in the neural network. The default is 100.
- hidden_size—Sets the size of the hidden layer in the neural network. The default is 100.
- attention_size—Sets the size of the intermediate attention layer in the neural network. The default is 100.
- teacher_forcing—Sets the probability of teacher forcing. Teacher forcing is a strategy for training recurrent neural networks that uses ground truth from a prior time step as input, instead of the model's previous output. The valid range is 0.0 to 1.0. The default is 1.
- dropout—Sets the dropout probability. The valid range is 0.0 to 1.0. The default is 0.1.
- pretrained_emb—Sets the pretrained embedding flag. If True, the model will use fastText embeddings; if False, it will not use pretrained text embeddings. The default is False.
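The decode_params default above can be built as an ordinary Python dictionary before being passed, as a string, in the tool's value table (the string-conversion step is an assumption for illustration):

```python
# Sketch of the documented decode_params default for IMAGECAPTIONER.
decode_params = {
    "embed_size": 100,       # embedding layer size
    "hidden_size": 100,      # hidden layer size
    "attention_size": 100,   # intermediate attention layer size
    "teacher_forcing": 1,    # probability of teacher forcing, 0.0 to 1.0
    "dropout": 0.1,          # dropout probability, 0.0 to 1.0
    "pretrained_emb": False, # use fastText embeddings when True
}
captioner_arguments = [
    ["decode_params", str(decode_params)],
    ["chip_size", "224"],    # default chip size in pixels
]
```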
When you choose CHANGEDETECTOR as the model_type parameter value, the arguments parameter will be populated with the following argument:
- attention_type—Specifies the module type. The module choices are PAM (Pyramid Attention Module) and BAM (Basic Attention Module). The default is PAM.
All model types support the chip_size argument, which is the chip size of the tiles in the training samples. The image chip size is extracted from the .emd file in the folder specified in the in_folder parameter. | Value Table |
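A minimal sketch of reading the chip size from an .emd file, as the tool does from the training folder. The .emd file is JSON; the "ImageHeight" field name used here is an assumption for illustration:

```python
import json

# Hypothetical .emd (Esri model definition) fragment; real files contain
# additional fields describing the model and class values.
emd_text = '{"ModelType": "ObjectDetection", "ImageHeight": 256, "ImageWidth": 256}'
emd = json.loads(emd_text)
chip_size = emd["ImageHeight"]  # assumed field name for the tile size
print(chip_size)  # 256
```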
backbone_model (Optional) | Specifies the preconfigured neural network that will be used as the architecture for training the new model. This method is known as transfer learning.
- DENSENET121—The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 121 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.
- DENSENET161—The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 161 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.
- DENSENET169—The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 169 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.
- DENSENET201—The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 201 layers deep. Unlike RESNET, which combines the layers using summation, DenseNet combines the layers using concatenation.
- MOBILENET_V2—This preconfigured model is trained on the ImageNet dataset, is 54 layers deep, and is geared toward edge-device computing, since it uses less memory.
- RESNET18—The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 18 layers deep.
- RESNET34—The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 34 layers deep. This is the default.
- RESNET50—The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 50 layers deep.
- RESNET101—The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 101 layers deep.
- RESNET152—The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 152 layers deep.
- VGG11—The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 11 layers deep.
- VGG11_BN—This preconfigured model is based on the VGG network but with batch normalization, which means each layer in the network is normalized. It is trained on the ImageNet dataset and has 11 layers.
- VGG13—The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 13 layers deep.
- VGG13_BN—This preconfigured model is based on the VGG network but with batch normalization, which means each layer in the network is normalized. It is trained on the ImageNet dataset and has 13 layers.
- VGG16—The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 16 layers deep.
- VGG16_BN—This preconfigured model is based on the VGG network but with batch normalization, which means each layer in the network is normalized. It is trained on the ImageNet dataset and has 16 layers.
- VGG19—The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 19 layers deep.
- VGG19_BN—This preconfigured model is based on the VGG network but with batch normalization, which means each layer in the network is normalized. It is trained on the ImageNet dataset and has 19 layers.
- DARKNET53—The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images and is 53 layers deep.
| String |
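The backbone choices above can be summarized as a lookup from keyword to network depth, which is handy when scripting model comparisons (the dictionary itself is an illustration built from the descriptions above, not part of the tool's API):

```python
# Hypothetical summary of the documented backbone_model keywords and
# their depths in layers. Batch-normalized VGG variants share the depth
# of their base networks.
backbone_depths = {
    "DENSENET121": 121, "DENSENET161": 161, "DENSENET169": 169, "DENSENET201": 201,
    "MOBILENET_V2": 54,
    "RESNET18": 18, "RESNET34": 34, "RESNET50": 50, "RESNET101": 101, "RESNET152": 152,
    "VGG11": 11, "VGG13": 13, "VGG16": 16, "VGG19": 19,
    "DARKNET53": 53,
}
DEFAULT_BACKBONE = "RESNET34"  # the tool's documented default
print(backbone_depths[DEFAULT_BACKBONE])  # 34
```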