Label | Explanation | Data Type |
Input Folder or Table | The input can be either of the following: a feature class or table containing a text field and labeled named entities, or a folder containing training data in .json or .csv files. | Folder; Feature Layer; Table View; Feature Class |
Output Model | The output folder location where the trained model will be stored. | Folder |
Pretrained Model File (Optional) | A pretrained model that will be used to fine-tune the new model. The input can be an Esri model definition file (.emd) or a deep learning package file (.dlpk). A pretrained model with similar entities can be fine-tuned to fit the new model. The pretrained model must have been trained with the same model type and backbone model that will be used to train the new model. | File |
Address Entity (Optional) | An address entity that will be treated as a location. During inference, such entities will be geocoded using the specified locator, and a feature class will be produced as a result of the entity extraction process. If no locator is provided or the trained model does not extract address entities, a table containing the extracted entities will be produced instead. | String |
Max Epochs (Optional) | The maximum number of epochs for which the model will be trained. A maximum epoch value of 1 means the dataset will be passed through the neural network one time. The default value is 5. | Long |
Model Backbone (Optional) | Specifies the preconfigured neural network that will be used as the architecture for training the new model. | String |
Batch Size (Optional) | The number of training samples that will be processed at one time. The default value is 2. Increasing the batch size can improve tool performance; however, as the batch size increases, more memory is used. If an out-of-memory error occurs, use a smaller batch size. | Double |
Model Arguments (Optional) | Additional arguments that will be used for initializing the model. The supported model argument is sequence_length, which is used to set the maximum sequence length of the training data that will be considered for training the model. | Value Table |
Learning Rate (Optional) | The step size indicating how much the model weights will be adjusted during the training process. If no value is specified, an optimal learning rate will be derived automatically. | Double |
Validation Percentage (Optional) | The percentage of training samples that will be used for validating the model. The default value is 10 for transformer-based model backbones and 50 for the Mistral backbone. | Double |
Stop when model stops improving (Optional) | Specifies whether model training will stop when the model is no longer improving or continue until the Max Epochs parameter value is reached. | Boolean |
Make model backbone trainable (Optional) | Specifies whether the backbone layers in the pretrained model will be frozen, so that the weights and biases remain as originally designed. | Boolean |
Text Field | A text field in the input feature class or table that contains the text that will be used by the model as input. This parameter is required when the Input Folder or Table parameter value is a feature class or table. | Field |
Prompt (Optional) | A specific input or instruction given to a large language model (LLM) to generate an expected output. The default value is Extract named entities belonging to the specified classes within the provided text. Do not tag entities belonging to any other class. | String |
Summary
Trains a named entity recognition model to extract a predefined set of entities from raw text.
Usage
This tool requires that deep learning frameworks be installed. To set up your machine to use deep learning frameworks in ArcGIS Pro, see Install deep learning frameworks for ArcGIS.
This tool can also be used to fine-tune an existing trained model.
To run this tool using a GPU, set the Processor Type environment to GPU. If you have more than one GPU, specify the GPU ID environment instead (see the sketch after these usage notes).
The input can be a feature class or table with a text field and named entity labels, or a folder with training data in .json or .csv files.
This tool uses transformer-based backbones for training NER models and also supports in-context learning with prompts using the Mistral LLM. To install the Mistral backbone, see ArcGIS Mistral Backbone.
For information about requirements for running this tool and issues you may encounter, see Deep Learning frequently asked questions.
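The following is a minimal sketch of setting these environments from Python before running the tool; the environment names processorType and gpuId correspond to the Processor Type and GPU ID environments, and the GPU index shown is a placeholder for your own device.
# Set the processor environments before running the tool (sketch)
import arcpy
arcpy.env.processorType = "GPU"  # run the tool on the GPU instead of the CPU
arcpy.env.gpuId = 0              # index of the GPU to use when more than one is available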
Parameters
arcpy.geoai.TrainEntityRecognitionModel(in_folder, out_model, {pretrained_model_file}, {address_entity}, {max_epochs}, {model_backbone}, {batch_size}, {model_arguments}, {learning_rate}, {validation_percentage}, {stop_training}, {make_trainable}, text_field, {prompt})
Name | Explanation | Data Type |
in_folder | The input can be either of the following: a feature class or table containing a text field and labeled named entities, or a folder containing training data in .json or .csv files. | Folder; Feature Layer; Table View; Feature Class |
out_model | The output folder location where the trained model will be stored. | Folder |
pretrained_model_file (Optional) | A pretrained model that will be used to fine-tune the new model. The input can be an Esri model definition file (.emd) or a deep learning package file (.dlpk). A pretrained model with similar entities can be fine-tuned to fit the new model. The pretrained model must have been trained with the same model type and backbone model that will be used to train the new model. | File |
address_entity (Optional) | An address entity that will be treated as a location. During inference, such entities will be geocoded using the specified locator, and a feature class will be produced as a result of the entity extraction process. If no locator is provided or the trained model does not extract address entities, a table containing the extracted entities will be produced instead. | String |
max_epochs (Optional) | The maximum number of epochs for which the model will be trained. A maximum epoch value of 1 means the dataset will be passed through the neural network one time. The default value is 5. | Long |
model_backbone (Optional) | Specifies the preconfigured neural network that will be used as the architecture for training the new model. | String |
batch_size (Optional) | The number of training samples that will be processed at one time. The default value is 2. Increasing the batch size can improve tool performance; however, as the batch size increases, more memory is used. If an out of memory error occurs, use a smaller batch size. | Double |
model_arguments [model_arguments,...] (Optional) | Additional arguments that will be used for initializing the model. The supported model argument is sequence_length, which is used to set the maximum sequence length of the training data that will be considered for training the model. | Value Table |
learning_rate (Optional) | The step size indicating how much the model weights will be adjusted during the training process. If no value is specified, an optimal learning rate will be derived automatically. | Double |
validation_percentage (Optional) | The percentage of training samples that will be used for validating the model. The default value is 10 for transformer-based model backbones and 50 for the Mistral backbone. | Double |
stop_training (Optional) | Specifies whether model training will stop when the model is no longer improving or continue until the max_epochs parameter value is reached. | Boolean |
make_trainable (Optional) | Specifies whether the backbone layers in the pretrained model will be frozen, so that the weights and biases remain as originally designed. | Boolean |
text_field | A text field in the input feature class or table that contains the text that will be used by the model as input. This parameter is required when the in_folder parameter value is a feature class or table. | Field |
prompt (Optional) | A specific input or instruction given to a large language model (LLM) to generate an expected output. The default value is Extract named entities belonging to the specified classes within the provided text. Do not tag entities belonging to any other class. | String |
Code sample
The following example demonstrates how to use the TrainEntityRecognitionModel function.
# Name: TrainEntityRecognizer.py
# Description: Train an Entity Recognition model to extract useful entities such as "Address", "Date" from text.
# Import system modules
import arcpy
arcpy.env.workspace = "C:/textanalysisexamples/data"
# Set local variables
in_folder = "train_data"
out_folder = "test_bio_format"
# Run Train Entity Recognition Model
arcpy.geoai.TrainEntityRecognitionModel(in_folder, out_folder)
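The following sketch extends the example above to fine-tune an existing model from a feature class input using keyword arguments; the paths, field name, pretrained model file, and parameter values are hypothetical placeholders, not values from this documentation.
# Name: FineTuneEntityRecognizer.py
# Description: Sketch of fine-tuning an existing entity recognition model from a
#              feature class with a text field and labeled entities.
#              All paths, names, and values below are placeholders.
import arcpy

arcpy.geoai.TrainEntityRecognitionModel(
    in_folder="C:/textanalysisexamples/reports.gdb/incident_reports",      # hypothetical feature class
    out_model="C:/textanalysisexamples/models/incident_ner",               # output model folder
    pretrained_model_file="C:/textanalysisexamples/models/base_ner.dlpk",  # hypothetical pretrained model
    max_epochs=10,
    batch_size=2,
    text_field="ReportText"                                                 # hypothetical text field
)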
Environments
Licensing information
- Basic: No
- Standard: No
- Advanced: Yes