Automated Machine Learning (AutoML)

In the past decade, machine learning has experienced explosive growth in both the range of problems it is applied to and the amount of new research produced on it. Some of the largest driving forces behind this growth are the maturity of the ML algorithms and methods themselves, the generation and proliferation of massive volumes of data for the algorithms to learn from, the abundance of cheap compute to run the algorithms, and the increasing awareness among businesses that ML algorithms can address complex data structures and problems.

Many organizations want to use ML to take advantage of their data and derive actionable new insights from it, but it has become clear that there is an imbalance between the number of potential ML applications and the number of trained, expert ML practitioners available to address them. As a result, there is increasing demand to democratize ML by creating tools that make it widely accessible throughout the organization and that can be used off the shelf by non-ML experts and domain experts.

Recently, Automated Machine Learning (AutoML) has emerged as a way to address the massive demand for ML within organizations across all experience and skill levels. AutoML aims to create a single system that automates (that is, removes the need for human input from) as much of the ML workflow as possible, including data preparation, feature engineering, model selection, hyperparameter tuning, and model evaluation. In doing so, it benefits non-experts by lowering their barrier to entry into ML, and it benefits trained ML practitioners by eliminating some of the most tedious and time-consuming steps in the ML workflow.
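
To make this concrete, the sketch below uses plain scikit-learn (not any particular AutoML product, and not the ArcGIS tooling described later) to show what automating part of the workflow looks like: a single cross-validated search handles preprocessing, tries two candidate algorithms, and tunes their hyperparameters. The dataset, estimators, and parameter grids are arbitrary illustrative choices, not recommendations.

```python
# Minimal sketch of an AutoML-style search with scikit-learn: preprocessing,
# model selection, and hyperparameter tuning are handled by one automated search.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data preparation and the model live in one pipeline, so every candidate
# configuration is prepared and evaluated in exactly the same way.
pipe = Pipeline([
    ("impute", SimpleImputer()),      # a no-op on this toy dataset, but typical
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# Candidate algorithms and hyperparameters to explore automatically.
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [100, 300], "model__max_depth": [None, 10]},
]

search = GridSearchCV(pipe, search_space, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print("Best configuration:", search.best_params_)
print("Held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```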

AutoML for the non-ML expert (GIS analysts, business analysts, and data analysts who are domain experts)

For the non-ML expert, the key advantage of using AutoML is that it eliminates some of the steps in the ML workflow that require the most technical expertise and understanding. Analysts who are domain experts can define their business problem and collect the appropriate data, then essentially let the computer learn to do the rest. They don’t need a deep understanding of data science techniques for data cleaning and feature engineering, they don’t have to know what all the different ML algorithms do, and they don’t need to spend time aimlessly exploring different algorithms and hyperparameter configurations. Instead, these analysts can focus on applying their domain expertise to the specific business problem or domain application at hand, rather than on the ML workflow itself. Additionally, they can be less dependent on trained data scientists and ML engineers within their organization, because they can build and use advanced models on their own, often without any coding experience required.

AutoML for the ML expert (Data scientist/ML engineer)

AutoML can also be hugely beneficial to ML experts, though the reasons may be less obvious. For one, ML experts do not have to spend as much time supporting the domain experts in their organization and can therefore focus on their own, more advanced ML work. When it comes to the ML experts’ own ML projects, AutoML can be a tremendous time saver and productivity booster. Many of the time-consuming, tedious steps in the ML workflow, such as data cleaning, feature engineering, model selection, and hyperparameter tuning, can be automated. The time saved by automating these repetitive, exploratory steps can be shifted to more advanced technical tasks or to tasks that require more human input (for example, collaborating with domain experts, understanding the business problem, or interpreting the ML results).

In addition to saving time, AutoML can also boost the productivity of ML practitioners because it eliminates some of the subjective choices and experimentation involved in the ML workflow. For example, an ML practitioner approaching a new project may in theory have the training and expertise to guide them on which new features to construct, which ML algorithm might be best for a particular problem, and which hyperparameters are likely to be optimal. However, while actually performing the workflow, they may overlook the construction of certain new features or fail to try all the possible combinations of hyperparameters. Additionally, the ML practitioner may bias the feature selection process or the choice of algorithm because they prefer a particular ML algorithm based on their previous work or its success in other ML applications they’ve seen. In reality, no single ML algorithm performs best on all datasets, certain ML algorithms are more sensitive than others to the selection of hyperparameters, and many business problems have varying degrees of complexity and differing requirements for interpretability from the ML algorithms used to solve them. AutoML can help reduce some of this human bias by applying many different ML algorithms to the same dataset and then determining which one performs best. AutoML also uses advanced techniques such as model ensembling that help push the accuracy of models even further.
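
As a concrete illustration of trying many algorithms on the same dataset and then ensembling them, the sketch below cross-validates a few scikit-learn classifiers and combines them with a simple soft-voting ensemble. The estimators are arbitrary examples, and voting is only one of several ensembling strategies an AutoML system might use.

```python
# Illustrative sketch: evaluate several algorithms on the same data in the same
# way, then combine them with a simple ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=2000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# No single algorithm wins on every dataset, so score each one identically.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")

# A soft-voting ensemble can nudge accuracy past the best single model.
ensemble = VotingClassifier(list(candidates.items()), voting="soft")
print("ensemble:", cross_val_score(ensemble, X, y, cv=5).mean())
```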

For the ML practitioner, AutoML can also serve as an initial starting point or benchmark in an ML project. They can use it to automatically develop a baseline model for a dataset, which can give them a set of preliminary insights into a particular problem. From there, they may decide to add or remove certain features from the input dataset, or home in on a particular ML algorithm and fine-tune its hyperparameters. In this sense, AutoML can be viewed as a means of narrowing down the set of initial choices for a trained ML practitioner, so they can focus on improving the performance of the ML model overall. This is a very common workflow in practice: ML experts develop a data-driven benchmark using AutoML, then build upon this benchmark by incorporating their expertise to tweak and refine the results. The ML tools in ArcGIS Pro, along with the MLModel class in the ArcGIS API for Python, let them build upon this strong baseline and arrive at the most suitable model.
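
The snippet below is a minimal, generic sketch of that benchmark-then-refine pattern using scikit-learn: a default-configuration model stands in for the AutoML baseline, and the practitioner then narrows the search to that one algorithm and tunes it. The dataset, estimator, and grid are arbitrary illustrative choices, and the same pattern applies when the refinement is done through the ArcGIS tools instead.

```python
# Illustrative sketch: establish a baseline automatically, then refine the
# chosen algorithm by hand. Estimator and grid are arbitrary examples.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: a default-configuration model stands in for the AutoML baseline.
baseline = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Baseline accuracy:", baseline.score(X_test, y_test))

# Step 2: home in on this algorithm and refine its hyperparameters.
grid = {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), grid, cv=5)
search.fit(X_train, y_train)
print("Refined accuracy:", search.best_estimator_.score(X_test, y_test))
```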

In the end, democratizing ML via AutoML within an organization allows domain experts to focus their attention on the business problem and obtain actionable results, allows more analysts to build better models, and can reduce the number of ML experts the organization needs to hire. It can also boost the productivity of trained ML practitioners and data scientists, allowing them to focus their expertise on the multitude of other tasks where it is needed most.

To help with these tasks, the GeoAI toolbox in ArcGIS provides tools that use AutoML to train, fine-tune, and ensemble several popular machine learning models for classification and regression, given your data and available compute resources.
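
As a rough sketch of what this looks like in the ArcGIS API for Python, the snippet below prepares tabular training data and hands it to the AutoML class in arcgis.learn. The class and parameter names (prepare_tabulardata, AutoML, total_time_limit) are recalled from the arcgis.learn documentation and should be treated as assumptions, and the input layer and field names are placeholders; verify everything against the current API reference before relying on it.

```python
# Hedged sketch of AutoML-based training with arcgis.learn. Class and parameter
# names are assumptions from memory; verify against the current documentation.
from arcgis.learn import AutoML, prepare_tabulardata

# Placeholders: substitute your own feature layer and field names.
data = prepare_tabulardata(
    input_features=training_layer,            # hypothetical feature layer
    variable_predict="target_field",          # field to classify or regress on
    explanatory_variables=["field_1", "field_2", "field_3"],
)

# Let AutoML train, tune, and ensemble several algorithms within a time budget.
automl = AutoML(data=data, total_time_limit=3600)
automl.fit()
print(automl.score())  # score of the best model found
automl.report()        # summary of the models that were evaluated
```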

Training a good model takes work, but the benefits can be enormous. Good AI models can be powerful tools for automating the analysis of huge volumes of geospatial data. However, care should be taken to ensure that they are applied to relevant tasks, with the appropriate level of human oversight and with transparency about the type of model and the datasets used to train it.
