Azure Machine Learning services ease data science struggles
With Azure Machine Learning services such as Workbench, Experimentation and Model Management, Microsoft hopes to address common data science challenges in the enterprise.
When unveiled in 2015, Azure Machine Learning services eliminated the need for data scientists to install and operate servers for data storage, extraction and model execution, but they only provided the basics. It still took users a fair amount of time to build and support a complete data science workflow.
To combat these issues, in 2017, Microsoft introduced three new machine learning services on Azure: Workbench, Experimentation and Model Management. These services address four main challenges that data scientists face:
- the growing effort required to acquire, prepare and understand data as the rate of machine learning experimentation increases;
- difficulties with scaling machine learning applications;
- the proliferation of learning algorithm models, each of which requires different data sets and algorithm designs; and
- the need to use popular, open languages and development tools, along with a desire to incorporate new machine learning development frameworks.
Here's a closer look at the three new Azure services mentioned above and how they aim to solve these issues.
Workbench
At a high level, a data science workflow consists of model development, experimentation and tuning, which then culminate in model deployment and management. Azure Machine Learning Workbench addresses the first step in that process.
Unlike the existing browser-based Azure Machine Learning development interface, Workbench is a packaged client application for Windows or macOS. It provides a data science integrated development environment that facilitates data ingestion and preparation, model development and testing, and deployment to various runtime environments.
Workbench's features include:
- an interactive data preparation tool that can build transformation logic by example (see the sketch after this list);
- a Python software development kit to invoke these data preparation packages;
- the Jupyter Notebook service and client interface;
- a model history interface to monitor and manage training runs;
- integration with Active Directory for role-based access control to facilitate secure collaboration; and
- automatic project snapshots for each run with version control through native Git integration.
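The data preparation tool ultimately captures ordinary transformation logic that a project's scripts can reuse. As a rough, generic illustration only -- plain pandas rather than the Workbench-generated package or its Python SDK, with a hypothetical file and column names -- the kind of logic it captures looks like this:

```python
# Generic pandas sketch of the kind of transformation logic Workbench's
# by-example data preparation tool builds. The actual tool emits a data
# preparation package invoked through the Python SDK, which is not shown here.
import pandas as pd

# Hypothetical raw input file and column names, used only for illustration.
raw = pd.read_csv("sensor_readings.csv")

# Typical preparation steps: type coercion, null handling, derived columns.
raw["timestamp"] = pd.to_datetime(raw["timestamp"], errors="coerce")
raw = raw.dropna(subset=["timestamp", "reading"])
raw["reading_zscore"] = (raw["reading"] - raw["reading"].mean()) / raw["reading"].std()

# Persist the prepared data set for a subsequent training run.
raw.to_csv("sensor_readings_prepared.csv", index=False)
```

In a real project, the equivalent steps would live in the data preparation package that the Python SDK invokes, so the same cleanup can be reused across training runs.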
Much like Visual Studio, Workbench uses a project metaphor to manage development: a project is a logical container for model code, raw and processed data, model metrics and run history. With all of those resources in one package, users can replicate a project to a Git repo and access it from any other system on which Workbench is installed.
Experimentation
Experimentation is the control plane for machine learning model training runs; it facilitates execution on a local computer, in a local Docker container, on a remote compute instance or container, or on an Apache Spark cluster. The service works with Workbench projects and supports features such as Git integration, access controls, project roaming and sharing.
Because each project's files are isolated, Experimentation provides an environment for executing large numbers of training runs that vary by run configuration, model parameters and execution environment, such as different types of VM instances. Users can automate experiments via a Python or PySpark script.
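The scripts behind those runs are ordinary Python (or PySpark) programs whose parameters vary from run to run. Below is a minimal, generic sketch of such a parameterized training script, using scikit-learn and a hypothetical training_data.csv with numeric feature columns and a label column; the Workbench-specific run configuration and submission step are omitted.

```python
# Minimal sketch of a parameterized training script of the kind submitted as
# an Experimentation run; scikit-learn and the input file are illustrative.
import argparse

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

parser = argparse.ArgumentParser()
parser.add_argument("--reg", type=float, default=1.0, help="inverse regularization strength")
args = parser.parse_args()

# Hypothetical prepared data set: numeric feature columns plus a 'label' column.
data = pd.read_csv("training_data.csv")
X = data.drop(columns=["label"])
y = data["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(C=args.reg, max_iter=1000).fit(X_train, y_train)

# Print the metric; in Workbench this value would also appear in run history.
print(f"C={args.reg} accuracy={model.score(X_test, y_test):.3f}")
```

Submitting the script once with --reg 0.1 and again with --reg 10.0 would produce two separate entries in the run history, which is what makes parameter sweeps easy to compare.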
Model Management
While Experimentation organizes and automates the execution of machine learning models, Model Management registers and tracks the various training runs and manages the results with model versions and forks. Together, these two Azure Machine Learning services provide a version history of models from initial development and training runs through production deployments.
Containers in Azure Machine Learning
Azure Machine Learning uses Docker containers to encapsulate and host models, an approach that ensures portability and reproducibility across different runtime environments. As with other containerized applications, developers can register model containers in Azure Container Registry and deploy them using an automated toolchain.
Encapsulating models in containers also gives you more control over the deployment environment. For example, you could run a subset of compute-intensive workloads on a dedicated cluster and run other models with variable demand on a cluster that scales up and down according to a set schedule.
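What goes inside such a container is typically the serialized model plus a small scoring script that loads it and answers requests. The sketch below follows the init()/run() pattern used in Azure Machine Learning operationalization samples of this era, but treat the exact contract, model file name and input schema as assumptions.

```python
# Sketch of a scoring script of the kind packaged into a model container.
# The init()/run() structure, model file name and JSON input schema are
# assumptions based on common operationalization samples, not a fixed API.
import json

import joblib
import pandas as pd

model = None


def init():
    # Load the serialized model once when the container starts.
    global model
    model = joblib.load("model.pkl")


def run(raw_data):
    # Expect a JSON payload such as {"data": [[...feature values...], ...]}.
    payload = json.loads(raw_data)
    frame = pd.DataFrame(payload["data"])
    predictions = model.predict(frame)
    return json.dumps({"predictions": predictions.tolist()})
```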
Aside from model version control, the Model Management service:
- tracks models as they run in production;
- deploys models to production VMs or containers using Azure Container Service and Kubernetes;
- automatically triggers retraining runs with new data sets; and
- captures model metrics and results, and streams logs to Azure Application Insights (see the telemetry sketch below).
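That Application Insights channel is also open to your own code. As a sketch, the applicationinsights Python package can emit custom metrics and traces alongside what Model Management captures automatically; the instrumentation key and metric names below are placeholders.

```python
# Sketch of emitting custom telemetry to Azure Application Insights from
# scoring code, in addition to what Model Management captures automatically.
# The instrumentation key and metric names are placeholders.
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation-key>")

# Record a custom metric and a trace line for a single scoring request.
tc.track_metric("scoring_latency_ms", 42.0)
tc.track_trace("scored one batch of 16 rows")

# Telemetry is buffered; flush before the process exits.
tc.flush()
```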
Recommendations
Any developer who uses, or even just plays around with, Azure Machine Learning services should investigate these three tools. However, Experimentation and Model Management appeal most to those with substantial experience in machine learning model development and a portfolio of machine learning applications they wish to deploy.
Workbench is a welcome addition for data scientists, but those with more experience, or who work in larger development teams in which machine learning is just one element of an application, should also investigate Visual Studio Code Tools for AI. This extension integrates with Azure Machine Learning services and augments Visual Studio Code with features to build, test and deploy machine learning and deep learning applications.