Google Cloud Explainable AI aims to clear up outputs
A new Google product suite enables users to create and deploy more interpretable machine learning models with output visualizations, feature attribution and evaluation tools.
Google released Google Cloud Explainable AI, a new product suite that enables enterprises to deploy more interpretable machine learning models, continuing more than a year of the tech giant's efforts to provide tools for creating and deploying more responsible AI.
The Mountain View, Calif., vendor, while often the target of criticism over data privacy, has also been active in the fast-growing field of more open and explainable AI technology.
"Google has been active at the macro level around important topics like AI responsibility and governance, largely based on educating customers on key principles for AI, and focusing on macro areas like ethics, policy and strategy," said Nick McQuire, vice president at advisory firm CCS Insight.
Google Cloud Explainable AI, introduced Nov. 21, contains a set of tools and frameworks for deploying and governing interpretable machine learning models.
A key element of the package is AI Explanations, a new beta product that integrates feature attribution into Google's AI Platform Prediction, which enables users to request predictions from their cloud-hosted machine learning models.
Explaining AI
With AI Explanations, users can see how much each feature in a model contributed to a given prediction, quantifying each feature's influence on the model's output.
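For readers who want a concrete sense of what feature attribution means, the rough Python sketch below scores each feature by how much a prediction changes when that feature is replaced with a baseline value. It illustrates the general idea only; the model, values and occlusion-style method are made up for the example and are not Google's implementation.

```python
# A minimal, illustrative sketch of feature attribution -- not Google's
# implementation. Each feature is scored by how much the prediction moves
# when that feature is swapped for a baseline value (occlusion-style).
import numpy as np

def predict(x):
    # Stand-in model: a fixed linear scorer used only for illustration.
    weights = np.array([0.8, -0.5, 0.3, 1.2])
    return float(np.dot(weights, x))

def attribute(instance, baseline):
    """Return per-feature contributions relative to a baseline input."""
    full_score = predict(instance)
    attributions = []
    for i in range(len(instance)):
        perturbed = instance.copy()
        perturbed[i] = baseline[i]          # knock out one feature at a time
        attributions.append(full_score - predict(perturbed))
    return attributions

instance = np.array([1.0, 2.0, 0.5, 3.0])
baseline = np.zeros_like(instance)          # an "uninformative" reference input
for i, contribution in enumerate(attribute(instance, baseline)):
    print(f"feature {i}: contribution {contribution:+.2f}")
```

In practice, attribution output like this lets a user see at a glance which inputs pushed a prediction up or down, which is the kind of per-feature breakdown AI Explanations surfaces for cloud-hosted models.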
"From today's standpoint, these steps will improve the process without question because often the starting point is firms have very little interpretability in the models that they build, which are often black box systems," McQuire said.
"This can work in training scenarios or when the projects are confined to the data science or developer departments," he continued.
The product suite also includes Google's What-If Tool, an interactive interface that enables users to visualize the performance of their models. The visual elements let users with minimal coding experience plot their models across a variety of graphs and more easily inspect model outputs for different pieces of data.
Users of the What-If Tool can also edit, add or remove features for data points, and can compare the performance of two models against each other, as sketched below.
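The short Python sketch below is a programmatic analogue of that kind of "what-if" comparison: edit one feature of a data point and see how two models respond. The two toy models and the edited value are hypothetical, and this is not the What-If Tool's own API, which does this interactively in a notebook or dashboard.

```python
# A rough, programmatic analogue of the What-If Tool's interactive comparison:
# edit one feature of a data point and compare how two (made-up) models react.
import numpy as np

def model_a(x):
    # Toy logistic scorer, for illustration only.
    return 1 / (1 + np.exp(-(0.9 * x[0] - 0.4 * x[1] + 0.2 * x[2])))

def model_b(x):
    # A second toy model with different weights, to compare against model_a.
    return 1 / (1 + np.exp(-(0.5 * x[0] - 0.1 * x[1] + 0.6 * x[2])))

point = np.array([1.2, 0.7, -0.3])
edited = point.copy()
edited[1] = 2.0                      # "what if" this feature were larger?

for name, model in [("model A", model_a), ("model B", model_b)]:
    print(f"{name}: original={model(point):.3f}  edited={model(edited):.3f}")
```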
Google's AI Platform also includes monitoring capabilities, as well as continuous evaluation, which regularly samples a model's inputs and outputs and compares predictions against ground truth labels assigned by human reviewers, providing continuous feedback on the model's performance.
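The sketch below shows the shape of that workflow under simple assumptions: sample logged predictions, join them with labels from human reviewers, and compute an accuracy score that can be tracked over time. The prediction log, sampling rate and labels are all invented for illustration; this is not Google's service.

```python
# A minimal sketch of continuous evaluation: periodically sample a model's
# logged predictions, join them with ground truth labels assigned by human
# reviewers, and track accuracy over time. Illustration only.
import random
from collections import deque

prediction_log = deque()   # (input_id, predicted_label) pairs logged in production
human_labels = {}          # input_id -> label assigned by a human reviewer

def evaluate_sample(sample_rate=0.1):
    """Sample recent predictions that have human labels and compute accuracy."""
    sampled = [p for p in prediction_log if random.random() < sample_rate]
    scored = [(pred, human_labels[i]) for i, pred in sampled if i in human_labels]
    if not scored:
        return None
    correct = sum(1 for pred, truth in scored if pred == truth)
    return correct / len(scored)

# Example: fake some logged traffic and reviewer labels, then evaluate.
for i in range(1000):
    prediction_log.append((i, random.choice(["cat", "dog"])))
    if random.random() < 0.3:          # reviewers label roughly 30% of inputs
        human_labels[i] = random.choice(["cat", "dog"])

print("sampled accuracy:", evaluate_sample())
```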
"Google Cloud Explainable AI includes capabilities for both explaining models during model development, with What-If, and during production, with continuous evaluation. Both are critical to having confidence in AI," said Mike Gualtieri, a Forrester AI analyst.
More responsible machine learning
The Google Cloud Explainable AI suite is in line with recent efforts Google has made to develop more tools for responsible AI, McQuire said.
"Google has also been quietly releasing internal tools into open source in this area over the past year as well, such as its What If Tool, Facets and Tensor Flow Extended," McQuire said.
"Now we are seeing the firm build on these moves in effect by creating instrumentation in explainability into its AI platform, making them more enterprise-grade, if you like, which is what the market has been asking for," he continued.
Explainable AI is a hot topic as governments begin to regulate data use more closely and as AI ethics advocates and enterprise leaders demand better explanations of model outputs before acting on them.
"Almost all ML tool vendors either have a capability or have a similar capability on their roadmap," Gualtieri noted.
He added that standalone products from other vendors for monitoring and explaining machine learning models are available, notably IBM Watson OpenScale.
Over the last year, IBM has released several tools to help developers create more interpretable models and to help users better understand model outputs. A number of startups selling tools to deploy interpretable AI models or to better govern models have also emerged in the last few years.
According to McQuire, "One notable exception is AWS, who has been late to the party, but I would expect this will be a big focus at re:Invent in a few weeks' time."
AWS re:Invent 2019, the company's annual flagship conference, is Dec. 1-6 in Las Vegas.