
Google adds Firebase AI tools to make ML mobile dev easier

A new Google Firebase update simplifies machine learning app development. The update is part of a trend of making machine learning more accessible for developers to create and deploy.

Firebase, Google's mobile and web development platform, now supports new features and tools designed to make it easier for developers to create machine learning capabilities and integrate them with their applications.

At Google I/O 2019 this week, Google unveiled three new capabilities for Firebase, all in beta: the Translation API, the Object Detection and Tracking API, and AutoML Vision Edge. The Mountain View tech giant also showcased new tools for monitoring web application performance.

The new Firebase AI tools expand the platform's ML Kit, a suite of capabilities that simplifies building machine learning features. That suite already included a variety of vision and natural language processing features, as well as support for deploying custom TensorFlow Lite models.

Simplifying AI for developers

The goal of ML Kit, introduced last year, is to provide a set of ready-to-use APIs, said Francis Ma, head of product and design for Firebase at Google.

"These are common, common use cases we've heard from developers," Ma said in a phone interview. "All of our offerings are designed for that ease of use for mobile developers."

Like the rest of the Firebase ML Kit offerings, Ma said, the new Firebase AI tools can be used easily by app developers who might not otherwise have the know-how to create machine learning models, or they can be adapted and extended by those who do have a machine learning background. Both small, independent developers and enterprises use ML Kit, Ma said.


None of the new Firebase AI tools are necessarily unique -- they are easy-to-use, repackaged versions of other Google offerings, designed specifically for mobile and web development.

The new tools

The Translation API, for example, draws on the technology in Google Translate. The API works across 58 languages and can also run offline, Ma explained.
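On Android, the beta Translation API follows ML Kit's task-based pattern: build a translator for a language pair, download the on-device model once, then translate offline from that point on. A minimal Kotlin sketch, based on the ML Kit API surface at the time of the beta (package and class names may have changed in later SDK releases):

```kotlin
import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

fun translateGreeting() {
    // Configure a translator for a fixed language pair (English -> Spanish).
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.EN)
        .setTargetLanguage(FirebaseTranslateLanguage.ES)
        .build()
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // Fetch the translation model once; after that, translation runs offline.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate("Hello, world")
                .addOnSuccessListener { translated -> Log.d("MLKit", translated) }
                .addOnFailureListener { e -> Log.e("MLKit", "Translation failed", e) }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Model download failed", e) }
}
```

Because the model lives on the device after the first download, translation incurs no per-request network call, which is what makes the offline support Ma described possible.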

Meanwhile, the Object Detection and Tracking API enables developers to track the most prominent object in a live video feed in real time.

According to Ma, Ikea is already using this API to power its mobile retail application. The app features a visual search tool that enables customers to point their camera at an object on the sales floor and receive detailed information on it.
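ML Kit exposes this capability through an on-device object detector with a stream mode that keeps a stable tracking ID for the same object across camera frames. A hedged Kotlin sketch of the beta-era API (class names reflect the 2019 SDK and may differ in later versions; `frameBitmap` stands in for a camera frame supplied by the app):

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

fun detectAndTrack(frameBitmap: Bitmap) {
    // STREAM_MODE is tuned for live video: lower latency, stable tracking IDs.
    val options = FirebaseVisionObjectDetectorOptions.Builder()
        .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
        .enableClassification() // coarse category labels for each object
        .build()
    val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

    detector.processImage(FirebaseVisionImage.fromBitmap(frameBitmap))
        .addOnSuccessListener { objects ->
            for (obj in objects) {
                // trackingId stays constant for the same object across frames.
                Log.d("MLKit", "id=${obj.trackingId} box=${obj.boundingBox}")
            }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Detection failed", e) }
}
```

A visual search app along the lines Ma described would then crop each detected bounding box and feed it to a product-lookup backend.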

Google recently packaged several tools to make AI and machine learning development and integration simpler for retailers in Google Cloud for Retail.

"We see different retailers take advantage of Google Cloud in different ways," Ma said.

The AutoML Vision Edge tool enables developers to train custom vision models on their own labeled images; the resulting models then run inference on-device. Previously, AutoML Vision capabilities ran only in the cloud.
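Once trained, an AutoML Vision Edge model plugs into ML Kit as a custom on-device image labeler. A sketch against the beta-era API, assuming a model published under the hypothetical name "product_classifier" (the class names come from the 2019 SDK and may have changed since):

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

fun labelWithCustomModel(photo: Bitmap) {
    // Reference a model trained with AutoML Vision Edge and bundled locally;
    // inference then runs entirely on-device.
    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder()
        .setLocalModelName("product_classifier") // hypothetical model name
        .setConfidenceThreshold(0.6f)            // drop low-confidence labels
        .build()
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    labeler.processImage(FirebaseVisionImage.fromBitmap(photo))
        .addOnSuccessListener { labels ->
            for (label in labels) {
                Log.d("MLKit", "${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Labeling failed", e) }
}
```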

At the Google Next '19 conference in April, Google introduced several updates to its Cloud AutoML product, including AutoML Tables, a codeless model development tool.

The new Firebase AI tools follow a broader movement by Google and other large tech vendors to give developers more powerful, easier-to-use machine learning and AI development tools, even as that push seemingly contradicts the drive for more explainable AI.

However, a Google competitor offered another view of drag-and-drop interfaces -- of the ilk Google just released -- that enable developers without machine learning backgrounds to create models.

They can be "really dangerous," maintained Colin Priest, senior director of product marketing at DataRobot Inc., an AI and analytics vendor. DataRobot sells a platform that assists with creating machine learning models while still requiring some technical expertise.

"You really need to be an expert to do [drag and drop]," Priest said, adding that, otherwise, the developers themselves might not fully understand their AI models, let alone users.

Such tools may be at odds with the recent push for AI explainability, he added.

Simplifying AI everywhere

At Facebook's F8 developer conference, held April 30-May 1, the social media giant open-sourced two new tools, Ax and BoTorch, to help simplify and optimize complex machine learning processes and development. The tools sit on PyTorch, Facebook's Torch-based, open source machine learning library for Python.

That, too, received an update, which included new APIs and performance upgrades, as well as native support for TensorBoard.

"It seems like Facebook is productionizing and preparing PyTorch for AI and ML projects running at scale and in production capacities," said Dave Schubmehl, a research director at IDC.

"This is in line with other organizations that have been adding to the capabilities and potential scaling of their frameworks as well," he continued, referring to Google, Nvidia and others.

Meanwhile, at Microsoft Build 2019, held May 6-8 in Seattle, Microsoft introduced new AI and machine learning development capabilities, including a tool for creating models in Azure Machine Learning through a drag-and-drop visual interface.

Google, in addition to the new Firebase AI tools, last month unveiled a platform for more easily creating, deploying and managing machine learning models. The Google Cloud AI Platform unifies some of Google's existing AI tools and adds new ones.

"We have spoken about being an AI-first company," Ma said. "We want to be able to bring AI to not only help our own advancements, but to the developer ecosystem as well."
