Google AI division open sources GPipe neural network library
Google AI, the AI research and development team at Google, has made GPipe, a framework for building large-scale, accurate deep neural networks, open source.
Google open sourced GPipe, a scalable machine learning library designed to enable users to train large-scale deep neural networks faster, more accurately, and potentially with less compute power.
The tech vendor made the library available on GitHub on March 4, open sourced under Lingvo, a TensorFlow-based deep learning framework designed specifically for linguistic sequence models.
The move is part of a trend in which big tech vendors, including Google, AWS and Facebook, are pushing out open source AI development tools, even as they move to protect and monetize others. To some, it's a tactic for further influencing the AI field; for others, it's simply a way of making AI more accessible.
"GPipe is a big step forward toward democratizing AI," Forrester principal analyst Mike Gualtieri said.
An effort to reach Google for comment was unsuccessful.
In a related development on March 6, Google's TensorFlow team said it had open sourced the TensorFlow Privacy tool.
Meanwhile, because GPipe enables users to create more accurate deep learning models, "making this available in open source will essentially allow anyone to harness the power of distributed machine learning to achieve higher accuracy in models," Gualtieri said.
Teaching a neural network
Created by Google AI, the tech giant's AI research and development branch, GPipe essentially partitions models across multiple GPU and TPU accelerators in a way that lets the accelerators operate in parallel.
GPipe splits training examples into "mini-batches" to determine model error, and then into even smaller "micro-batches," according to a late 2018 Google AI research paper. Different accelerators can run different micro-batches at once, and gradients are "consistently accumulated across micro-batches," Google said.
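The key property behind this scheme is that accumulating gradients over micro-batches yields the same result as computing the gradient over the whole mini-batch, so splitting work across accelerators does not change the model update. The following is a minimal NumPy sketch of that idea, not GPipe's actual implementation; the function names and the use of a linear model with a summed squared-error loss are illustrative assumptions.

```python
import numpy as np

def grad_sse_linear(w, X, y):
    """Gradient of the summed squared error for a linear model X @ w.
    Summing (rather than averaging) over examples lets micro-batch
    gradients be accumulated exactly."""
    return 2.0 * X.T @ (X @ w - y)

def accumulate_over_microbatches(w, X, y, num_micro):
    """Split a mini-batch into micro-batches and accumulate gradients,
    mimicking GPipe's consistent accumulation across micro-batches.
    In GPipe, each micro-batch could run on a different accelerator."""
    grad = np.zeros_like(w)
    for X_mb, y_mb in zip(np.array_split(X, num_micro),
                          np.array_split(y, num_micro)):
        grad += grad_sse_linear(w, X_mb, y_mb)
    return grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # one mini-batch of 32 examples
y = rng.normal(size=32)
w = rng.normal(size=4)

full = grad_sse_linear(w, X, y)                            # single pass
accum = accumulate_over_microbatches(w, X, y, num_micro=8) # 8 micro-batches
assert np.allclose(full, accum)  # same gradient either way
```

Because the accumulated gradient is identical, the pipeline is free to stagger micro-batches across devices for throughput without affecting training math.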
Using GPipe can be a faster, more accurate and less memory-intensive way to train deep neural networks, which could be positive for researchers and, according to Gualtieri, good news even for citizen data scientists.
"There is also an implication for auto-ML because GPipe can be used to automate model building to make data scientists more productive and even make citizen data scientists capable of producing business-ready models," Gualtieri said.
Open source trend
Over the last several years, Google's AI researchers have made numerous projects publicly available, including data sets, code and software. One of the company's most important public contributions was TensorFlow, a software library for AI and machine learning.
Originally developed for in-house use by Google Brain, the company's deep learning team, TensorFlow was open sourced at the end of 2015. TensorFlow experience is now a necessity for those involved in machine learning. Google recently released a new version of the library that supports JavaScript, which has already proved to be popular with developers.
TensorFlow Privacy
TensorFlow Privacy enables developers to create machine learning models that keep user data anonymous.
Now available on GitHub, TensorFlow Privacy uses techniques based on the concept of differential privacy -- that AI models can use, but not memorize, private information.
It's not a new concept -- differential privacy is already used in many AI-based products and services -- but with TensorFlow Privacy, developers will be able to more easily protect the privacy of the people whose data their models are trained on, Google said.
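A common way differentially private training works in practice is to clip each example's gradient to a fixed norm and then add calibrated Gaussian noise before applying the update, which bounds how much any single person's data can influence the model. The sketch below illustrates that pattern in plain NumPy; it is a simplified illustration under assumed parameter names (`clip_norm`, `noise_multiplier`), not the TensorFlow Privacy API, and it omits the privacy accounting a real system needs.

```python
import numpy as np

def sanitize_gradients(per_example_grads, clip_norm=1.0,
                       noise_multiplier=1.1, rng=None):
    """Simplified DP-style gradient sanitization: clip each example's
    gradient to an L2 norm of clip_norm, sum the clipped gradients,
    then add Gaussian noise scaled to the clipping bound."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise proportional to clip_norm masks any one example's contribution
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=total.shape)
    return total + noise

# Two per-example gradients; the first exceeds the clipping norm
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]
g = sanitize_gradients(grads, clip_norm=1.0, noise_multiplier=0.5,
                       rng=np.random.default_rng(42))
```

The intuition is that clipping limits each individual's influence while the noise hides whether any particular example was present at all, which is what keeps the model from memorizing private information.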
Google sells a number of AI products as well, including its various Cloud AI tools and services, such as Cloud Text-to-Speech and Cloud Speech-to-Text, both of which recently saw sizeable updates.