
Intel debuts new nGraph compiler for neural network models

Released as open source software, Intel's new nGraph DNN compiler takes aim at Nvidia's advantage by making it easier to run neural networks on different kinds of hardware.

CIOs have typically turned to Nvidia-based GPUs for testing and deploying AI projects built on deep neural network models. Deep neural networks, or DNNs, are artificial intelligence systems with multiple hidden layers between the input and output layers, used to model complex nonlinear relationships. Many cite Nvidia's mature tooling, hardware lead and cuDNN library in their decisions to adopt the company's platform for their AI workloads.
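
For readers who want to see that structure concretely, the sketch below defines a minimal network of this kind in PyTorch, one of the frameworks discussed later in this article. The layer widths are arbitrary and chosen purely for illustration.

```python
import torch
import torch.nn as nn

# A minimal deep neural network: two hidden layers sit between the
# input and output layers, and the nonlinear activations let the
# model fit complex nonlinear relationships.
model = nn.Sequential(
    nn.Linear(16, 64),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(64, 64),  # first hidden layer -> second hidden layer
    nn.ReLU(),
    nn.Linear(64, 1),   # second hidden layer -> output layer
)

x = torch.randn(8, 16)  # a batch of eight 16-feature examples
print(model(x).shape)   # torch.Size([8, 1])
```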

Rival Intel hopes to improve its position in this nascent field with the recent release of the nGraph DNN compiler, described by the chipmaker as framework-neutral software that takes models from popular machine learning frameworks such as TensorFlow, MXNet, PyTorch and CNTK and optimizes them for different types of AI hardware, including CPUs.

Enabling neural network models to run on different frameworks is problematic, the company said, because developers "must separate the essence of the model from the performance adjustments" made for the particular hardware being used. Scott Cyphers, principal engineer for the artificial intelligence products group at Intel, said the company designed the nGraph library to reduce this kind of engineering complexity.
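
The article does not show nGraph's API, but the layering it describes can be sketched roughly: a front end translates each framework's graph into a shared, hardware-agnostic representation, and a back end applies hardware-specific optimizations. Every name in the sketch below is invented for illustration and is not nGraph's real interface.

```python
# Hypothetical sketch of a framework-neutral compiler's layering;
# class and function names are invented, not nGraph's actual API.

class IRGraph:
    """Hardware-agnostic intermediate representation of a model."""
    def __init__(self, ops):
        self.ops = ops  # e.g., ["matmul", "add", "relu"]

def import_from_framework(framework_model):
    # Front end: translate a TensorFlow/MXNet/PyTorch/CNTK graph
    # into the shared IR, capturing the "essence of the model".
    # (Stubbed here for illustration.)
    return IRGraph(ops=["matmul", "add", "relu"])

def compile_for(graph, backend):
    # Back end: apply backend-specific performance adjustments
    # (fusion, memory layout, vectorization) and emit an executable
    # for the chosen target.
    assert backend in {"cpu", "gpu", "fpga"}
    return lambda inputs: f"run {graph.ops} on {backend}"

ir = import_from_framework(framework_model=None)
run_on_cpu = compile_for(ir, "cpu")    # same model, different targets,
run_on_fpga = compile_for(ir, "fpga")  # with no changes to the model
```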

Analysts contend this capability may do little to change the status quo in the short term for how deep neural network models are implemented, given Nvidia's strong lead in this area of AI. But in the long run, Intel's new open source compiler may make it easier to implement artificial intelligence products from other vendors on top of the company's neural network processors, field-programmable gate arrays and Movidius silicon.

Promise of embedded AI


"I think it is good to see Intel open-sourcing the model compiler. It will accelerate the innovation in this field," said Mahesh Kashyap, chief digital officer at Deep Cognition, a deep learning platform provider in Irving, Texas. Deep Cognition mostly uses the built-in compilers that come with the various machine learning frameworks.


Kashyap said nGraph is novel among the machine learning compilers he currently uses because of the interoperability problems it addresses. "NGraph fits nicely underneath the deep learning frameworks, so it can be mostly invisible to the user. That means that it can be easier to integrate to existing machine learning processes," he said.

Still, Kashyap said he expects Nvidia to continue dominating the high-performance computing infrastructure used for training deep neural network models, even as the nGraph compiler shows promise in making it easier to compile and distribute models in production, particularly for embedded systems.

Intel has reported significant performance improvements for neural network models compiled with the latest version of nGraph on Intel-based chips. But those gains do not represent a significant improvement over alternative chip architectures, according to Kashyap.

Doubts abound


Indeed, others see Intel's latest move as relatively insignificant in the greater scheme of making deep neural network models work.

Yaron Haviv, founder and CTO of iguazio, an AI data management platform provider, said, "I suggest Intel focus its energy on producing a real silicon response, rather than countering Nvidia's dominance in silicon with minor software enhancements." He said he does not expect developers to use Intel's compilers on Nvidia hardware, since Nvidia's own compilers will outperform nGraph and extract the most application performance from the vendor's silicon.

Intel may be trying to prepare for a future abstraction layer that makes its Nervana AI chips easier to adopt by discouraging developers from writing directly to Nvidia APIs, thereby reducing the cost of switching to advanced Intel silicon later. "But I don't foresee many rushing to adopt it," Haviv said.

Potential for greater CPU use


However, deep neural network models sometimes depend on offloading part of the computation to the CPU to reduce the search space the GPUs must analyze. Approaches like these may benefit from a machine learning compiler that works across GPUs, CPUs and other types of silicon.
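
A schematic example of that division of labor, using PyTorch and an invented filtering heuristic, might look like the following: a cheap CPU-side pass narrows the candidate set before the expensive scoring runs on the GPU.

```python
import torch

# Illustrative only: a cheap CPU-side filter narrows the candidate
# set before the heavy part of the computation runs on the GPU.
candidates = torch.randn(100_000, 32)  # stays on the CPU

# Inexpensive CPU heuristic (invented here): keep only high-norm
# candidates, shrinking the search space the GPU must analyze.
keep = candidates.norm(dim=1) > 5.0
shortlist = candidates[keep]

# Ship only the shortlist to the GPU for the expensive scoring pass.
if torch.cuda.is_available():
    shortlist = shortlist.to("cuda")
weights = torch.randn(32, 1, device=shortlist.device)
scores = shortlist @ weights  # runs on the GPU when one is available
print(shortlist.shape, scores.shape)
```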

Peter Wang, CTO of Anaconda, which makes machine learning software, said, "Intel is recognizing that deep neural networks and deep learning are primarily done on GPUs. Intel wants to make sure more computer hours are done on CPUs."

Although nGraph is not likely to close the gap with GPUs for the computationally intensive aspects of deep learning, it may open up more research on offerings that use CPUs for parts of a problem and GPUs for others. "I would not be surprised if there was quite a bit of churn and turnover in AI tools and hardware in the next five years," Wang said.
