Why optimizing machine learning models is important

A look at why AI needs optimization and how it speeds up inference, helps deploy models on small devices and reduces memory footprint.

As more enterprises break into edge AI, practitioners of the fast-growing technology are trying to meet the challenges of optimizing machine learning for small devices.

At the Edge AI Summit this week, experts from Ford Motor Company, Panasonic AI Lab and XMOS explored ways that optimizing AI models can enable edge AI and TinyML, machine learning that runs on 32-bit microcontrollers and doesn't require much memory.
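
The panelists didn't detail a specific toolchain, but a common route to TinyML-sized models is full-integer quantization with TensorFlow Lite, which stores weights and activations as 8-bit integers instead of 32-bit floats and cuts model size roughly fourfold. The sketch below is illustrative only: the tiny Keras model and the random calibration data are hypothetical stand-ins for a real trained network and real input samples.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for a trained network; a real project would
# load its own model here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

def representative_data():
    # Calibration normally uses a few hundred real input samples;
    # random data here only keeps the sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enable quantization
converter.representative_dataset = representative_data     # calibration data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                    # integer-only I/O
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```

On a device, the resulting model.tflite bytes are typically compiled into the firmware as a constant array and executed by an on-device runtime such as TensorFlow Lite for Microcontrollers.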

The reasons to optimize

There are numerous reasons to optimize AI models. Chief among them is that AI needs to be strong and resilient, said Harsh Bhate, research engineer at Ford Motor Company.

Bhate said autonomous technology, for example, has many constraints including latency, memory and power footprint. Without optimization, it's not possible to attain the desired results -- getting the AI to work in real time while making use of every bit of data and compute resource.

Another reason to optimize AI models is to target more models with less hardware, thereby getting the most out of ML models, said Andrew Stanford-Jason, senior technologist at AI semiconductor vendor XMOS. Optimization reduces memory footprint and keeps compute usage low, which lets larger models fit on the same devices.
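
The speakers didn't name a technique here, but magnitude pruning is one common way to shrink a model's footprint: weights near zero are removed so the network compresses well and, combined with quantization, leaves headroom for larger architectures. A minimal sketch using the TensorFlow Model Optimization toolkit, with a hypothetical stand-in model and random data so it runs end to end:

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Hypothetical stand-in model and random data; a real project would use
# its trained network and training set.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),
])
x = np.random.rand(256, 32).astype(np.float32)
y = np.random.randint(0, 4, size=(256,))

# Wrap the layers so 80% of the weights are gradually zeroed during a
# short fine-tuning run (16 steps = 256 samples / batch 32 * 2 epochs).
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=16),
)
pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
pruned.fit(x, y, epochs=2,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Remove the pruning wrappers; the zeroed weights compress well and
# combine with quantization for a much smaller deployable model.
final = tfmot.sparsity.keras.strip_pruning(pruned)
```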

[Screenshot: panel discussion at the Edge AI Summit, where speakers said optimization will help organizations run larger models.]

Silicon and software

Many organizations looking to optimize and deploy their TinyML or edge AI systems work with AI chips.

Users that deploy the silicon must decide which objective matters more, saving money or saving time, Stanford-Jason said.

"You can get the most out of your hardware if the software is aware of what it's targeting," he said.

Stanford-Jason said there's always a trade-off between putting more capability into the silicon and putting it into the software tools.

"It comes down to availability," he continued. More input into the silicon means more power costs; more input into the tools means more time costs.

However, ultimately, the customer just wants their model to run, Stanford-Jason said. The vendor chooses the silicon, and the customer wants a standard interface onto which they can load their model.

Ford's approach to optimization

As a downstream customer of silicon providers, Ford comes to hardware vendors at the end of its development process, Bhate said.

"A lot of our effort goes into how do I make these architectures in a way that is most optimized for any edge device, and then we can build on that and leverage the hardware providers too," he said.

Ford designs its models to be optimized for the edge first, then builds on whatever the hardware vendors offer.

Bhate said Ford operates across most of the spectrum of edge AI, from microcontrollers running bare-metal code to an AI-optimized system on a chip (SOC) that runs some form of operating system.

The future of optimization

Optimized AI models will look different in the future.

Vendors will need to make sure that optimization guarantees certain model properties, and that those guarantees are strong, said Denis Gudovskiy, senior deep learning researcher at Panasonic AI Lab.
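
One concrete shape such a guarantee can take is a regression check that an optimized model's outputs stay close to the original's over representative inputs. A minimal sketch, assuming a hypothetical stand-in model and using dynamic-range quantization so both models accept the same float32 inputs:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for a trained network; a real check would use
# the production model and real validation inputs.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

# Dynamic-range quantization keeps float32 inputs and outputs, which
# makes the side-by-side comparison straightforward.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
interpreter = tf.lite.Interpreter(model_content=converter.convert())
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Compare the float model and its optimized version on the same inputs.
worst = 0.0
for _ in range(100):
    sample = np.random.rand(1, 32).astype(np.float32)
    reference = model(sample).numpy()
    interpreter.set_tensor(inp["index"], sample)
    interpreter.invoke()
    optimized = interpreter.get_tensor(out["index"])
    worst = max(worst, float(np.abs(reference - optimized).max()))

print("Worst-case output deviation after optimization:", worst)
```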

It's also possible that in the future, developers will focus more on the control flow of ML, Stanford-Jason said.

"Without this kind of robustness introduced into the model … it's going to be hard to apply such models because it's going to be very challenging for the companies to deal with the unpredictable outcomes," he said.
