How far are we from artificial general intelligence?

Developers and researchers are currently debating the extent to which artificial general intelligence needs to mimic the human brain. Explore the two schools of thought.

How far are we from artificial general intelligence? And if we ever see true AGI, will it operate similarly to the human brain, or could there be a better path to building intelligent machines?

Since the earliest days of artificial intelligence -- and computing more generally -- theorists have assumed that intelligent machines would think in much the same way as humans. After all, we know of no greater cognitive power than the human brain. If the goal is to create a high level of cognitive processing, it makes sense in many ways to try to replicate the brain.

However, there is a debate today over the best way of reaching true general AI. In particular, recent advances in deep learning -- which is itself inspired by the human brain, though it diverges from it in some important ways -- have shown developers that there may be other paths.

What is artificial general intelligence?

For many, AGI is the ultimate goal of artificial intelligence development. Since the dawn of AI in the 1950s, engineers have envisioned intelligent robots that can complete all kinds of tasks -- easily switching from one job to the next. AGI would be able to learn, reason, plan, understand natural human language and exhibit common sense.

In short, AGI would be a machine capable of thinking and learning in much the same way a human does. It would understand situational context and be able to apply what it learned from completing one task to other tasks.

What is the current state of AGI?

We're still a long way from realizing AGI. Today's smartest machines fail completely when asked to perform tasks they weren't trained on. Even young children can apply what they learn in one setting to new tasks in ways that the most complex AI-powered machines can't.

Researchers are working on the problem. There are a host of approaches, mainly centered on deep learning, that aim to replicate some element of intelligence. Neural networks are generally considered state of the art at learning correlations in sets of training data. Reinforcement learning is a powerful tool for teaching machines to independently figure out how to complete a task with clearly prescribed rules. Generative adversarial networks let computers take more creative approaches to problem-solving.
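
To make the first of those techniques concrete, here is a minimal Python sketch of a neural network learning correlations in labeled training data. The scikit-learn digits data set and the model settings are illustrative assumptions, not drawn from any system discussed in this article.

    # A small neural network learning pixel-label correlations in
    # scikit-learn's built-in digits data set.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)  # 8x8 grayscale digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One hidden layer is enough to pick up the correlations in this data.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.2f}")

The same network, however, knows nothing beyond the 10 digit labels it was trained on -- exactly the narrowness described next.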

But few approaches combine some or all of these techniques. As a result, today's AI applications can solve only narrow tasks, which leaves us far from artificial general intelligence.

How a more human-like approach to AGI might look

Gary Marcus, founder and CEO of Robust.ai, a Palo Alto, Calif., company building a cognitive platform for a range of bots, argues that AGI will have to work more like a human mind. Speaking at MIT Technology Review's virtual EmTech Digital conference, he said today's deep learning algorithms lack the ability to contextualize and generalize information, two of the biggest advantages of human-like thinking.

Marcus said he doesn't think machines need to replicate the human brain neuron for neuron. But some aspects of human thought, like using symbolic representations of information to extrapolate knowledge to a broader set of problems, would help machines achieve more general intelligence.

"[Deep learning] doesn't work for reasoning or language understanding, which we desperately need right now," Marcus said. "We can train a bunch of algorithms with labelled data, but what we need is deeper understanding."

Deep learning struggles to reason and generalize because its algorithms know only what they've been shown. It takes thousands or even millions of labeled photos to train an image recognition model. And even after all that, the model can't perform a different task, such as natural language understanding.
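
As a rough, hypothetical illustration of that failure mode, the sketch below trains a classifier only on the digits 0 through 4 and then shows it images of the digit 9. The model can answer only with labels it has already seen.

    # A model trained on digits 0-4 can only ever answer 0-4,
    # even when every test image is a 9.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    known = y < 5  # train only on digits 0 through 4
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X[known], y[known])

    unseen = X[y == 9]  # images the model has never been shown
    print(np.unique(model.predict(unseen)))  # only labels from 0-4 appear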

In spite of its limitations, Marcus doesn't advocate moving away from deep learning. Instead, he said, developers should look for ways to combine deep learning with classical approaches to AI, including more symbolic representations of information, like knowledge graphs. Knowledge graphs contextualize data by connecting pieces of information that are semantically related, while deep learning models can learn how people interact with that information and improve the graph over time.
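
A minimal sketch of the knowledge graph idea might look like the following; the entities and relations are invented for illustration and show only how symbolic, semantically linked facts can be stored and traversed.

    # Facts stored as (subject, relation, object) triples -- the symbolic
    # representation a knowledge graph builds on. Entities are illustrative.
    triples = [
        ("Palo Alto", "located_in", "California"),
        ("Robust.ai", "headquartered_in", "Palo Alto"),
        ("Robust.ai", "founded_by", "Gary Marcus"),
    ]

    def related(entity):
        """Return every fact that mentions the entity, in either position."""
        return [(s, r, o) for s, r, o in triples if entity in (s, o)]

    for fact in related("Robust.ai"):
        print(fact)

Because the links are explicit, a system can chain them -- Robust.ai is in Palo Alto, which is in California -- rather than relearning each connection from labeled examples.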

"We need to stop building AI for ad tech and news feeds, and start building AI that can make a real difference," Marcus said. "To get to that place you have to build systems that have deep understanding, not just deep learning."

The case for deep learning

However, not everyone agrees. Speaking at the same conference, Danny Lange, vice president of AI and machine learning at Unity Technologies, a video game software development company, said efforts to replicate human-like thinking could unintentionally limit what machines are capable of learning. Deep learning models operate in fundamentally different ways from the human brain and, given enough data and compute power, there's no telling how far they could go. While we are still far from artificial general intelligence, deep learning could potentially get us there.

"What I appreciate about deep learning is that if you feed it enough data, it is able to learn abstractions that we as humans are not able to interpret," Lange said.

One area of deep learning in particular, reinforcement learning, could be a promising path toward more general intelligence. Lange said these algorithms operate somewhat more like natural thought when learning new tasks, and experiments in synthetic environments have shown some ability to generalize what's learned on one task to another.
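
As a toy illustration of reinforcement learning on a task with clearly prescribed rules, here is a tabular Q-learning sketch; the corridor environment and hyperparameters are invented for illustration, not taken from Lange's work.

    # Tabular Q-learning in a six-state corridor: the prescribed rule
    # is that reaching the rightmost state earns a reward of 1.
    import random

    N = 6  # states 0..5; state 5 is the goal
    q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
    alpha, gamma = 0.5, 0.9  # learning rate and discount factor

    for _ in range(500):  # episodes
        s = 0
        while s != N - 1:
            a = random.choice((-1, 1))      # explore with random moves
            s2 = min(max(s + a, 0), N - 1)  # walls at both ends
            r = 1.0 if s2 == N - 1 else 0.0
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2

    # The learned policy points right (+1) from every non-goal state.
    print([max((-1, 1), key=lambda a: q[(s, a)]) for s in range(N - 1)])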

He also believes that developers could speed up the training of deep learning models, which is currently one of the biggest hurdles. He'd like to see efforts to optimize the data sets that models train on, so that algorithms don't need to see millions of examples of something to learn what it is. This idea is still developing, but Lange thinks it could take deep learning farther than it can currently go.
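
One hedged reading of that idea is data curation: training on a small, diverse subset rather than on every labeled example. The sketch below uses k-means cluster centers to pick 100 representative digits; it illustrates the concept only and isn't a method Lange described.

    # Pick one representative example per cluster as a "curated" training
    # set, instead of training on all ~1,350 labeled samples.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    km = KMeans(n_clusters=100, n_init=10, random_state=0).fit(X_train)
    idx = [int(np.argmin(np.linalg.norm(X_train - c, axis=1)))
           for c in km.cluster_centers_]

    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train[idx], y_train[idx])
    print(f"Held-out accuracy from 100 curated examples: "
          f"{model.score(X_test, y_test):.2f}")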

"We have limited data and compute power today," he said. "But we haven't gone very far with deep learning yet."
