Hybrid AI systems are quietly solving the problems of deep learning

Deep learning, the main innovation that has renewed interest in artificial intelligence in the past years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from the peak of the hype cycle toward the trough of disillusionment, it is becoming clear that it is missing some fundamental components.

This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged at various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three “godfathers of deep learning,” have all spoken about the limits of neural networks.

The question is, what is the path forward?

At NeurIPS 2019, Bengio discussed System 2 deep learning, a new generation of neural networks that can handle compositionality, out-of-distribution generalization, and causal structures. At the AAAI 2020 Conference, Hinton discussed the shortcomings of convolutional neural networks (CNN) and the need to move toward capsule networks.

But for cognitive scientist Gary Marcus, the solution lies in developing hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning. In a paper titled “The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence,” Marcus discusses how hybrid artificial intelligence can solve some of the fundamental problems deep learning faces today.
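
To make the idea concrete, here is a minimal sketch of what such a hybrid pipeline can look like: a neural module handles perception and emits discrete symbols, and a symbolic layer applies explicit rules to those symbols. This is my own illustration of the general pattern, not code from Marcus’s paper; the `perceive` stub is a hypothetical stand-in for a trained network.

```python
# Minimal neuro-symbolic sketch (illustrative only; not from Marcus's paper).
# A neural module turns raw input into symbols; a symbolic module
# applies explicit, human-readable rules to those symbols.

def perceive(image):
    """Stand-in for a trained neural classifier.

    In a real system this would be a CNN mapping pixels to a symbol
    such as "chair" plus a confidence score. Here we fake its output.
    """
    return {"label": "chair", "confidence": 0.92}  # hypothetical output

# Symbolic knowledge base: rules the network itself does not encode.
RULES = {
    "chair": {"is_a": "piece of furniture", "affords": "sitting"},
    "table": {"is_a": "piece of furniture", "affords": "placing objects"},
}

def reason(percept):
    """Symbolic step: chain the neural percept through the rule base."""
    if percept["confidence"] < 0.5:
        return "uncertain percept; no inference drawn"
    facts = RULES.get(percept["label"])
    if facts is None:
        return f'no knowledge about "{percept["label"]}"'
    return (f'{percept["label"]} is a {facts["is_a"]} '
            f'and affords {facts["affords"]}')

print(reason(perceive(image=None)))
# -> chair is a piece of furniture and affords sitting
```

The appeal of the split is architectural: the learned component can be retrained or swapped without touching the rule base, while the rules apply to any symbol the perception module can produce, whether or not that exact input appeared during training.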

Read: [Deep learning advances are boosting computer vision — but there’s still clear limits]

Connectionists, the proponents of pure neural network-based approaches, reject any return to symbolic AI. Hinton has compared hybrid AI to combining electric motors and internal combustion engines. Bengio has likewise dismissed the idea of hybrid artificial intelligence on several occasions.

But Marcus believes the path forward lies in putting aside old rivalries and bringing together the best of both worlds.

What’s missing in deep neural networks?

The limits of deep learning have been discussed at length elsewhere. But here, I would like to focus on the generalization of knowledge, a topic that has drawn wide attention in the past few months. While human-level AI is at least decades away, a nearer goal is robust artificial intelligence.

Here’s how Marcus defines robust AI: “Intelligence that, while not necessarily superhuman or self-improving, can be counted on to apply what it knows to a wide range of problems in a systematic and reliable way, synthesizing knowledge from a variety of sources such that it can reason flexibly and dynamically about the world, transferring what it learns in one context to another, in the way that we would expect of an ordinary adult.”

Those are key features missing from current deep learning systems. Deep neural networks can ingest large amounts of data and exploit huge computing resources to solve very narrow problems, such as detecting specific kinds of objects or playing complicated video games in specific conditions.

However, they’re very bad at generalizing their skills. “We often can’t count on them if the environment differs, sometimes even in small ways, from the environment on which they are trained,” Marcus writes.

Case in point: An AI trained on thousands of chair pictures won’t be able to recognize an upturned chair if no such picture was included in its training dataset. A super-powerful AI trained on tens of thousands of hours of StarCraft 2 gameplay can play at a championship level, but only under limited conditions. As soon as you change the map or the units in the game, its performance takes a nosedive. And it can’t play any game that is similar to StarCraft 2, such as Warcraft or Command & Conquer.
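
This failure mode is easy to reproduce in miniature. The toy script below (my own illustration using scikit-learn, not any of the systems mentioned above) trains a simple classifier on one input distribution and evaluates it on a shifted copy of the same task; accuracy collapses to roughly chance even though the underlying classes have not changed.

```python
# Toy demonstration of distribution shift (illustrative only).
# A classifier trained on one input distribution degrades sharply when
# the test distribution is shifted, mirroring the upturned-chair and
# modified-map failures described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` translates the whole input space."""
    X0 = rng.normal(loc=-1.0 + shift, scale=0.5, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + shift, scale=0.5, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500)             # training distribution
X_iid, y_iid = make_data(500)                 # same distribution
X_shift, y_shift = make_data(500, shift=3.0)  # shifted distribution

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy: ", clf.score(X_iid, y_iid))
print("shifted-distribution accuracy:", clf.score(X_shift, y_shift))
# Roughly 0.99 in-distribution, ~0.5 (chance) after the shift: the
# learned decision boundary is frozen in the training distribution.
```

The two classes remain perfectly separable after the shift; the model fails only because its decision boundary was fit to where the data used to be, which is exactly the brittleness Marcus describes.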
