An AI Pioneer Wants His Algorithms To Understand the "Why"

Deep learning is good at finding patterns in reams of data but can't explain how they're connected. Turing Award winner Yoshua Bengio wants to change that.

In March, Yoshua Bengio received a share of the Turing Award, the highest accolade in computer science, for contributions to the development of deep learning—the technique that triggered a renaissance in artificial intelligence, leading to advances in self-driving cars, real-time speech translation, and facial recognition.

Now, Bengio says deep learning needs to be fixed. He believes it won’t realize its full potential, and won’t deliver a true AI revolution, until it can go beyond pattern recognition and learn more about cause and effect. In other words, he says, deep learning needs to start asking why things happen.

The 55-year-old professor at the University of Montreal, who sports bushy gray hair and eyebrows, says deep learning works well in idealized situations but won’t come close to replicating human intelligence without being able to reason about causal relationships. “It’s a big thing to integrate [causality] into AI,” Bengio said. “Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life, it is often not the case.”
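The assumption Bengio is criticizing is the standard one that training and deployment data come from the same distribution. As a rough illustration (a toy sketch, not Bengio's experiment; the data and the "shortcut" feature are invented for the demonstration), the snippet below trains a linear classifier where a spurious feature happens to track the label, then evaluates it on data where that correlation has vanished. The model's accuracy collapses because it learned a correlation, not a cause:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_corr):
    """Labels depend on a noisy true signal; a second 'shortcut'
    feature agrees with the label at rate `shortcut_corr`."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 1.0, n)        # weakly informative
    agree = rng.random(n) < shortcut_corr     # does shortcut match label?
    shortcut = np.where(agree, y, 1 - y) + rng.normal(0, 0.1, n)
    return np.column_stack([signal, shortcut]), y

# Train where the shortcut almost always agrees with the label...
X_train, y_train = make_data(5000, shortcut_corr=0.95)
model = LogisticRegression().fit(X_train, y_train)

# ...then test where that correlation is gone and only the true
# signal remains informative.
X_test, y_test = make_data(5000, shortcut_corr=0.5)
print("train accuracy:", model.score(X_train, y_train))  # high
print("test accuracy:", model.score(X_test, y_test))     # much lower, near chance
```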

Machine-learning systems, deep learning included, are highly specific: each is trained for a particular task, such as recognizing cats in images or spoken commands in audio. Since bursting onto the scene around 2012, deep learning has demonstrated a particularly impressive ability to recognize patterns in data; it's been put to many practical uses, from spotting signs of cancer in medical scans to uncovering fraud in financial data.

But deep learning is fundamentally blind to cause and effect. Unlike a real doctor, a deep learning algorithm cannot explain why a particular image may suggest disease. This means deep learning must be used cautiously in critical situations.

Understanding cause and effect would make existing AI systems smarter and more efficient. A robot that understands that dropping things causes them to break would not need to toss dozens of vases onto the floor to see what happens to them.
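A hand-built causal model makes that difference concrete. In the hypothetical sketch below (the graph and probabilities are made up for illustration), an agent that knows the structural relationship "dropping causes breaking" can answer the interventional question P(break | do(drop)) directly, while a model-free learner must actually run the experiment many times to estimate the same number:

```python
import random

# A tiny hand-specified causal model (illustrative numbers):
# drop -> break, with P(break=1 | drop=1) = 0.9 and
# P(break=1 | drop=0) = 0.01 (vases rarely break on their own).
P_BREAK_GIVEN_DROP = {1: 0.9, 0: 0.01}

def predict_intervention(drop: int) -> float:
    """An agent with the causal graph reads P(break | do(drop))
    off the structural equation -- no vases harmed."""
    return P_BREAK_GIVEN_DROP[drop]

def estimate_by_experiment(drop: int, trials: int = 1000) -> float:
    """A purely statistical learner must run the experiment
    repeatedly and count outcomes to get the same answer."""
    broken = sum(random.random() < P_BREAK_GIVEN_DROP[drop]
                 for _ in range(trials))
    return broken / trials

print("causal model, zero samples:", predict_intervention(1))
print("trial and error, 1000 vases:", estimate_by_experiment(1))
```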

Bengio says the analogy extends to self-driving cars. “Humans don’t need to live through many examples of accidents to drive prudently,” he said. They can just imagine accidents, “in order to prepare mentally if it did actually happen.”

The question is how to give AI systems this ability.
