
Can Neural Networks Show Imagination? DeepMind Thinks They Can

Incorporating imagination into AI agents has long been an elusive goal of researchers in the space. Imagine AI programs that are able not only to learn new tasks but also to plan and reason about the future.


Creating agents that resemble the cognitive abilities of the human brain has been one of the most elusive goals of the artificial intelligence (AI) space. Recently, I’ve been spending time on a couple of scenarios related to imagination in deep learning systems, which reminded me of a very influential paper on this subject that Alphabet’s subsidiary DeepMind published last year.

Imagination is one of those magical features of the human mind that differentiate us from other species. From the neuroscience standpoint, imagination is the ability of the brain to form images or sensations without any immediate sensory input. Imagination is a key element of our learning process, as it enables us to apply knowledge to specific problems and better plan for future outcomes. As we execute tasks in our daily lives, we are constantly imagining potential outcomes in order to optimize our actions. Not surprisingly, imagination is often perceived as a foundational enabler of planning from a cognitive standpoint.

Incorporating imagination into AI agents has long been an elusive goal of researchers in the space. Imagine (very appropriately) AI programs that are able not only to learn new tasks but also to plan and reason about the future. Recently, we have seen some impressive results in the area of adding imagination to AI agents in systems such as AlphaGo. The DeepMind team in particular has been helping to formulate an initial theory of imagination-augmented AI agents. Last year, it published a new revision of a famous research paper that outlined one of the first neural network architectures to achieve this goal.

Deep reinforcement learning (RL) is often seen as the natural foundation for imagination-augmented AI agents because it attempts to correlate observations with actions. However, deep RL systems typically require large amounts of training, and the resulting knowledge is tailored to a very specific set of tasks in an environment. The DeepMind paper proposes an alternative to traditional deep RL models: architectures that use environment simulations to learn to “interpret” imperfect predictions. The idea is to run parallel models that extract useful knowledge from simulated rollouts, knowledge that can then feed the core model. Just as we often judge the level of imagination of an individual ("that guy has no imagination"), we can see the imagination models as an augmented capability of deep learning programs.
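To make the idea concrete, here is a minimal toy sketch (not the paper's actual architecture) of an agent that "imagines" by rolling a learned environment model forward a few steps and scoring each candidate action by its simulated future. The dynamics function, reward, and inner policy below are all hypothetical stand-ins for the trained networks the paper uses.

```python
import numpy as np

def environment_model(state, action):
    # Stand-in for a learned dynamics model: predicts the next state.
    # A real I2A-style agent would use a trained neural network here.
    return np.tanh(state + 0.1 * action)

def imagine_rollout(state, action, depth=3):
    """Roll the environment model forward to score one candidate action."""
    total_reward = 0.0
    s = state
    a = action
    for _ in range(depth):
        s = environment_model(s, a)
        total_reward += -np.sum(s ** 2)  # toy reward: stay near the origin
        a = -np.sign(s)                  # simple imagined inner policy
    return total_reward

def choose_action(state, actions):
    # Aggregate the imagined trajectories: pick the action whose
    # simulated future looks best. The real architecture instead learns
    # to "interpret" these (imperfect) rollouts with another network.
    scores = [imagine_rollout(state, a) for a in actions]
    return actions[int(np.argmax(scores))]

state = np.array([0.5, -0.2])
actions = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
           np.array([0.0, 1.0]), np.array([0.0, -1.0])]
best = choose_action(state, actions)
```

The key design point the paper makes is that the rollouts are treated as evidence rather than ground truth: because the learned model is imperfect, a separate network learns how much to trust each imagined trajectory instead of naively maximizing over them as this sketch does.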

Read the full story here.