AI/machine learning

Learning by Forgetting: Deep Neural Networks and the Jennifer Aniston Neuron

Research shows how to understand the role of individual neurons in a neural network.


Have you ever heard of the Jennifer Aniston neuron? In 2005, a group of neuroscientists led by Rodrigo Quian Quiroga published a paper detailing their discovery of a type of neuron that steadily fired whenever a patient was shown a photo of Jennifer Aniston. The neuron in question was not activated when presented with photos of other celebrities. Obviously, we don’t all have Jennifer Aniston neurons, and similar specialized neurons can be activated in response to pictures of other celebrities. However, the Aniston neuron has become one of the most powerful metaphors in neuroscience for describing neurons that focus on a very specific task.

The fascinating thing about the Jennifer Aniston neuron is that it was discovered while Quiroga was researching areas of the brain that cause epileptic seizures. It is well known that epilepsy causes damage across different areas of the brain, but determining those specific areas is still an active area of research. Quiroga’s investigation into damaged brain regions led to discoveries about the functionality of other neurons. Some of Quiroga’s thinking is brilliantly captured in his recent book The Forgetting Machine.

Extrapolating Quiroga’s methodology to the world of deep learning, data scientists from DeepMind published a paper that proposes a technique for learning about the effect of specific neurons in a neural network by causing damage to it. Sounds crazy? Not so much. In software development, as in neuroscience, simulating arbitrary failures is one of the most powerful methods for understanding the functionality of code. DeepMind’s new algorithm can be seen as a version of Chaos Monkey for deep neural networks.

How does the DeepMind neuron deletion method really work? Very simply, the algorithm randomly deletes groups of neurons in a deep neural network and tries to understand their specific effect by running the modified network against the data set it was trained on.
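To make the idea concrete, here is a minimal sketch of that kind of deletion experiment, written in PyTorch with a hypothetical toy model and synthetic data (not DeepMind’s actual code): a forward hook zeroes the outputs of a randomly chosen group of hidden units, and accuracy is measured before and after the damage.

```python
# Minimal sketch of neuron deletion: zero out a random group of hidden units
# and compare accuracy on the same data the network was trained on.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for a trained network and its training data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
X = torch.randn(512, 20)
y = torch.randint(0, 10, (512,))

def accuracy(net):
    with torch.no_grad():
        return (net(X).argmax(dim=1) == y).float().mean().item()

def ablate(net, layer, num_units):
    """Zero the outputs of a random group of units in `layer` via a forward hook."""
    idx = torch.randperm(layer.out_features)[:num_units]

    def hook(_module, _inputs, out):
        out[:, idx] = 0.0   # "delete" the selected neurons
        return out

    return layer.register_forward_hook(hook)

baseline = accuracy(model)
handle = ablate(model, model[0], num_units=16)   # delete 16 of the 64 hidden units
damaged = accuracy(model)
handle.remove()

print(f"accuracy before deletion: {baseline:.3f}, after: {damaged:.3f}")
```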

When DeepMind evaluated the new technique in image recognition scenarios, it produced some surprising results.

Although many previous studies have focused on understanding easily interpretable individual neurons (e.g., “cat neurons,” or neurons in the hidden layers of deep networks that are only active in response to images of cats), DeepMind found that these interpretable neurons are no more important than confusing neurons with difficult-to-interpret activity.

Networks that correctly classify unseen images are more resilient to neuron deletion than networks that can only classify images they have seen before. In other words, networks that generalize well are much less reliant on single directions than those that memorize.
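One way this comparison can be quantified is with a deletion-robustness curve: delete progressively larger groups of units and track how quickly accuracy degrades, for a network trained on real labels versus one trained to memorize random labels. The fragment below continues the toy sketch above (reusing the hypothetical model, X, y, accuracy, and ablate helpers) and performs that sweep for a single network.

```python
# Continuing the toy sketch above: sweep the fraction of deleted hidden units
# and record accuracy, the raw measurement behind a deletion-robustness curve.
# A network that generalizes well would show a flatter curve than one that memorizes.
for frac in (0.0, 0.25, 0.5, 0.75):
    num_units = int(frac * model[0].out_features)
    handle = ablate(model, model[0], num_units=num_units)
    print(f"deleted {frac:.0%} of hidden units -> accuracy {accuracy(model):.3f}")
    handle.remove()
```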

Read the full story here.