
Computers Evolve a New Path Toward Human Intelligence

Neural networks that borrow strategies from biology are making profound leaps in their abilities. Is ignoring a goal the best way to make truly intelligent machines?

Credit: Kevin Hong for Quanta Magazine.

In 2007, Kenneth Stanley, a computer scientist at the University of Central Florida, was playing with Picbreeder, a website he and his students had created, when an alien became a race car and changed his life. On Picbreeder, users would see an array of 15 similar images, composed of geometric shapes or swirly patterns, all variations on a theme. On occasion, some might resemble a real object, such as a butterfly or a face. Users were asked to select one, and they typically clicked on whatever they found most interesting. Once they did, a new set of images, all variations on their choice, would populate the screen. From this playful exploration, a catalog of fanciful designs emerged.

Stanley is a pioneer in a field of artificial intelligence called neuroevolution, which co-opts the principles of biological evolution to design smarter algorithms. With Picbreeder, each image was the output of a computational system similar to a neural network. When an image spawned, its underlying network mutated into 15 slightly different variations, each of which contributed a new image. Stanley didn’t intend for Picbreeder to generate anything in particular. He merely had a hunch that he, or the public, might learn something about evolution, or about artificial intelligence.
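In concrete terms, each Picbreeder image came from a compositional pattern-producing network, a function that maps pixel coordinates to a color or intensity. The Python sketch below captures the parent-and-offspring step in miniature; the two-layer architecture, sine activation, and Gaussian weight mutations are illustrative assumptions, not the site's actual implementation.

    import numpy as np

    class ImageNetwork:
        """Tiny stand-in for Picbreeder's underlying networks.

        Maps an (x, y) pixel coordinate to an intensity. The architecture,
        activations, and mutation scheme are illustrative assumptions.
        """

        def __init__(self, hidden=8, rng=None):
            self.rng = rng if rng is not None else np.random.default_rng()
            self.w1 = self.rng.normal(size=(2, hidden))   # coordinates -> hidden layer
            self.w2 = self.rng.normal(size=(hidden, 1))   # hidden layer -> intensity

        def render(self, size=64):
            # Evaluate the network at every pixel in the square [-1, 1] x [-1, 1].
            xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
            coords = np.stack([xs.ravel(), ys.ravel()], axis=1)
            hidden = np.sin(coords @ self.w1)             # periodic activation yields swirly patterns
            return np.tanh(hidden @ self.w2).reshape(size, size)

        def mutate(self, scale=0.2):
            # A child is a copy of the parent with small random changes to its weights.
            child = ImageNetwork(hidden=self.w1.shape[1], rng=self.rng)
            child.w1 = self.w1 + self.rng.normal(scale=scale, size=self.w1.shape)
            child.w2 = self.w2 + self.rng.normal(scale=scale, size=self.w2.shape)
            return child

    # When a user picks an image, its network spawns 15 mutated variants,
    # each rendered as the next screen of candidates.
    parent = ImageNetwork()
    offspring = [parent.mutate() for _ in range(15)]
    next_screen = [net.render() for net in offspring]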

One day, Stanley spotted something resembling an alien face on the site and began evolving it, selecting a child and grandchild and so on. By chance, the round eyes moved lower and began to resemble the wheels of a car. Stanley went with it and evolved a spiffy-looking sports car. He kept returning to the thought that if he had set out to evolve a car from scratch, rather than from an alien, he might never have succeeded, and he wondered what that implied about attacking problems directly. “It had a huge impact on my whole life,” he said. He looked at other interesting images that had emerged on Picbreeder, traced their lineages, and realized that nearly all of them had evolved by way of something that looked completely different. “Once I saw the evidence for that, I was just blown away.”

Stanley’s realization led to what he calls the steppingstone principle—and, with it, a way of designing algorithms that more fully embraces the endlessly creative potential of biological evolution.

Evolutionary algorithms have been around for a long time. Traditionally, they’ve been used to solve specific problems. In each generation, the solutions that perform best on some metric—the ability to control a two-legged robot, say—are selected and produce offspring. While these algorithms have seen some successes, they can be more computationally intensive than other approaches such as “deep learning,” which has exploded in popularity in recent years.
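For reference, the goal-driven loop that traditional evolutionary algorithms follow can be sketched in a few lines. The objective function, population size, and mutation scheme below are placeholder assumptions, not any particular published system; the point is that only candidates scoring well on the stated goal get to reproduce.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(params):
        # Placeholder objective. In a real system this would score how well the
        # candidate (e.g., neural-network weights) controls a two-legged robot.
        return -np.sum((params - 3.0) ** 2)

    def evolve(pop_size=100, dims=10, generations=200, elite_frac=0.2, noise=0.1):
        population = rng.normal(size=(pop_size, dims))
        n_elite = max(1, int(pop_size * elite_frac))
        for _ in range(generations):
            scores = np.array([fitness(p) for p in population])
            elite = population[np.argsort(scores)[-n_elite:]]         # keep the top performers
            parents = elite[rng.integers(0, n_elite, size=pop_size)]  # sample parents from the elite
            population = parents + rng.normal(scale=noise, size=parents.shape)  # mutate
        return population[np.argmax([fitness(p) for p in population])]

    best = evolve()  # converges toward the parameters that maximize the placeholder objective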

The steppingstone principle goes beyond traditional evolutionary approaches. Instead of optimizing for a specific goal, it embraces creative exploration of all possible solutions. By doing so, it has paid off with groundbreaking results. Earlier this year, one system based on the steppingstone principle mastered two video games that had stumped popular machine learning methods. And in a paper published last week in Nature, DeepMind—the artificial intelligence company that pioneered the use of deep learning for problems such as the game of Go—reported success in combining deep learning with the evolution of a diverse population of solutions.
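One way such goal-agnostic exploration is often implemented is novelty search, an approach Stanley helped develop: each generation keeps the individuals whose behavior is least like anything recorded in an archive of past behaviors, with no reference to a target objective. The behavioral descriptor and hyperparameters in this sketch are illustrative assumptions, and it is not the code behind the systems mentioned above.

    import numpy as np

    rng = np.random.default_rng(0)

    def behavior(params):
        # Behavioral descriptor: a low-dimensional summary of what a candidate
        # does (for a robot, say, where it ends up). Here it is just an
        # illustrative projection of the parameters.
        return params[:2]

    def novelty(b, archive, k=10):
        # Novelty = average distance to the k most similar behaviors seen so far.
        if not archive:
            return np.inf
        dists = np.sort(np.linalg.norm(np.array(archive) - b, axis=1))
        return dists[:k].mean()

    def novelty_search(pop_size=50, dims=10, generations=100, n_elite=10, noise=0.1):
        population = rng.normal(size=(pop_size, dims))
        archive = []
        for _ in range(generations):
            behaviors = [behavior(p) for p in population]
            scores = np.array([novelty(b, archive) for b in behaviors])
            archive.extend(behaviors)                          # remember where search has been
            elite = population[np.argsort(scores)[-n_elite:]]  # keep the most novel, not the fittest
            parents = elite[rng.integers(0, n_elite, size=pop_size)]
            population = parents + rng.normal(scale=noise, size=parents.shape)
        return archive  # a spread of distinct behaviors, any of which may serve as a steppingstone

    archive = novelty_search()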

Read the full story here.