

The AI Delusion: Why Humans Trump Machines

Credit: Erik Dinnel/Allen Institute.
Scientists at The Allen Institute for Brain Science in Seattle.

As well as playing a key role in cracking the Enigma code at Bletchley Park during the Second World War, and conceiving of the modern computer, the British mathematician Alan Turing owes his public reputation to the test he devised in 1950. Crudely speaking, it asks whether a human judge can distinguish between a human and an artificial intelligence based only on their responses to conversation or questions. This test, which he called the “imitation game,” was popularised 18 years later in Philip K. Dick’s science-fiction novel Do Androids Dream of Electric Sheep? But Turing is also widely remembered for his death by suicide in 1954, quite probably driven to it by the hormone treatment he was instructed to take as an alternative to imprisonment for homosexuality (deemed to make him a security risk). It is only comparatively recently that his genius has been afforded its full due: in 2009, Gordon Brown apologised on behalf of the British government for his treatment; in 2014, his posthumous star rose further when Benedict Cumberbatch played him in The Imitation Game; and in 2021, he will be the face on the new £50 note.

He may be famous now, but his test is still widely misunderstood. Turing’s imitation game was never meant as a practical means of distinguishing replicant from human. It posed a hypothetical scenario for considering whether a machine can “think.” If nothing we can observe in a machine’s responses lets us tell it apart from a human, what empirical grounds can we adduce for denying it that capacity? Despite the futuristic context, it merely returns us to the old philosophical saw that we can’t rule out the possibility of every other person being a zombie-like automaton devoid of consciousness but very good at simulating it. We’re back to Descartes’ solipsistic axiom “cogito ergo sum”: in essence, all I can be sure of is myself.

Researchers in artificial intelligence (AI) today don’t set much store by the Turing Test. In some circumstances, it has been surpassed already. It’s not an unfamiliar experience today to wonder whether we’re interacting online with a human or an AI system, and even alleged experts have been taken in by bots such as “Eugene Goostman,” which, posing as a Ukrainian teenager, fooled a third of the judges at a 2014 Royal Society event into thinking it was human. Six years on, that sort of stunt is unfashionable in serious AI, regarded as being beside the point.

As Gary Marcus and Ernest Davis explain in Rebooting AI, the reason we might want to make AI more human-like is not to simulate a person but to improve the performance of the machine. Trained as a cognitive scientist, Marcus is one of the most vocal and perceptive critics of AI hype, while Davis is a prominent computer scientist; the duo are perfectly positioned to inject some realism into this hyperbole-prone field.

Most AI systems used today—whether for language translation, playing chess, driving cars, face recognition, or medical diagnosis—deploy a technique called machine learning. So-called artificial neural networks, software loosely modelled on the highly interconnected web of neurons in our brains, are trained to spot patterns in data. During training, the strengths of the interconnections between the nodes in the neural network are adjusted until the system can reliably make the right classifications. It might learn, for example, to spot cats in a digital image or to generate passable translations from Chinese to English.
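That training loop can be sketched in miniature. The toy example below (my illustration, not drawn from the books under review) trains a single artificial neuron, a perceptron, whose connection strengths are adjusted until it reliably classifies the logical AND of two binary inputs; real systems do the same thing with millions of weights and far harder tasks.

```python
import numpy as np

# Toy sketch of the training loop described above: a single artificial
# neuron (a perceptron) whose connection strengths are adjusted until it
# reliably makes the right classifications -- here, the logical AND of
# two binary inputs.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])  # target labels: 1 only when both inputs are 1

w = np.zeros(2)  # connection strengths ("weights"), adjusted during training
b = 0.0          # bias term

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # Nudge the weights in proportion to the error on this example.
        w += (target - pred) * xi
        b += (target - pred)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1] -- the right classification for every input
```

After a few passes over the data the errors stop, because the adjusted weights now separate the two classes; that incremental nudging of connection strengths is the essence of the learning the paragraph describes.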

Although the ideas behind neural networks and machine learning go back decades, this type of AI really took off in the 2010s with the introduction of deep learning: in essence, adding more layers of nodes between the input and output. That’s why DeepMind’s program AlphaGo is able to defeat expert human players in the very complex board game Go, and Google Translate is now so much better than it was in its comically clumsy youth (although it’s still not perfect).
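To make “more layers” concrete, the sketch below (illustrative only, with arbitrary layer sizes of my choosing) builds a network’s forward pass as a stack of layers; “deep” simply means stacking more of them between input and output, each one re-transforming the previous layer’s output.

```python
import numpy as np

# Illustrative only: "deep" learning just means more layers of nodes
# between the input and the output. Each layer re-transforms the previous
# layer's output, letting the network build up more abstract patterns.
# The sizes used here are arbitrary.

rng = np.random.default_rng(42)

def layer(n_in, n_out):
    # One layer = a weight matrix plus a bias vector.
    return rng.normal(0, 0.5, (n_in, n_out)), np.zeros(n_out)

def relu(z):
    # A simple nonlinearity applied between layers.
    return np.maximum(0.0, z)

# Four stacked layers: 4 input features -> 16 -> 16 -> 3 output scores.
layers = [layer(4, 16), layer(16, 16), layer(16, 16), layer(16, 3)]

def forward(x, layers):
    for W, b in layers[:-1]:
        x = relu(x @ W + b)   # hidden layers
    W, b = layers[-1]
    return x @ W + b          # final layer: raw class scores

x = rng.normal(0, 1, (1, 4))  # one input example with 4 features
scores = forward(x, layers)
print(scores.shape)  # (1, 3): one score per output class
```

A “shallow” network would keep only the first and last entries of that list; deepening it is literally a matter of inserting more hidden layers, which is what made the 2010s breakthroughs possible once the data and computing power existed to train them.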

In Artificial Intelligence, Melanie Mitchell delivers an authoritative stroll through the development and state of play of this field. A computer scientist who began her career by persuading cognitive-science guru Douglas Hofstadter to be her doctoral supervisor, she explains how the breathless expectations of the late 1950s were left unfulfilled until deep learning came along. She also explains why AI’s impressive feats to date are now hitting the buffers because of the gap between narrow specialization and human-like general intelligence.

The problem is that deep learning has no way of checking its conclusions against “common sense” and so can make ridiculous errors. It is, say Marcus and Davis, “a kind of idiot savant, with miraculous perceptual abilities, but very little overall comprehension.” In image classification, not only can this shortcoming lead to absurd results but the system can also be fooled by carefully constructed “adversarial” examples. Pixels can be rejigged in ways that look to us indistinguishable from the original but which AI confidently garbles, so that a van or a puppy is declared an ostrich. By the same token, images can be constructed from what looks to the human eye like random pixels but which AI will identify as an armadillo or a peacock.
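The adversarial effect is easy to reproduce even on the simplest possible classifier. In the toy sketch below (my construction, not taken from the systems discussed above), a linear classifier’s verdict is flipped by shifting every input feature by about one per cent of its typical size, a change far too small to notice.

```python
import numpy as np

# Toy adversarial example (illustrative): for a plain linear classifier,
# shifting every feature by a tiny amount in the worst-case direction
# (the sign of its weight) flips the verdict, even though each individual
# change is imperceptibly small.

rng = np.random.default_rng(1)

w = rng.normal(0, 1, 1000)   # classifier weights: score > 0 means "puppy"

# Construct an input the classifier gets right, with a score of exactly 1.
x0 = rng.normal(0, 1, 1000)
x = x0 - ((w @ x0 - 1.0) / (w @ w)) * w

# Nudge each of the 1000 features by only 0.01 -- roughly 1% of a typical
# feature's magnitude -- in the direction that lowers the score most.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(w @ x > 0)      # True:  the clean input is classified "puppy"
print(w @ x_adv > 0)  # False: the near-identical input is not
```

Because the tiny shifts all push the same way, their effects add up across a thousand features and swamp the clean score. High-dimensional inputs such as images leave real networks open to the same trick, which is why a handful of rejigged pixels can turn a puppy into an ostrich.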


