Why Hasn’t AI Changed the World Yet?

Credit: BBC.
Kursat Ceylan uses the smart cane that he codeveloped.

When Kursat Ceylan, who is blind, was trying to find his way to a hotel, he used an app on his phone for directions but also had to hold his cane and pull his luggage.

He ended up walking into a pole, cutting his forehead.

This inspired him to develop, along with a partner, Wewalk—a cane equipped with artificial intelligence (AI) that detects objects above chest level and pairs with apps including Google Maps and Amazon's Alexa so the user can ask questions.

Jean Marc Feghali, who helped to develop the product, has an eye condition of his own: his vision is severely impaired in low light.

While the smart cane currently integrates only basic AI functions, the aim is for Wewalk to use information gathered from the gyroscope, accelerometer, and compass installed inside the cane. That data would help the company understand more about how visually impaired people use the product, and how they behave in general, so it can build a far more sophisticated product using machine learning, an advanced form of AI.

This would include creating an AI voice service with Microsoft specifically designed for visually impaired people and eventually allowing the device to integrate with other Internet-connected devices.

"It isn't just meant to be a smart cane, it's meant to be connected with transport networks and autonomous vehicles," Feghali said. The idea is that Wewalk could interact with traffic lights to help people cross roads without needing to push a button and could alert a bus to wait at a specific stop ahead of time.

Such innovations would be welcome, but they perhaps fall short of the dreams originally inspired by AI. When the field emerged in the middle of the 20th century, the hope was that computers would one day operate on their own with human-like abilities, a capability known as generalized AI.

"Back in the 1970s, there were predictions that, by 2020, we should have generalized AI by now. We should have been having some moon and Mars bases, and we're nowhere near that," said Aditya Kaul, research director at Omdia.

Progress has been picking up in recent years as artificial neural networks have become more sophisticated.

Inspired by the way the brain forms connections and learns, artificial neural networks are layers of simple mathematical units that are fed data until they learn to recognize patterns and draw their own conclusions, a process known as deep learning.
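The feed-it-data-until-it-learns loop can be sketched with the simplest possible artificial neuron, a perceptron, trained here to recognize the logical AND pattern. Deep-learning systems stack many layers of far more capable units, but the spirit is the same: show the network examples, nudge its internal weights toward the right answers, and repeat.

```python
# Minimal sketch of learning from examples: a single artificial neuron
# ("perceptron") adjusts its weights until it reproduces a pattern --
# here, the logical AND of two inputs.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Nudge weights toward correct answers, one example at a time."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted       # zero when already correct
            w[0] += lr * error * x1          # move weights toward target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The pattern to learn: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    print(x1, x2, "->", predict(w, b, x1, x2))
```

After training, the neuron answers all four cases correctly, not because anyone programmed the AND rule, but because the data pushed its weights there.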

In 2012, Kaul explains, a neural-network architecture known as AlexNet emerged, which started a deep-learning revolution.

"That has led to a number of different innovations, from facial recognition to voice and speech recognition, as well as, to some extent, what you see on Netflix or Amazon in personalizing and predicting what you want to watch or buy," he said.

