AI/machine learning

Could “Mindful AI” Be the Key to Successful AI?

“Being mindful is about being intentional,” said Ahmer Inam, chief AI officer at technology consulting firm Pactera Edge. “Mindful AI is about being aware and purposeful about the intention of, and emotions we hope to evoke through, an artificially intelligent experience.”


Back in April, when the pandemic was at its peak in many parts of the world, I spoke to Ahmer Inam, the chief AI officer at Pactera Edge, a technology consulting firm in Redmond, Washington. At the time, Inam was focused on how the pandemic was wreaking havoc on AI models trained on historical data.

Last week, I caught up with Inam again. Lately, he’s been thinking a lot about why AI projects so often fail, especially in large organizations. To Inam, the answer to this problem—and to many others surrounding the technology—is something called “Mindful AI.”

“Being mindful is about being intentional,” Inam said. “Mindful AI is about being aware and purposeful about the intention of, and emotions we hope to evoke through, an artificially intelligent experience.”

OK, I admit that when he said that, my first thought was that it sounded kinda out there, like maybe Inam should lay off the edibles for a week or two, and that Mindful AI has the ring of a gimmicky catchphrase. But the more Inam explained what he meant, the more I began to think he was on to something. (And, just to be clear, Inam did not coin the term Mindful AI. Credit should primarily go to Ovetta Sampson, a principal creative director at Microsoft, and De Kai, a professor at the Hong Kong University of Science and Technology and a member of Berkeley’s International Computer Science Institute.)

Inam is arguing for a first-principles approach to AI. He says that organizations too often go wrong because they adopt AI for all the wrong reasons: because the C-suite wrongly believes it’s some sort of technological silver bullet that will fix a fundamental problem in the business, because the company is desperate to cut costs, or because executives have heard competitors are using AI and are afraid of being left behind. None of these are, in and of themselves, good reasons to adopt the technology, Inam says.

Instead, according to Inam, three fundamental pillars should undergird any use of AI.

First, it should be “human-centric.” That means thinking hard about what human challenge the technology is meant to solve, and just as hard about what its impact will be, both on those who will use it (for instance, the company’s employees) and on those who will be affected by its output, such as customers.

Second, AI must be trustworthy. This pillar encompasses ideas like explainability and interpretability—but it goes further, looking at whether all stakeholders in a business are going to believe that the system is arriving at good outputs.

Third, AI must be ethical. This means scrutinizing where the data used to train an AI system comes from and what biases exist in that data. But it also means thinking hard about how that technology will be used: Even a perfect facial recognition algorithm, for instance, might not be ethical if it is going to be used to reinforce a biased policing strategy. “It means being mindful and aware of our own human histories and biases that are intended or unintended,” Inam said.
