AI/machine learning

Bias Can Cause Machine Learning To Stumble

  • Machine learning (ML) finds patterns in data. "AI bias" means that it might find the wrong patterns—a system for spotting skin cancer might be paying more attention to whether the photo was taken in a doctor’s office. ML doesn’t "understand" anything—it just looks for patterns in numbers, and if the sample data isn’t representative, the output won’t be either. Meanwhile, the mechanics of ML might make this hard to spot.
     
  • The most obvious and immediately concerning place that this issue can come up is in human diversity, and there are plenty of reasons why data about people might come with embedded biases. But it’s misleading, or incomplete, to think that this is only about people—exactly the same issues will come up if you’re trying to spot a flood in a warehouse or a failing gas turbine. One system might be biased around different skin pigmentation, and another might be biased against Siemens sensors.
     
  • Such issues are not new or unique to machine learning—all complex organizations make bad assumptions and it’s always hard to work out how a decision was taken. The answer is to build tools and processes to check, and to educate the users—make sure people don’t just "do what the AI says." Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.

Machine learning is one of the most important fundamental trends in tech today, and it’s one of the main ways that tech will change things in the broader world in the next decade. As part of this, there are aspects to machine learning that cause concern—its potential effect on employment, for example, and its use for purposes that we might consider unethical, such as new capabilities it might give to oppressive governments. Another, and the topic of this post, is the problem of AI bias.

It’s not simple. 

What is AI Bias?

“Raw data is both an oxymoron and a bad idea;
to the contrary, data should be cooked, with care.”

—Geoffrey Bowker

Until about 2013, if you wanted to make a software system that could, say, recognize a cat in a photo, you would write logical steps. You'd make something that looked for edges in an image, and an eye detector, and a texture analyzer for fur, and try to count legs, and so on, and you'd bolt them all together... and it would never really work. Conceptually, this is rather like trying to make a mechanical horse: it's possible in theory, but in practice, the complexity is too great for us to be able to describe. You end up with hundreds or thousands of hand-written rules without getting a working model.
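To make that concrete, here is a deliberately crude sketch of the old approach in Python. Every function in it is invented for illustration; real rule-based vision pipelines chained together far more stages than this, and each stage was brittle in just this way.

```python
import numpy as np

# A caricature of the pre-2013, hand-written-rules approach.
# All of these function names are hypothetical.

def edge_density(image):
    # Crude edge detector: average magnitude of pixel differences.
    dy = np.abs(np.diff(image, axis=0)).mean()
    dx = np.abs(np.diff(image, axis=1)).mean()
    return dx + dy

def looks_furry(image):
    # "Fur" proxy: lots of fine, high-frequency texture.
    return edge_density(image) > 0.1

def has_eye_like_blobs(image):
    # Stand-in for an eye detector that would itself be
    # hundreds of lines of rules in a real system.
    return (image < 0.2).mean() > 0.01  # some dark regions exist

def is_cat(image):
    # Bolt the rules together and hope.
    return looks_furry(image) and has_eye_like_blobs(image)

# Any photo of a shaggy rug with a couple of dark spots passes.
rug = np.random.rand(64, 64)
print(is_cat(rug))  # True
```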

With machine learning, we don’t use hand-written rules to recognize X or Y. Instead, we take a thousand examples of X and a thousand examples of Y, and we get the computer to build a model based on statistical analysis of those examples. Then we can give that model a new data point and it says, with a given degree of accuracy, whether it fits example set X or example set Y. Machine learning uses data to generate a model, rather than a human being writing the model. This produces startlingly good results, particularly for recognition or pattern-finding problems, and this is the reason why the whole tech industry is being remade around machine learning.
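As a rough sketch of the contrast, here is the same idea in a few lines of Python using scikit-learn. The numbers are synthetic stand-ins for whatever the real data reduces to (pixel values, sensor readings); nothing here is a real dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A thousand examples of X and a thousand of Y, differing
# (on average) along each of five made-up features.
X_examples = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))
Y_examples = rng.normal(loc=1.0, scale=1.0, size=(1000, 5))

data = np.vstack([X_examples, Y_examples])
labels = np.array([0] * 1000 + [1] * 1000)  # 0 = "X", 1 = "Y"

# No hand-written rules: the computer builds the model
# from statistical analysis of the examples.
model = LogisticRegression().fit(data, labels)

# A new data point gets a probabilistic answer, not a certainty.
new_point = rng.normal(loc=1.0, scale=1.0, size=(1, 5))
print(model.predict_proba(new_point))  # e.g. [[0.15, 0.85]]
```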

However, there’s a catch. In the real world, your thousand (or hundred thousand, or million) examples of X and Y also contain A, B, J, L, O, R, and P. Those may not be evenly distributed, and they may be prominent enough that the system pays more attention to L and R than it does to X.

What does that mean in practice? My favorite example is the tendency of image recognition systems to look at a photo of a grassy hill and say "sheep." Most of the example pictures of sheep were taken on grassy hills, because that’s where sheep tend to live, and the grass is a lot more prominent in the images than the little white fluffy things, so that’s where the systems place the most weight.
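You can reproduce the sheep problem with toy numbers. In the sketch below, the two "features" (a grass score and a fluff score) are invented, but the failure is the real one: the spurious signal is stronger in the training examples than the thing you actually care about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Training photos of sheep: almost always grassy, fluff small in frame.
sheep = np.column_stack([rng.uniform(0.7, 1.0, n),   # grass score
                         rng.uniform(0.1, 0.4, n)])  # fluff score
# Training photos without sheep: mostly not grassy.
no_sheep = np.column_stack([rng.uniform(0.0, 0.3, n),
                            rng.uniform(0.0, 0.1, n)])

data = np.vstack([sheep, no_sheep])
labels = np.array([1] * n + [0] * n)
model = LogisticRegression().fit(data, labels)

# An empty grassy hill: lots of grass, no fluff at all.
grassy_hill = np.array([[0.9, 0.0]])
print(model.predict(grassy_hill))  # [1] -- "sheep"
```

Nothing told the model that the fluff matters more than the grass; it simply weighted whatever separated the example sets best.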

A more serious example came up recently with a project to look for skin cancer in photographs. It turns out that dermatologists often put rulers in photos of skin cancer, for scale, but that the example photos of healthy skin do not contain rulers. To the system, the rulers (or rather, the pixels that we see as a ruler) were just differences between the example sets, and sometimes more prominent than the small blotches on the skin. So, the system that was built to detect skin cancer was, sometimes, detecting rulers instead.
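One way you might probe for this, sketched below with invented "ruler" and "lesion" features, is to take examples you know are healthy and toggle only the suspected confound, then watch what the model does.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500

# Cancer photos: ruler almost always present, lesion scores varied.
cancer = np.column_stack([rng.uniform(0.8, 1.0, n),   # "ruler" score
                          rng.uniform(0.3, 1.0, n)])  # "lesion" score
# Healthy photos: no ruler; lesion scores overlap with the cancer set.
healthy = np.column_stack([rng.uniform(0.0, 0.1, n),
                           rng.uniform(0.0, 0.5, n)])

model = LogisticRegression().fit(np.vstack([cancer, healthy]),
                                 [1] * n + [0] * n)

# Probe: healthy-looking skin, with only the ruler score toggled.
healthy_no_ruler = np.array([[0.05, 0.2]])
healthy_with_ruler = np.array([[0.95, 0.2]])
print(model.predict(healthy_no_ruler))    # [0]
print(model.predict(healthy_with_ruler))  # [1] -- it learned rulers
```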

A central thing to understand here is that the system has no semantic understanding of what it’s looking at. We look at a grid of pixels and translate that into sheep, or skin, or rulers, but the system just sees a string of numbers. It isn’t seeing 3D space, or objects, or texture, or sheep. It’s just seeing patterns in data.
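It’s worth seeing just how literal this is. A made-up 4x4 grayscale image is nothing but 16 numbers; a real photo is the same thing with millions of them.

```python
import numpy as np

# What the model actually "sees": not sheep or skin, just numbers.
image = np.array([
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.2, 0.9, 0.8],
    [0.1, 0.1, 0.8, 0.9],
    [0.0, 0.1, 0.9, 0.9],
])

# Flattened, this is the entirety of the model's input.
print(image.flatten())
# [0.1 0.1 0.9 0.9 0.1 0.2 0.9 0.8 0.1 0.1 0.8 0.9 0.  0.1 0.9 0.9]
# Nothing in these 16 numbers says "3D space", "texture", or "sheep".
```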

Meanwhile, the challenge in trying to diagnose issues like this is that the model your machine learning system has generated (the neural network) contains thousands or hundreds of thousands of nodes. There is no straightforward way to look inside the model and see how it’s making the decision; if you could, then the process would be simple enough that you wouldn’t have needed ML in the first place and you could have just written the rules yourself. People worry that ML is a "black box." (As I explain later, however, this issue is often hugely overstated.)
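For a sense of scale, here is a back-of-the-envelope count of the learned numbers in even a small, unremarkable network; the layer sizes are arbitrary, picked only for illustration.

```python
# A tiny, hypothetical image classifier: 784 inputs, two hidden
# layers, 10 outputs. Layer sizes are arbitrary.
layer_sizes = [784, 512, 256, 10]

params = sum(n_in * n_out + n_out  # weights + biases per layer
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 535818 individual learned numbers -- over half a million
```

There is no reading half a million numbers by eye and saying "ah, there's the bit that looks at rulers."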

This, hugely simplified, is the "AI bias" or "machine learning bias" problem: A system for finding patterns in data might find the wrong patterns, and you might not realize. This is a fundamental characteristic of the technology, and it is very well-understood by everyone working on this in academia and at large tech companies, but its consequences are complex and our potential resolutions to those consequences are also complex.
