The Causes of AI’s Ethical Problems

AI is a marvellous tool that has the potential to make our lives safer, healthier and happier. But as the influence of AI on our decisions reaches wider and deeper into our daily lives, it’s becoming ever clearer that there are creases to iron out before it can fully deliver on this promise. Until these problems are addressed, AI may do more harm than good. Much of the public is suspicious of how AI is being used, and they’re right to be.

The reasons for AI’s ethical crisis are many and complex. For some of them, the path to bad ethical consequences is easy to trace. For one thing, there’s the problem of biased datasets: if your data misrepresents the actual variation in the world, any ML model you train on that data is going to inherit that prejudice, as the sketch below illustrates. It’s hard to produce unbiased data in a biased world, though research is underway on how to scrub the bias out of datasets in principled ways. For another, a lack of diversity and inclusion in the field means that social blind spots can lead AI projects astray at various points in their lifecycle.
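To make the biased-dataset problem concrete, here’s a minimal sketch (a toy example of my own, not from any real system) in which a classifier trained on data that underrepresents one group ends up performing no better than chance for that group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, label_feature):
    """Toy data: in group A the label tracks feature 0, in group B feature 1."""
    X = rng.normal(size=(n, 2))
    y = (X[:, label_feature] > 0).astype(int)
    return X, y

# Training data misrepresents the world: 95% group A, only 5% group B.
Xa, ya = make_group(1900, label_feature=0)
Xb, yb = make_group(100, label_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# On balanced test data, the model has inherited the skew.
Xa_test, ya_test = make_group(5000, label_feature=0)
Xb_test, yb_test = make_group(5000, label_feature=1)
print("accuracy on group A:", model.score(Xa_test, ya_test))  # high
print("accuracy on group B:", model.score(Xb_test, yb_test))  # ~0.5, i.e. chance
```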

But there’s another source of AI’s ethical crisis whose ethical character is less obvious: AI still has an uneasy relationship with causality. It’s no news that data science has a long way to go to incorporate causal thinking into its methods; prominent researchers like Judea Pearl have been pressing the field on the urgency of this challenge for years. What’s less obvious is what causality has to do with the specific ethical challenges that AI faces. That’s what I’ll explore here.

Causality and Data

First, what exactly is causality and why does it matter to data science? (For a richer discussion, see this series of posts). The second question is much easier to answer than the first. Data professionals are intensely interested in correlations: when training a machine learning model, we’re trying to find features that correlate with the target variable either individually or in some combination. But from day one of our training, we’re reminded that correlation doesn’t imply causation: you can’t infer that A causes B from the fact that A correlates with B. We’re reminded of various reasons for this, such as the existence of confounding variables and spurious correlations. But that raises the question: if correlation doesn’t imply causation, what does? This question is important because causality matters to us: Usually, we don’t just want to know what will happen — we want to know why it happens. For example, we don’t just want to know who will contract a disease and when. We want to know why people contract disease and, critically, what we can do about it.

According to Pearl, no amount of standard statistical information, however elaborate, can fully answer questions like these, because it deals only with observations: it focuses on questions about, say, the probability of B given that we observe A. To make sense of causal relationships, we need to talk about the probability of B if we do A. In short, we need to add a new type of logic to our toolkit: the logic of interventions. Pearl and others have developed a sophisticated set of tools for encoding causal information into automated reasoning using this logic of interventions, and for making educated guesses about causality from observational data. (For Python users who’d like to venture into this brave new world of cause-savvy data science, Adam Kelleher has been developing a pandas extension designed to support causal inferences.)
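To see the difference in action, here’s a minimal simulation (a toy model of my own construction, not Pearl’s) in which observing A and intervening on A give very different answers about B, because a confounder C drives them both:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural model: C causes both A and B; A has *no* effect on B.
C = rng.binomial(1, 0.5, size=n)
A = rng.binomial(1, np.where(C == 1, 0.9, 0.1))  # C -> A
B = rng.binomial(1, np.where(C == 1, 0.8, 0.2))  # C -> B

# Observational question: P(B=1 | A=1). Seeing A=1 tells us C is probably 1.
print("P(B=1 | A=1)     =", round(B[A == 1].mean(), 2))  # ~0.74

# Interventional question: P(B=1 | do(A=1)). Setting A ourselves cuts the
# C -> A arrow, so B's distribution is untouched: A makes no difference.
B_do = rng.binomial(1, np.where(C == 1, 0.8, 0.2))  # forcing A=1 plays no role
print("P(B=1 | do(A=1)) =", round(B_do.mean(), 2))  # ~0.5
```

The two numbers differ because observing A is evidence about its causes, while intervening on A severs A from those causes; that asymmetry is exactly what correlation alone can’t capture.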

Three Connections Between Causality and Ethics

With that in mind, what does all this talk of causality have to do with AI’s ethics problems? Well, for one thing, there’s no way for any discussion about ethics to get off the ground without it. The reason is simple: ethics asks questions about what we should do, and those questions only make sense if you assume that what we do makes a difference. Since causality is precisely about difference-making, a clear sense of the world’s causal structure is the vital connective tissue that links our ethical decisions and our values to our material reality. Without a sense of what would be different if we did A instead of B, there’s no reason to think about what to do at all.

Secondly, many of the ways that data science can lead to bad ethical consequences stem from bad causal reasoning. This goes further than just the well-known pitfall of inferring causation from correlation. Take the common pitfall of selection bias, for example. A dataset is subject to selection bias when it’s implicitly restricted to a narrow portion of the reality we want to understand. This causes a problem, in part, because causal relationships in the world underwrite phenomena of conditional dependence and conditional independence. Two variables with a common cause can fail to look statistically dependent, even though they are, if the dataset implicitly controls for that common-cause variable. The opposite applies to common effects: a dataset can show a correlation between two variables that are actually unrelated if the dataset is restricted to a common effect of those variables. Because of this, we run the risk of failing to spot a relationship that in fact exists, or of falsely inferring a statistical relationship that doesn’t exist outside the scope of the data. These sorts of mistakes can lead machine learning algorithms to generate inaccurate and potentially harmful predictions.
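The common-effect case is easy to demonstrate. Below is a toy sketch (my own illustration, with made-up variables) in which two genuinely independent traits become correlated the moment we restrict the dataset to their common effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

talent = rng.normal(size=n)           # two independent causes...
connections = rng.normal(size=n)
hired = (talent + connections) > 1.0  # ...of a common effect

# Full population: essentially zero correlation.
print(round(np.corrcoef(talent, connections)[0, 1], 2))                # ~0.0

# Dataset restricted to the common effect: a strong spurious correlation.
print(round(np.corrcoef(talent[hired], connections[hired])[0, 1], 2))  # ~ -0.6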

Thirdly, causal concepts may turn out to be a critical ingredient in operationalizing ethical concepts like bias and fairness. To “operationalize” something is to take an idea that’s very important but lacks a clear and agreed meaning, like “privacy” or “diversity”, and engineer a concept based on it that’s precise and measurable. While the operationalized concept may not capture everything about the original idea, and so shouldn’t be taken to replace it, it gives practitioners something to work with.

As it happens, some important proposals for operationalizing ethical concepts for AI purposes involve causality in an important way. Examples include explainability and fairness. In the case of explainability, one proposed way to understand what it means for an AI system’s decision to be explainable is that we can understand what caused the system to make the decision it did, and so what might have led it to make a different one. In the case of fairness, one proposed definition is counterfactual: a decision about a person is fair if the algorithm would have made the same decision even if we changed the person’s demographic (say, their race or gender) while keeping everything else the same; in short, if the protected characteristic makes no difference to the decision. This definition tries to capture the intuition that fair treatment isn’t just about what decisions we make, but about why we make them.
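As a rough illustration of the counterfactual idea, here’s a deliberately naive sketch (my own, with a hypothetical model and column name). It flips the protected attribute and checks whether the decision changes. The formal definition in the literature is stronger than this: it uses a causal model to propagate the change through everything downstream of the attribute, rather than editing one column in isolation:

```python
import pandas as pd

def naive_counterfactual_check(model, applicants: pd.DataFrame,
                               attr: str = "gender") -> pd.Series:
    """True where the model's decision survives flipping `attr` (assumed 0/1)."""
    flipped = applicants.copy()
    flipped[attr] = 1 - flipped[attr]  # crude flip; a real check would also
                                       # update variables *caused by* `attr`
    same = model.predict(applicants) == model.predict(flipped)
    return pd.Series(same, index=applicants.index)
```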

Social Feedback

Finally, another way that causality matters to AI ethics is that decisions about data can create a kind of causal feedback through the effect those decisions have on the societies they touch. Take the case of Goodhart’s law, the principle that any proxy measure ceases to be reliable once people start treating it as their direct goal. For example, recommendation algorithms on social media might take measures such as Likes or keywords as indicators of quality or relevance. But as soon as content creators figure out what the algorithms look for, they can start tailoring their content to maximize those measures rather than focusing directly on quality. One way to view this problem is as a causal one: we begin with a measure that we take to be a reliable proxy for something else, but once people start trying to maximize that measure, their actions intervene on the system in a way that breaks the relationship between proxy and target that the measure relied on. In short, our decisions about what to measure end up changing the system in ways that undermine those decisions.
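Here’s a toy simulation of that feedback (entirely my own construction): at first, Likes are caused by quality and track it well; once creators start optimizing Likes directly, the proxy decouples from the thing it was supposed to measure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
quality = rng.normal(size=n)

# Before gaming: likes are caused by quality (plus noise) -> a strong proxy.
likes_before = quality + rng.normal(scale=0.5, size=n)
print(round(np.corrcoef(quality, likes_before)[0, 1], 2))  # ~0.9

# After gaming: creators intervene, boosting likes with tricks that are
# independent of quality -> the proxy breaks down.
gaming = rng.normal(size=n)
likes_after = 0.2 * quality + 2.0 * gaming + rng.normal(scale=0.5, size=n)
print(round(np.corrcoef(quality, likes_after)[0, 1], 2))   # ~0.1
```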

Conclusion

I’ve outlined a few reasons why thinking about causes is critical for data scientists, not just for practical reasons but for specifically ethical ones as well. As data professionals, we sit at a critical causal nexus in society, and it’s vital to remember that. The work we do matters more than ever, which is one of the reasons we’re passionate about it. This power comes with the responsibility to think carefully about what affects our decisions, and about how our decisions affect the rest of the world. A solid understanding of causality, not just of statistics, is an essential part of the ethical data professional’s arsenal.

A philosopher turned data scientist. Interested in causality, NLP, and the promise and dangers of AI.
