The Ethics of Analytics: A New Perspective?

By Frank Buytendijk

December 6, 2012

“Guns don’t shoot people, people shoot people”. It is a popular saying, and a hard one to argue with. A gun has no consciousness and no free will; it cannot be held responsible for its actions, because it only does what it is designed to do. At the same time, I find this reasoning too easy. A gun may not be a moral agent, but it certainly has a moral imprint. If a tool is designed to do harm, or if there are known malfunctions that can cause harm, the designers and engineers bear a certain moral responsibility[1].

It doesn’t take a lot of imagination to translate this principle to business analysts: they are responsible for the business questions they ask, the conclusions they derive from the data, and the implementation of those analytic results. Strangely enough, this insight is not obvious to everyone; many business analysts treat their work as amoral. All they do, in their view, is discover “truth” and “opportunity”, and it is up to the managers to decide what to do with it. Again, this is too easy. Analytics, like guns, also have a moral imprint.

In early 2012, the New York Times published a widely quoted story highlighting the retailer Target’s ability to predict pregnancy from shopping behavior[2]. In Minneapolis, a man walked into a Target store upset that his 16-year-old daughter had received discount coupons for pregnancy-related articles. It turned out the system was right: his daughter was indeed pregnant. What a way to find out you’re going to be a grandfather!

American Express has a handle on the topic as well. A few years ago, they called my house and my wife answered, but they insisted on speaking with me alone. When I called AMEX back from abroad, it turned out the fraud department had been trying to reach me. Their analytics had flagged line items for female luxury-brand goods on my corporate credit card, and they wanted to verify those purchases with me. It did indeed turn out to be fraud, but I appreciated their discreet attitude nevertheless.

The question of responsibility is broader than privacy concerns alone. Let’s consider a more fundamental (and not entirely fictional) example.

A retailer has an automatic data mining solution in place. It crawls through the data warehouse and other data sources, and every morning it sends an email with its most interesting findings. One morning, the email highlights a correlation between the age of customers and shoplifting: in short, “old people steal”. I often use this example in workshops, and most people feel there is no issue with this email, as long as it is statistically valid. What matters, they argue, is what you do with the knowledge; if you feel what the computer suggested is not appropriate, you can set it aside. The problem is, you can’t. You cannot really undo knowledge, and not reacting is also a reaction. Moreover, suppose a newspaper finds out about the email and calls for a statement. A reaction of “Perhaps, but we didn’t do anything with the information” is just not going to fly. But most important of all, it is not relevant whether you are comfortable with the information or not. What is relevant is that modern technology answers questions you didn’t even ask. There is no logical place in this automated process to ask yourself whether certain questions are appropriate. You can specify which questions you are not allowed to ask because of legal restrictions, but how do you tell the system the things you don’t know that you don’t want to know? This issue is of paramount philosophical importance, and I believe organizations should carefully consider the consequences.

Another pressing issue is identity theft. Commercial and public-sector organizations spend many millions integrating all kinds of data sources, for instance for profiling. Public-sector organizations may be interested in crime prevention and increased security, while commercial enterprises seek business opportunity. But what happens if someone steals your identity? There are already cases in which identity theft has proven impossible to correct: a myriad of interfaces allows the stolen identity to keep popping up in unexpected places, routinely overwriting corrections with the old, wrong information. The consequences of identity fraud go way beyond standard business embarrassment: people’s lives can literally be ruined.

You see, from the moment you start thinking about it, analytics are full of ethical issues: not just privacy issues, and not just traditional risk management either. As with most issues of a philosophical nature, there are no definitive answers; it is hard to determine in advance which knowledge will bring harm. Some things simply blow up in your face: understandable afterwards, but unanticipated before. It may not be possible, or practical, to anticipate every potential consequence of an analytical exercise, and I am not suggesting you try. I do believe it is the responsibility of business analysts to be aware that there are ethical issues, and to actively debate them. Then, if things do go wrong, you can show you followed due process, and you can look yourself in the face: you tried to do the right thing, and you considered what could reasonably go wrong.

Ethical issues need to be discussed within organizations, but I think they require a public debate, too, because they touch all stakeholders. Customers and citizens are affected by the power of the technology that corporations, large and small, use. Brands are responsible not only for themselves, but also for their suppliers up and down the value chain. Shareholder capital is at risk. Regulators may even want a say in what organizations can and cannot do with the data they have access to. Public debate should prevent analytics-gone-wild; without it, things will go wrong. CEOs of highly visible companies will have to step down or take strong corrective action, and public officials will have to answer parliamentary questions or even face charges of negligence. Without a debate, the only question is when this will happen. As grim as this prediction may be, if that is what it takes to get a public debate, I’ll settle for it.

One more thing. Immanuel Kant felt that in decision-making, people should ask themselves whether they would want everyone in the same situation to make that same decision, a principle he called the “categorical imperative”. Kant would not buy the argument of risk management and following due process; he would have stressed the “look yourself in the face” argument. In analytics and decision-making, you shouldn’t do (or refrain from doing) things because you are afraid someone will find out and it will damage you. Instead, you should do or not do things because you think it is right. My ambition for the world of analytics is not so high, I am afraid. If “risk management” is the trigger for the debate, I’ll settle for that, too.

Frank Buytendijk has been a thought leader in the business intelligence and analytics space for the last twenty-or-so years. Frank is the author of countless articles and five books, including his latest book, Socrates Reloaded, which examines the relationship between philosophy and IT through a series of essays. It has received great reviews (“helps you focus and adjust your priorities in new ways you hadn’t thought of”, “this brilliant book is Frank at his best”, “easy read and entertaining”, and “well researched”) and is available on the Radiant eBookshelf. You can connect with the author on Twitter at @FrankBuytendijk.

[1] There are many shades of gray. Every object can be used to do harm; you can hurt people with a baseball bat or even with an alarm clock. The key element in determining responsibility is intent: is the product designed to do harm, or are there known design mistakes that may lead to harm?


1 Comment
  1. Based on where you started this article, I expected it to be an emotional plea full of rhetoric and hyperbole. However, I found myself agreeing with your very logical perspective on the risks of big analytics. Peter Parker’s uncle said it succinctly, “With great power comes great responsibility.”

    Unfortunately, I’m afraid that this warning will fall on deaf ears (at least here in America) and the only recourse seems to be “great regulation” and “great retribution.” You make a great case, but have we destroyed any moral foundation for responsible action through Existentialism and Situational Ethics? Kant’s imperative for duty and universality of a moral code flies in the face of the current philosophical bent of society as a whole.

    I do still have hope, but it doesn’t come from more regulation. Only when we recognize the fundamental flaw of Existentialism (our senses lie and our emotions are far too fickle) will we return to seeking universal truth over “personal truth.” At least, that’s “my truth.” ;-)
