Perspective

Can You Quantify Morality?

Mark McDivitt | State Street Corporation

October 17, 2017

We can quantify just about anything: how the weather impacts investor behavior. How board diversity improves a company's overall performance. How supermarket product prices relate to inflation. And we're only getting more granular. For instance, Environmental, Social and Governance (ESG) data is no longer just an overall score generated by a room of quants. ESG reports can now define materiality, understand motives and biases, and account for subjectivity.

To me, ESG is about using our money to effect change. Companies that support wage equality, want to end child labor and unsafe working conditions, or strive to combat climate change (the earth is our biggest asset, after all) are all doing "good" work. How we invest our money determines whether many of those initiatives, funds and projects live or die. Our ability to catalog and quantify which companies or funds are doing this good work with our money, versus those that are neutral or actively undermining our personal moral compass, creates a level of transparency that only helps investors. For example, satellite imagery combined with data on rising temperatures tells us exactly which real estate or infrastructure investments are at risk from rising sea levels over the next 25 years. It's both morally and fiscally responsible to do something about that.

But even with all the advancements we've made in ESG data analysis, especially with unstructured data, our humanity remains hard to quantify, particularly when it comes to our morals. After all, we all have different ideas of what needs fixing. How exactly are you supposed to program for "good"?

For instance, to whom does a driverless car owe fiduciary responsibility? Should it protect the driver at all costs? The driver would certainly agree. But if someone else is injured, is the driver then liable? My guess is the driver would be quick to blame the car. So is the auto manufacturer on the hook instead? Or is it the company that programmed the driving system? As much as we may try to avoid it, our own human biases can be programmed in during the machine learning process, a phenomenon known as "algorithmic bias."1 Someone had to tell that car how to act in a given situation. Was every risk scenario accounted for?


That may seem like an extreme example, but morality is nothing if not complex, and undeniably personal. How young is too young to work full time? The answer might depend on the country you live in. What product scarcity issues are dominating the news cycle? A reporter in New York City is going to tell a very different story than someone in Syria. How are we to mathematically normalize these varying opinions? And who is to say that one culture’s norms are inherently better or more moral than another?

It's possible for algorithms to process vast amounts of data and sift out nuanced biases visible only at scale, but that level of transparency creates a more complex web of morality to unravel. The deeper we look into ESG factors, the more we realize just how many things impact a company. That is why we are trying to take transparency to the next level: the more we know, the more informed our decisions are and the better the choices we make. It might never be possible to fully normalize all the qualitative issues defining the space, but that just means we are the latest generation in a long line of philosophers seeking to quantify "good." I believe we are up for the challenge.

1. Leetaru, K. (2016, July 31). Can Algorithms Save Us From Bias? Retrieved October 03, 2017, from https://www.forbes.com/sites/kalevleetaru/2016/07/31/can-algorithms-save-us-from-bias/#3e42dd246a94





Mark McDivitt is the Head of ESG Solutions at State Street Global Exchange and a member of the Executive Corporate Responsibility Committee at State Street Corporation.