When Getting AI Ethics Right Becomes a Matter of Life and Death
On December 4, 2024, Brian Thompson was killed for being the CEO of UnitedHealthcare (UHC), a "symbolic takedown" for the company's perceived corruption, according to the killer's manifesto. Today, six days later, "UnitedHealth Group has lost $45 billion in value." David Goldman, CNN
Last year, UnitedHealth was sued in a class action for using an "algorithm to make health-care determinations, leading to the premature and bad-faith discontinuation of payment for healthcare services... This suit potentially involves thousands of individuals and billions of dollars of damages, according to a lawyer for the plaintiffs." Douglas B. Laney, Forbes
And “several years ago, government investigators found that UnitedHealth had used algorithms to identify mental-health-care providers who they believed were treating patients too often; these identified therapists would typically receive a call from a company “care advocate” who would question them and then cut off reimbursements. Though some states have ruled this practice illegal, it remains in play across the country.”
Jia Tolentino, “A Man Was Murdered in Cold Blood and You’re Laughing?” The New Yorker
This would lead some to consider UnitedHealth a bad actor, one using algorithms for unjust ends. Or is all this news just bad luck for UnitedHealth, a cost of doing business (even if an extreme case)? After all, UHC has a "Responsible Use of AI Guiding Principles" and an Advisory Board.
A different example: could something like this happen to, say, the pharmaceutical giant Merck? Merck, too, has a Code of Digital Ethics and a Digital Ethics Advisory Board (of which AI ethics is a subset). But it also has a Director of Digital Ethics and Bioethics, Jean-Enno Chaton, who believes:
Digital ethics, data ethics, and AI ethics for me is the responsible handling of algorithms and AI. So, how can we responsibly use and deploy this technology in the world so that key ethical principles are safeguarded, so that we create trust within our solutions, that maybe trust from partners, trust from even our own employees, trust from customers, that we offer solutions, that they trust us to buy solutions or work with us.
“Operationalizing AI Ethics”, Kevin Werbach’s The Road to Accountable AI podcast
That explanation and Code of Digital Ethics suggest Merck’s culture has embraced AI ethics and the guiding values underpinning them. Would Merck’s approach to Ethical AI have helped UHC steer clear of the pitfalls and tragedies it has suffered?
The principles included, explained, and supported in Merck's Code of Digital Ethics (Autonomy, Transparency, Non-Maleficence, Beneficence, and Justice) feel human and actionable. They read like a filter real people can apply to ideas and work without ambiguity or further explanation. For example, on how to spot injustice:
An injustice can occur when a person is denied his or her entitlement to a benefit without good reason or when a disproportionate burden is imposed on him or her. In the handling of data and/or algorithmic systems, for example, there is a risk that certain people will have unequal opportunity to benefit from digital solutions or will be structurally discriminated against in the collection/usage of data.
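That definition of injustice is concrete enough to be made operational. As a minimal, hypothetical sketch (the group labels, decision data, and the 80% rule-of-thumb threshold are illustrative assumptions, not anything drawn from either company's actual systems), one could screen an algorithm's claim decisions for the "unequal opportunity to benefit" the Code describes:

```python
# Hypothetical sketch: screen an algorithm's decisions for unequal benefit
# across groups, in the spirit of the "Justice" principle quoted above.
# All group labels and decision data below are invented for illustration.

from collections import defaultdict

def approval_rates(decisions):
    """Compute each group's approval rate from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group's approval rate to the highest's.
    A commonly cited rule of thumb flags ratios below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Illustrative claim decisions: (group, was the claim approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = approval_rates(decisions)   # group_a: 0.75, group_b: 0.25
ratio = disparate_impact(rates)     # 0.25 / 0.75 = 0.33
print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}"
      + ("  -> review for structural discrimination" if ratio < 0.8 else ""))
```

A check like this does not settle whether a disparity is justified; it only surfaces the pattern so that humans, applying the principle, can ask whether a benefit is being denied "without good reason."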
So the issue is not whether you can point to a Code of AI Ethics or organize an Advisory Board. It is the process and culture you champion, and the values you elevate into filters that management, employees, vendors, and algorithms are expected to live and grow by.