While we rejoice in the breakthroughs of AI technologies and celebrate easy access to new AI toys ranging from text creation to video making, we are confronted more than ever with the ethical and moral problems that the widespread use of AI raises.

It is widely known that in order to create an image of "an astronaut riding a horse" (or basically any other silly idea we have in mind), the AI model called Stable Diffusion was trained on billions of images scraped from the Internet, including copyright-protected ones. While in the field of image-generating AIs the main ethical debate revolves around the fair use of original artwork, other AI fields wrestle with their own ethical problems.

In general, we can distinguish between the moral and ethical problems related to the usage of AIs and the moral and ethical issues surrounding the decisions AIs make. I had been on a quest to sort out these AI ethics topics for myself for a while when I stumbled upon an excellent article, "Tragic Choices and the Virtue of Techno-Responsibility Gaps" by John Danaher of the University of Galway, which explores the extent to which we can trust decision-making machines (a.k.a. AIs) to make hard decisions instead of us.

3 Sins of AI and the Techno-Responsibility Gap

Danaher starts off by listing three groups of ethical concerns raised by autonomous machines (AI and robotics): fairness and bias; transparency and explainability; and accountability and responsibility. His focus in this article is on the last of these: the responsibility of autonomous machines.

The more we use and rely on autonomous machines, the more we have to deal with "responsibility gaps". What are we talking about here? Danaher explains the concept of the techno-responsibility gap as follows:


Techno-Responsibility Gap Concern: As machines grow in their autonomous power (i.e. their ability to do things independently of human control or direction), they are likely to be causally responsible for positive and negative outcomes in the world. However, due to their properties, these machines cannot, or will not, be morally or legally responsible for these outcomes. This gives rise to a potential responsibility gap: where once it may have been possible to attribute these outcomes to a responsible agent, it no longer will be.


To illustrate the problem for those who happily type yet another prompt into the machine to get the next colorful image: the techno-responsibility gap means there is no one to blame if the machine churns out an image that perhaps too closely resembles some copyrighted work, because that may not have been the prompter's intention at all, nor the intention of the AI's creators.

Of course, people are not happy with the disappearance of responsibility from society and are trying to address it as best they can. Recently, we all learned about the possible lawsuit over GitHub Copilot, which produced code eerily similar to copyrighted code. Danaher, on the other hand, believes that we should accept such responsibility gaps, as we may benefit from them in certain cases. He argues that responsibility is not always a good thing and that techno-responsibility gaps should not always be plugged or dissolved.

When does responsibility do more harm than good?

According to Danaher, we need to understand that some moral problems presented to humans cannot be solved without significant cost to our psyche, leaving a moral "taint or remainder". However good our intentions, we may end up in a moral conflict in which there are no winners.

Danaher defines a tragic choice as

a moral conflict in which two or more moral obligations or values compete with one another in such a way that they cannot be resolved or reconciled through decision-making. One of the obligations or values must be traded off against, or sacrificed in favour of, the other. This leads to a moral ‘taint’ or stain on our decision-making and makes moral decision-making a fraught and difficult business.


Danaher also claims that these moral conflicts are not rare at all. On the contrary, we stumble upon them quite often, and since they are so common, they impose a significant cost on our society.

How do we deal with tragic choices?

Nobody wants to live with the constant psychological pain that "tragic choices" cause us, so we engage in different strategies to cope with them. Danaher names three: illusionism, responsibilisation, and delegation.

Illusionism as a strategy allows us to convince ourselves that our choices are not tragic or that our decisions leave no moral remainder.

To illustrate the concept of illusionism, I will give a personal example.

I have twins, and when they were toddlers, one of the typical tragic choices I had to make quite often was deciding who got the toy they both wanted at the same time. I think every parent with more than one kid knows what I'm talking about.

Following the illusionism strategy, I convince myself that it is of no consequence if I simply give the toy to one of the kids, since in the "grand scheme of things" it does not matter, so I should not be bothered too much if one kid starts crying because of my arbitrary choice.

The responsibilisation strategy means that while the choice is tragic, I will not postpone the decision but take full responsibility and, of course, its full cost. It is the heaviest strategy for the decision-maker. In the case of the twins above, I will give the toy to one of the kids, or the fight will never end. I know that the other one will start crying and blaming me for the unjust decision, but I'm willing to make the call and live with the blame.

The final option is to delegate the decision-making and relieve myself of the consequences. In the case of the twins fighting over the same toy, I may suggest they turn to mommy and ask who gets the toy, or simply agree to flip a coin and see whom Lady Luck prefers. As Danaher points out, the cost of the tragic choice does not vanish but is delegated to some other agent. While this spares us some "emotional damage", the downside is that our moral "muscles" may atrophy. Furthermore, if delegating becomes that simple, we may leave all decisions to others, not just the painful ones.


Will decision-making machines save us from moral guilt?

Now you probably see where this argument is going. Yes, there are machines capable of operating on a wider range of data points than any human single-handedly can. They are capable of recognizing patterns that we fail to see. So, to save ourselves from tragic choices, why not delegate the decision-making to such machines? Hopefully, they do not feel the same distress about proposing solutions as we humans do.

The only problem is that, in order to do so, we need to admit that responsibility gaps are acceptable in society, as such decision-making machines cannot be held responsible for the decisions they make. Nor should the responsibility be extended to the creators of such machines.

Sounds great? Fewer problems with "tragic choices" and more fun in life.

Not so fast: Danaher offers a few considerations before we go all gung-ho about letting AIs make life-changing decisions.

First, not all decisions are "tragic choices" to begin with; many have a solution that involves a clear moral choice without negative costs.

Secondly, we should consider whether the decision-making system is actually fit for the task.

Third, while this approach reduces the costs, it does not make them disappear entirely (remember the atrophy of our moral "muscles").

And finally, we cannot take it for granted that future decision-making machines won't themselves suffer during these decision-making processes.


Also, there are notable criticisms of leaving machines to make the hard choices for us.

  • The first line of criticism argues that we do not need AIs to rid ourselves of guilt; a simple coin flip will suffice.
  • The second claims that you won't dodge the bullet anyway, because by deciding to delegate, you take responsibility for whatever comes of it.
  • Thirdly, this kind of delegation leads to "agency-laundering", or liability evasion.
  • And finally, delegating decision-making to machines does not reduce the burden at all, but makes it even more visible and thus harder to bear.


All in all, Danaher seems to reach the cautious conclusion that while each of these criticisms can be addressed one way or another, only a very narrow path remains for actually using machines to relieve us of the burden of "tragic choices."


***

When I think back to the early days of the COVID-19 outbreak and try to put myself in the shoes of the doctors who had to decide which patient to leave without an oxygen machine, I can relate to the notion that perhaps it would have eased the psychological trauma of each such decision if an AI had made the choice instead of me.

However, I'm terrified at the thought of what kind of Pandora's box that would open.

Are these questions relevant today, when we use our new generative AI tools without a second thought? I believe they are: we are already leaving a whole lot of decisions to machines because the results they throw out are "good enough". Surely, we have not yet reached "tragic choices" when playing with Stable Diffusion or using GPT-3, but perhaps that is just the beginning of our acceptance of responsibility gaps.

***

Read the full article by John Danaher here.