Diana Adela Martin (Ethics SIG co-chair)
Another text on Artificial Intelligence, another opinion about the quicksand of today’s technological world. The saturation we have reached on this vast subject, even though we have by no means exhausted the analysis of its issues and consequences (some already realised, some merely possible), indicates an inflation and a dissipation of moral and political thinking about it. When everyone participates, or wants to participate, in the same discussion, it can be a bad sign. Not only do we indirectly neglect other topics in the ethics and politics of technology; we also risk focusing on an illusion. A double illusion: that we have a firm understanding of what is happening, and that we can influence it for the better by settling on some moral directives. In fact, research progress in AI is fabulous, considering how long it took the field, from its birth, to produce truly useful results; AI systems and algorithms are now used almost everywhere; and media and social hype has been running high for several years.
However, something is amiss. By proceeding in ethics as in science, that is, by specialising, pursuing a hypothesis to the hilt, applying one method or another metronomically, and so on, we miss what is vital and solve only one problem at a time. As if we were day labourers, working wherever we are called, never seeing the entire house, the street, the neighbourhood, the world as such, but only certain fragments, limited areas that need fixing. But how can we succeed when we miss the whole context? And, after all, there is no single Artificial Intelligence, only a multitude of attempts, methods, systems, and results, hence of effects and consequences. What appears as a moral and social issue for Large Language Models might not be a problem for an AI system that generates hypotheses in chemistry or one that detects bank fraud. And vice versa. Ethicists are in a more tangled situation with AI than with the other technologies they assess. It is debatable whether it still makes sense to speak of AI technology as an external artifact or system, since the coupling of biological and social life to these artificial, imprecise but probabilistically correct “machines” seems irreversible.
During the CoMoRe project (2020–2022), alongside my colleagues Mihaela Constantinescu (the principal investigator), Cristina Voinea, and Radu Uszkai, we approached the field in a unified way and arrived at a negative conclusion: even if codifiable principles of responsibility are possible, AI systems cannot be (made) accountable, and will not be anytime soon, no matter how efficient deep learning methods become, how explainable the processes are, or how complex the neural networks grow. Responsibility remains a purely human affair, one that we humans find very hard to achieve, and a-responsibility is, rather, the condition of Artificial Intelligence per se. Devoid of genuine intentionality, AI systems in action are, unwittingly, amoral. Delegating many human decisions to them, and decoupling ourselves from decision-making processes, is actually irresponsible. Total autonomy for AI technologies is not desirable; what can help us avoid moral disasters and accidents, such as injustice (due to bias), harm, wrongdoing, and exploitation, is only the continuous choreography between human moral reflection and AI’s sorting and predictive capabilities in an ocean of data. If the human partner in this dance is to know what they are doing, to be a phronimos, to have practical wisdom, they must cultivate the virtues, not just the dianoetic ones but especially the moral ones.
AI creators can understand what is right to do and how evil can be avoided even without an explicit system of rules (sometimes even against such a system, when it is the product of diplomatic negotiation rather than moral deliberation), provided they cultivate the excellence of their own character. Having reached this point, we have neither solved nor dissolved the problem; we have merely shifted the focus to other questions. What should the working and living environment of AI system designers look like? How should we frame and integrate the use of these systems into everyday life, so that users understand their coupling to these “machines”? What human models help one become a phronimos? How can moral virtues be enhanced and cultivated, in balance with intellectual ones? How can one be truly morally autonomous as an engineer? The context and pace of AI development, conditioned by the economic race of transnational capitalism with its huge financial and political stakes, are not conducive to addressing these questions. Moreover, as can be seen online and in traditional media such as television, an evangelising and salvationist discourse about AI still predominates. However, there is hope: the future lords of the AI rings should undergo moral education during their university studies. Hence the importance and urgency of AI ethics and political philosophy not only in engineering studies but in every school, even from a young age. Finally, so as not to be too pessimistic, or even Luddites, we have argued that moral pedagogy and the exercise of virtues can also be achieved through the practice of discovery and innovation in robotics, especially in childhood.