Dear reader,
Fall 2023. What an exciting time to be in engineering education!
We have all been preaching, for our entire careers, be they months or decades, that engineering students should be scaffolded to develop into more responsible engineers. We, in ethics, took our time on the comfortable sidelines, offering intelligent ethical commentary on NASA’s Challenger engineers, Volkswagen’s software developers, or our local ecosystem partners. And suddenly, we find ourselves in the middle of a global debate on generative AI and its widespread effects, which also touch our daily lives in engineering ethics education.
It seems that nobody in the entire world really knows what to do. In this newsletter, including this introduction, we present the state of the art: exploratory and cautious philosophical analyses of how ambiguity, global justice, virtue theories, integrity, narratives, plausible nonsense, and engineering responsibility can contribute to the debate. Some contributions go a step further and already reflect on the impact on educational aspects such as assessment.
But the debate is still wide open! How can we, as engineering ethics educators and researchers, and this time from within, co-generate morality and steer innovation for the better? How can we influence our universities and their ecosystems to be socially responsible? How will we redesign engineering ethics education so that the fast-developing technology of generative AI becomes a constructive tool rather than a threat, helping students learn important things such as thinking critically and designing constructive values into these very AI innovations? And increasingly, student involvement will not be the exotic choice of a few enthusiasts, but a necessary part of a process that draws on how students see their learning and use their technical skills to respond to an ever-changing situation.
Fall 2023. Exciting times indeed! We hope that the following contributions might stimulate you to contribute to the global challenge of generative AI in engineering ethics education.
Andrew Katz (Virginia Tech, USA), in Generative AI and the role of uncertainty in classroom assessment, argues that faculty members must reconsider what they are trying to achieve through assessment. Engineering ethics education teachers and researchers should anticipate a non-trivial impact, especially as the models transition toward multimodality.
Vlasta Sikimić (TU Eindhoven, The Netherlands) explains in AI and education for global justice that AI can help disadvantaged populations gain access to information and tools, but that corporate monopolies, as well as moral and epistemic values, have to be closely monitored. This can be done by revising existing standards, adopting human-in-the-loop approaches, and constantly keeping in mind the needs of the users – students and teachers.
Constantin Vică (University of Bucharest, Romania), in The choreography of virtues for living with AI, argues that even if codifiable principles of responsibility are possible, AI systems cannot be made accountable, and will not be anytime soon, no matter how efficient deep learning methods become. He sees hope in the idea that “the future lords of the AI rings should undergo moral education during their university studies.”
In Benchmarking AI tools and assessing integrity, Sasha Nikolic (University of Wollongong, Australia) and Scott Daniel (University of Technology Sydney, Australia) provide an extended list of guiding questions for further research, aimed at avoiding unintended consequences.
Maximilian Rossmann (Maastricht University, The Netherlands), in How to counter the tech titans’ futuristic narratives about AI?, explains how transformative Vision Assessment can be extended with digital methods. The engineering ethics education community could use this to take up its societal role and contribute key counter-narratives to performative AI developments.
Cécile Hardebolle (EPFL, Switzerland) and Vivek Ramachandran (UCL, UK), in Discussing “plausible nonsense” and “carbon footprint” in engineering ethics, call for a critical examination of the use of AI tools in engineering classrooms: such examination improves our understanding of how to use the tools well, and is at the same time an excellent topic for engineering ethics courses.
Mihály Héder (Budapest University of Technology and Economics, Hungary) gives a concrete example of how the above can be realised in The Human-Centered AI Masters programme. He describes how he and his colleagues designed a curriculum that incorporates the humanities, especially ethics and law, and the social sciences into the technical education of Artificial Intelligence.
Nael Barakat (University of Texas, USA) analyses the links between data collection, media presentation, and engineering responsibilities in his contribution Data Is Never Biased, and AI Is Never Unethical.
To conclude, we want to draw attention to this call for contributions exploring the ethical implications of AI Hype, including the overinflation and misrepresentation of AI capabilities and performance.
Thank you once again for thinking on these topics alongside us!
Gunter Bombaerts & Diana Martin