In the ever-evolving landscape of education and technology, the spotlight has recently been seized by innovations in Artificial Intelligence (AI). One manifestation of AI in particular, transformer-based models like GPT-3, has been making waves not just within academic circles but across governments, universities, and industries. What drives this hype, and is it all that it seems? Let’s delve into the phenomenon, understand the power of expectations, and explore the realm of visionary communication.
Technology visions have been a hot topic in Technology Assessment (TA) and the social sciences since the early-2000s hype waves around “new and emerging technologies” (NEST), such as nanotechnology, synthetic biology, human enhancement, and artificial intelligence (Beckert & Suckert, 2021; Schneider et al., 2022). While some visions promised to solve all of humankind’s problems, others painted rather apocalyptic pictures, e.g., of a “Grey Goo” taking over and consuming all of Earth’s biomass. Influenced by the anti-GMO and anti-nuclear movements of the time, technology developers feared that the “real” potential and value of their work would not be fully exploited once these science-fiction stories became popular; the resulting “nanophobia-phobia” (Rip, 2006) drove speculative ethics and communication efforts into hyper-mode.
Key to understanding the power of expectations is their motivational function when actors consider the future as a consequence of their own actions and break out of their routines (Emirbayer & Mische, 1998). Undoubtedly, socio-economic status and the influence of family, friends, and peers shape educational aspirations and achievements (Bourdieu, 2002; Goyette & Xie, 1999; Sewell et al., 1969). Nevertheless, the realm of imagination provides the space where people remix their experiences, life paths, and understanding of their social and technical environment into new narratives that bear meaning and guide the present actions of individuals and organizations into the future. In technology governance, these expectations play a crucial role in mobilizing resources and mediating across disciplinary boundaries or different levels of organization (Borup et al., 2006, p. 286): “What starts as an option can be labeled a technical promise, and may subsequently function as a requirement to be achieved, and a necessity for technologists to work on, and for others to support” (van Lente & Rip, 1998, p. 216).
The performative power of expectations and the deliberate intention to influence how others imagine the future is probably most visible in the stock market’s response to company earnings, climate summits, and highly orchestrated tech demonstrations, such as the Cybertruck or iPhone releases (Beckert, 2016; Sharma & Grant, 2011). At these “sites of hyperprojectivity” (Mische, 2014), stakeholders set up their stage and use data, simulations, prototypes, and even their costumes as props to compete for attention and make believe a convincing story about how the future might unfold if this or that action is taken or missed (Roßmann, 2021).
Another prime example is the breakthrough transformer architecture that laid the foundation for language models like GPT-3. While the architecture was introduced back in 2017, the real surge in AI excitement only came with the release of OpenAI’s ChatGPT in November 2022, marking the breakthrough of what I would call “performative AI”: where machine learning previously had limited applicability beyond broadcasting human-versus-AI chess, Go, or Jeopardy! tournaments, suddenly everyone, not just experts, could engage with these apps and get a hands-on glimpse of the AI future. I want to emphasize the performative use of machine learning applications as props and playgrounds to hype and “accelerate” particular AI stories. In the case of ChatGPT, it is the clever simulation of human-like interaction – one word at a time – paired with stories of affection for AI and chilling doomsday scenarios that draws attention and fuels the “imaginative illusion” (Kind, 2016) of a general-purpose AI and an anthropomorphic AI. These narratives, however intriguing, convey a misconception about existing and available ML applications, put the wrong topics on our agendas, and tend to divert our focus from the socio-ecological toll of model training and the unchecked power held by platform owners like Google, Apple, Amazon, Facebook, Microsoft, and Nvidia. These tech titans unilaterally dictate the rules of the internal markets in their app stores, shaping the course of available machine learning applications for individuals, industry, research, and education. In this landscape, Nvidia’s CUDA API still holds a quasi-monopoly for machine learning, underscoring the company’s dominance.
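As a side note for the technically curious: the word-by-word delivery that makes chat interfaces feel like conversation partners is, at bottom, a staging decision. The minimal Python sketch below is entirely invented for illustration – a real model replaces the hand-written probability table with billions of learned parameters – but it streams one token at a time with a short pause, which is already enough to evoke the impression of a thinking interlocutor.

```python
import random
import sys
import time

# Toy next-token table standing in for a trained language model.
# All tokens and probabilities here are invented for illustration.
NEXT_TOKENS = {
    "<start>": [("The", 1.0)],
    "The": [("future", 0.5), ("machine", 0.5)],
    "future": [("looks", 1.0)],
    "machine": [("looks", 1.0)],
    "looks": [("promising.", 0.5), ("convincing.", 0.5)],
}


def sample_next(token: str) -> str:
    """Draw the next token from the toy probability table."""
    r, cumulative = random.random(), 0.0
    for word, p in NEXT_TOKENS[token]:
        cumulative += p
        if r <= cumulative:
            return word
    return NEXT_TOKENS[token][-1][0]  # guard against rounding error


def stream_reply() -> None:
    """Print one token at a time with a pause: the 'typing' illusion."""
    token = "<start>"
    while token in NEXT_TOKENS:
        token = sample_next(token)
        sys.stdout.write(token + " ")
        sys.stdout.flush()
        time.sleep(0.3)  # the pause, not the statistics, does the performing
    print()


if __name__ == "__main__":
    stream_reply()
```

The dramaturgy sits in the delivery loop rather than in the sampling itself: the same output, printed all at once, would read as a database lookup rather than a conversation.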
Indeed, fascinating stories disseminate widely and reveal hidden hopes and cultural values that are, therefore, worth reflecting on in discussions about the social use of technology (Grunwald, 2020). But paying too much attention to particularly charismatic “visioneers” and only their stories – also in speculative ethics – can inadvertently foster “tunnel visions” that ignore alternative scenarios or unintended and neglected consequences (Intemann, 2020; McCray, 2013; Nordmann, 2007, p. 200). At times, these visions obscure “uncomfortable knowledge”, impeding constructive and inclusive dialogue about doubts and concerns and even stalling scientific error correction (Rayner, 2012). It is therefore advisable not to ignore popular tech stories but to become aware of them and to re-mix or modulate them with an eye to their practical and political consequences, i.e., to responsively guide attention and allow learning in strategic scenario development or science communication and to deliberately motivate structural transformations (Schneider et al., 2022).
This is why I am currently studying the Dutch universities’ response to the ChatGPT hype in education. EdTech has seen exaggerated alarms and hypes before: Wikipedia and massive open online courses (MOOCs) were said to threaten education with plagiarism, unreliable sources, cheating students, and degenerating skills, or to render face-to-face universities obsolete. Now, with ChatGPT, my colleague Aodhán Kelly and I wonder which worries and hopes recur, emerge, and dominate the debate, and what actions universities take to respond and navigate this sea of uncertainty. With this research, we hope to foster exchange between researchers, technology developers, educators, and policymakers and pave the way for a more conscientious and democratic shaping of our future. The engineering ethics education community could take a leading role in being critical of these stories. Teachers could bring the topic into discussions with their students. Universities can unite to set standards for which tools to use and which to explicitly refuse. If you are further interested in this topic, feel free to contact me or check out our Peeriodical on hypes and overpromising and our special issue, which will be published in December 2023.