AI Challenges a Specific Blind Spot
Due to the rapid pace of AI development, its cognitive, psychological, and social impacts are quickly becoming visible. Precursor technologies partly induced such impacts as well, but they were never as visible as they are now with GenAI. With GenAI, something has changed in people’s reflections on technology – not everywhere and not fundamentally, but the human-technology relation seems to have suffered a breach of trust.
For a long time, dealing with technology has meant dealing with a blind spot regarding some of its sociotechnical consequences. I am not referring to large-scale tech projects that were controversial from the very beginning because of their obvious risks, such as nuclear power plants for energy supply or nuclear weapons for armament. I am talking about technologies with a more “neutral” appearance, which were invented, refined, and gradually brought to market until they were part of almost every household in the Global North: the internet, smartphones, and now LLMs as used for chatbots.
With the development and distribution of these technologies, so-called hard impacts have been at the forefront of debate. These are the obvious, surface-level consequences of innovations. Hard impacts are quantifiable, mostly uncontroversial, and clearly attributable results of technological innovation and dissemination. Examples include effects on health (improvement vs. harm), employment (more vs. fewer jobs), and profit (gain vs. loss). Such consequences can be expressed numerically and are ideally suited for policy- and business-oriented risk analyses and, ultimately, for decision-making. It is usually this dimension of hard impacts that captures media attention and drives societal discussion about the consequences of technology, since it rests on shared moral values that are confirmed, disrupted, or challenged by technological outcomes. But experts in the professional discipline of technology assessment have long emphasized that determining the consequences of technological development is more complex than it may seem (Grunwald 2024). There are at least three key reasons for this.
The Hard-to-Estimate Consequences of Technology Development
- The persistent risk of dual-use, where technologies can have both beneficial and harmful applications.
- The necessarily selective nature of considering only some possible consequences.
- A time lag before the full visibility of impacts emerges, making early assessment difficult.
I will briefly describe these three problem areas and then explain why, from a critical perspective, GenAI is a blessing in disguise.
- Dual-use refers to the phenomenon that technologies can be used for purposes other than those intended by developers, vendors, and regulators (Hähnel 2024). These are not simply byproducts or side effects, such as additional costs or waste. Rather, dual-use describes unintended consequences that go beyond the plans of the developers, vendors, and regulators involved in the innovation process. These consequences become apparent when certain users deliberately instrumentalize a technology for their own purposes. In the context of the internet, this includes cybercrime and disinformation. The internet enables these activities at scale, regardless of what developers and businesses intended, and despite how regulation evolves in its attempts to prevent illegal activities.
- Many consequences cannot be precisely determined or proven using conventional scientific methods. The effects of technologies can be assessed not only in terms of the hard impacts described above but also in terms of so-called soft impacts (Swierstra 2013). Soft impacts are not quantifiable and can only be examined along qualitative criteria. The moral evaluation of possible soft impacts goes beyond a simple distinction between “right/good” and “wrong/bad”. Take smartphones as an example: their effects cannot be comprehensively assessed on a purely hard-impact level. Cellphone masts are harmless to health, and smartphone use generally does not endanger human life; at the very least, the potential benefits outweigh the risks. However, this does not account for the disruptive social and psychological consequences partly caused by the widespread use of smartphones. Cultural studies have shown how smartphones have changed not only socioeconomics and technopolitics but also social behavior and technomoralities. Smartphones have contributed to changes in communication behavior (speed, availability, simultaneity) whose moral evaluation is not straightforward and depends heavily on the individual user and the context of use. Attitudes toward privacy have certainly shifted with widespread smartphone adoption, as have attitudes toward the visibility and acceptance of this technological artifact, expectations regarding device functionality, and so on. This demonstrates how any analysis focusing solely on hard impacts remains limited and selective, and, at the same time, how difficult it is to include soft impacts. At least for pragmatic reasons, potential soft impacts are rarely considered in decision-making processes, as they “complicate” technological development and do not facilitate rapid decisions in a highly competitive and fast-paced world.
- Finally, we face the fundamental Collingridge dilemma (Collingridge 1980), which states that the likely consequences of innovations can only be reliably determined once sufficient data is available. Such data, however, becomes comprehensively available only after the technologies to be evaluated have been widely adopted. Yet once a technology is developed and widely distributed, it becomes difficult to “take back” if the collected data proves its impact harmful. The Collingridge dilemma applies with particular force to soft impacts, where data collection would need to be far more extensive and where it remains debatable which types of data are suitable at all. Even for impacts that are actually measurable (hard impacts), difficulties arise: for internet-enabled smartphones and social media platforms, the long-term psychological consequences could not be determined at the hard-impact level during their initial distribution. Only now, with almost all young people in the Global North having access to screens and the internet, are psychological studies available on attention span, concentration, and addiction in relation to their media environment.
AI Is the Strongest Reason for Critical Reflection on the Role of Technology
So where exactly is the blessing? Doesn’t GenAI exacerbate the dual-use problem and complicate long-term technology assessment? Yes, it does. But because it is developing so rapidly, it also makes actual and potential social consequences visible faster – almost as if we were watching them in time lapse! This is probably why never before have so many critical voices gathered around a technology that appears “neutral” at first glance, as GenAI and especially chatbots do. The developments are so drastic, and the news and scandals unfold so quickly, that the dual-use problem becomes obvious (1). On the hard-impact level, for example, we can observe that GenAI is destroying more jobs than it creates; and on the soft-impact level, we can legitimately assume that, for instance, independent thinking and writing are becoming less common in schools and universities rather than more (2). The long-term consequences can be anticipated even without comprehensive studies, based on previous experience with less harmful technologies (such as calculators and computers), including deskilling and dependency in fundamental mindful activities (3). I therefore conclude that, paradoxically, AI is currently helping us to critically examine the role of technology for humanity. What has been neglected in reflection on precursor technologies over the last few decades is now finally regarded as a critical issue.
References
- Collingridge, David (1980): The Social Control of Technology. London: Pinter.
- Grunwald, Armin (2024): Handbook of Technology Assessment. Cheltenham: Edward Elgar.
- Hähnel, Martin (2024): Conceptualizing dual use: A multidimensional approach. In: Research Ethics 21 (2), 205–227.
- Swierstra, Tsjalling (2013): Nanotechnology and Technomoral Change. In: Etica & Politica 15 (1), 200–219.
Author Bio
Dr. Sercan Sever is a postdoctoral researcher at the University of Tübingen and conducts research as a fellow at the TEIFUN College, which focuses on “Artificial Intelligence and Education.”
