Science and technology have gone hand in hand for as long as humanity has existed, and not just for humans but for any intelligent creature with two main characteristics: the competence to observe and learn, and the ability to create artifacts that can be used for their designed purposes. The accumulated knowledge defines the epistemic realm of the subject matter, enabling knowledge transfer not only from one person to another but also across generations. The industrial revolution drew a clear line through history with the steam engine, electricity, mass production, computers, automation, and, more recently, Industry 4.0, which encompasses digital data and transformation, IoT, cyberspace, virtual environments, and, of course, artificial intelligence.
The history of computation aligns with the basic concept of implementing a set of instructions to achieve an expected goal (i.e., an algorithm). However, the recipe for a desirable action was no longer directed to a human making a meal but to electrons [1], thanks to the mediation of mathematics (the binary number system, logic theory, calculus, and more) and hardware innovation (silicon semiconductors, switching circuits, logic gates, CPUs, etc.). This is the classical case of theoretical knowledge being materialized into technology.
Probably the best machine in the world
Returning to artifacts, their utilization has proven flexible enough to cross the boundaries of their designated goals. A knife, for instance, can cut materials but can also cause injury. There is no morality inherent in the artifact itself; morality is defined by our norms and beliefs, both as individuals and as members of society. The responsibility for usage lies solely with the user, provided the artifact was well-designed to fulfill its original purpose. Otherwise, responsibility lies with the designer/manufacturer.
Can we draw clear boundaries between the user and the manufacturer regarding responsibility? I would argue yes, for practical purposes. As long as the intended purpose was well-defined, deviation from that purpose falls to the designer/manufacturer. If the actual use differs from the designated purpose, responsibility lies with the user. This framework applies to any technological artifact based on a set of instructions for building and using the tool, what we call an algorithm.
So what makes artificial intelligence different from any other artifact? Can we simply impose the same assumptions on AI systems?
To answer this question, we must first define AI systems and then examine them against the criteria mentioned above. Indeed, AI systems comprise algorithms, which are sets of actions designed to achieve a specific result. But what else qualifies a system as intelligent rather than merely a collection of computational capabilities? Here, we can return to Alan Turing's experiment, the Turing Test [2], in which a human cannot, through conversation, distinguish whether they are interacting with a machine or another human.
This criterion, by itself, is not relevant to the responsibility question, since a system that convinces human observers it is human may still be nothing more than a highly complex computational system. As long as we can follow the algorithm, we can understand its purpose, and the responsibility question can be resolved accordingly.
However, this is not the case with modern AI systems, which are not deterministic at all. Traditional computational systems define their algorithms in procedural languages (such as Assembly, C, Java, and their descendants) or declarative ones (such as Prolog, Lisp, and their descendants); systems such as the OSCAR project [3] and ELIZA [4] used pre-defined concepts to mimic human behavior.
The new AI systems use probabilistic algorithms to identify intention and infer purpose, even without explicitly knowing the purpose. The first stage was classification models in machine learning - a milestone in the artificial intelligence field.
Once classification was applied to natural language, the "magic" began to happen. The systems sound very "human" and identify with high accuracy what we are looking for. Behind the scenes, neural network mechanisms encode and decode information. Unlike the deterministic systems mentioned above, this probabilistic algorithm, combined with the distributed representations of neural networks, remains opaque to our observation and understanding of how it works. Moreover, the system's output, though it may look and feel intuitively correct, is not necessarily accurate.
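To make the contrast with the deterministic systems above concrete, here is a minimal sketch of a probabilistic intent classifier. It is not taken from the original text: the toy phrases, labels, and the choice of scikit-learn are illustrative assumptions. The point is that the output is a distribution over labels rather than a guaranteed answer.

```python
# Minimal sketch of a probabilistic text classifier (assumes scikit-learn).
# The training phrases and intent labels below are hypothetical toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "book me a flight to Paris",      # intent: travel
    "reserve a table for two",        # intent: dining
    "find a cheap flight tomorrow",   # intent: travel
    "what restaurants are open now",  # intent: dining
]
labels = ["travel", "dining", "travel", "dining"]

# Turn text into word-count features, then fit a probabilistic classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# For a new query, the model returns probabilities, not a certain verdict,
# and the learned weights do not explain themselves in human terms.
query = vectorizer.transform(["I want to fly somewhere warm"])
for label, p in zip(model.classes_, model.predict_proba(query)[0]):
    print(f"{label}: {p:.2f}")
```

Even in this tiny example, the answer is a probability that may look convincing while still being wrong, which is exactly the gap between fluent output and accuracy described above.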
The disclaimer remediation
Thus, we encounter the first problematic issue: transparency. If we do not understand the system, how can we correctly divide responsibility between users and designers, especially when the design appears never to end? The system operates autonomously, classifying and inferring according to probabilistic methods. Whom can we blame when the coin lands on heads or tails?
The second issue concerns the system's fairness [5]. Creating an AI model requires training it on massive amounts of data, which inevitably come with bias.
Bias can be created and observed at many stages throughout the creation and utilization of the model [6]. Can we rely on systems that contain biases? Even when there is a responsibility to align data to avoid bias, whose responsibility is it to correct the drifts the model might generate? Another fundamental question arises: what do we consider a good system, one that reflects the biases present in genuine training data (acknowledging that any collected data is biased, given the subjectivity of the collector), or one that artificially corrects information for the sake of equality and equity [7]?
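As a hedged, self-contained illustration of observing bias (the groups, records, and metric below are assumptions, not part of the original argument), one common practice is to compare a model's positive-outcome rates across groups:

```python
# Minimal sketch: compare approval rates across two hypothetical groups
# in a model's predictions. Real audits use richer fairness metrics.
predictions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

rate_a = approval_rate(predictions, "A")
rate_b = approval_rate(predictions, "B")

# A large gap is one signal that the data or the model encodes a bias
# that someone - user, designer, or regulator - must take responsibility for.
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Disparity: {abs(rate_a - rate_b):.2f}")
```

Whether such a gap should be left to reflect the data or corrected toward parity is precisely the equality-versus-equity question raised above.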
The third aspect of AI systems is the responsibility gap.
In 2004, philosopher of technology Andreas Matthias introduced what he called the problem of a "responsibility gap" with "learning automata." In essence, intelligent systems equipped with the ability to learn from interactions with other agents and the environment make human control over and prediction of their behavior very difficult, if not impossible. However, human responsibility requires knowledge and control. Therefore, humanity faces a dilemma: either we continue with the design and use of learning systems, thereby relinquishing the possibility of human persons being responsible for their behavior, or we preserve human responsibility and thereby abandon the introduction of learning systems into society.
The fourth aspect concerns the societal context - policy and legislation. As explained above, the opacity of AI systems based on Large Language Models creates inevitable misalignment with any policy and guidance implemented through fine-tuning and reinforcement learning. If AI systems lack coherence, what can they reveal about ethical and moral questions that are perceived differently across cultures and societies? The expressions of universalism and pluralism [8] in these systems are not subject to design criteria, as these systems are already deployed in public spaces. It appears we are left to embrace what seems to be the right tool, accompanied by disclaimers such as "ChatGPT can make mistakes. Check important info."
A glimpse into the future
However, there is hope on the horizon. Given the overwhelming implications of these AI tools - encompassing not just text but also speech, images, and videos (i.e., multimodality) - an avalanche of regulations is targeting AI systems across five main pillars of discussion [9]: responsibility and accountability, justice and fairness, transparency, privacy, and non-maleficence. Regulatory bodies have recently issued guidelines that establish expectations for these systems and their users: NIST, the OECD, and UNESCO, among many other bodies around the world, continue to publish and update guidance for AI systems, and the EU has enacted its AI Act. The main topics under regulation include system classification, stakeholder accountability, policy enforcement, and suspension.
This represents a formidable challenge, as private companies (AI giants) are releasing AI systems rapidly while lacking clarity on limitations and caveats.
I will conclude with a conceptual perspective on possession. Knowing that all these models are trained on the entirety of human knowledge, would it be reasonable to claim that the outcomes belong to society as well (with fair deduction for the resources invested in creating these models)? The unbalanced distribution of digital resources may indicate an emerging political crisis, taking the shape of "power to the people," which advocates for a better and fairer society.
References
1. Bardeen, J., & Brattain, W. H. (1948). The transistor, a semi-conductor triode. Physical Review, 74, 230. https://doi.org/10.1103/PhysRev.74.230
2. Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
3. Pollock, J. (1998). Procedural epistemology. In T. W. Bynum & J. H. Moor (Eds.), The Digital Phoenix: How Computers Are Changing Philosophy (p. 17). Cambridge: Blackwell.
4. Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.
5. Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
6. Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (pp. 1–9).
7. Capraro, V., Lentsch, A., Acemoglu, D., Akgün, S., Akhmedova, A., Bilancini, E., Bonnefon, J., Brañas-Garza, P., Butera, L., Douglas, K.M., Everett, J.A., Gigerenzer, G., Greenhow, C., Hashimoto, D.A., Holt‐Lunstad, J., Jetten, J., Johnson, S., Longoni, C., Lunn, P., Natale, S., Rahwan, I., Selwyn, N., Singh, V., Suri, S., Sutcliffe, J., Tomlinson, J., Linden, S.V., Lange, P.A., Wall, F., Bavel, J.J., & Viale, R. (2023). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. ArXiv, abs/2401.05377.
8. Rudschies, C., et al. (2021). Value pluralism in the AI ethics debate – different actors, different priorities. The International Review of Information Ethics, 29. https://doi.org/10.29173/irie419
9. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Author Bio
Yuval Temam is a PhD candidate at Radboud University. His research centers on the ethical validation of AI entities, with particular emphasis on the philosophical foundations of ethics, epistemology, and ontology. It explores the societal implications of these philosophical domains, alongside the challenges posed by the evolving landscape of AI technology in the pursuit of ethical validation. Through this interdisciplinary approach, the study aims to address the complex intersection of AI and ethics in contemporary society.