How can Ethical, Legal, and Social Aspects (ELSA) approaches operationalise Responsible AI?
Artificial Intelligence (AI) is rapidly being embedded in nearly every part of our lives. While this widespread integration offers immense opportunities, it also raises a wide range of ethical, legal, and social aspects (ELSA) or implications (ELSI) that require serious attention. Recent theoretical and empirical research has shown that these aspects often manifest at multiple, interconnected levels (Wang & Blok, 2025; Ryan et al., 2024; Bolte & van Wynsberghe, 2024): from ethical and legal concerns at the level of newly designed artefacts, such as biased or discriminatory outcomes, infringements on privacy, and lack of transparency, to structural socio-political issues related to the power of big tech companies (Ryan et al., 2024) and ontological questions about the identification of human and artificial intelligence and its transferability (Blok, 2025; Ryan, 2025).
Responsible AI, often used synonymously with the ethics of AI, is an emerging interdisciplinary field that responds to these complex challenges by aligning AI systems with human values and the well-being of our ecosystems (Dignum, 2019; Stellinga et al., 2025). However, despite its growing influence, it remains unclear how to effectively operationalise Responsible AI in practice, especially when addressing multi-level ELSA challenges intertwined with power relations and structural dynamics.
ELSA (or ELSI) research has recently gained renewed attention as a promising approach to advancing Responsible AI (Wang et al., 2025). Originating in the early 1990s, ELSA research began in genomics and later expanded to other emerging scientific and technological domains (Fisher, 2005). Its core focus has long been on integrating ethical, legal, and social reflection into scientific research and innovation practices (Zwart & Nelis, 2009). Over three decades, ELSA research has built a rich legacy of theoretical and empirical work, providing valuable insights into interdisciplinary collaboration, stakeholder engagement, and anticipatory governance that can directly inform the effective implementation of Responsible AI. At the same time, ELSA research itself is evolving in response to AI’s unique challenges. This new wave of ELSA research moves beyond the traditional science-embedding model toward more participatory engagement with a broader spectrum of stakeholders, including academia, industry, government, and civil society. To address these complex ELSA challenges effectively, various ELSA Labs have been established to develop and test innovative, ELSA-specific methods, drawing on existing frameworks such as Responsible Research and Innovation (RRI), Value Sensitive Design (VSD), and the Social Lab (Ryan & Blok, 2023; Wang et al., 2025). In this way, ELSA research serves both as a practical resource to inform the implementation of Responsible AI and as a dynamic, evolving research frontier that continuously refines its approaches to meet AI-specific challenges.
This Article Collection aims to explore how ELSA research provides empirical, methodological, and conceptual insights to guide the implementation of Responsible AI, while also showing how the field itself is evolving to address AI-specific challenges. We invite theoretical, methodological, and empirical contributions addressing the following subtopics:
- ELSA’s Empirical Legacy — What lessons and insights can be drawn from decades of ELSA research in genomics, nanotechnology, and other emerging technologies? Which proven methods for ethical reflection, stakeholder engagement, and anticipatory governance can be adapted for Responsible AI?
- ELSA as a Living Research Frontier — What exactly constitutes the ELSA methodology? How can methodological innovations address multi-level ELSA challenges in AI? How does ELSA relate to, and differ from, other frameworks such as RRI and Value Sensitive Design (VSD)? What are the key challenges, opportunities, and emerging research gaps for ELSA research?
- ELSA Approaches Operationalising Responsible AI in Practice — Through case studies, what specific ethical, legal, and social aspects does AI raise? In particular domains, how can ELSA meaningfully engage Quadruple Helix stakeholders, navigate power dynamics, and inform real-world redesign or policymaking? What is the particular role of the ‘legal’ dimension in ELSA?
- Future Directions — What might the field’s trajectory look like in the coming years? How can ELSA research evolve to shape the responsible development of future AI systems?
Keywords: ELSA/ELSI, Responsible AI, Responsible Research and Innovation, AI Ethics, Quadruple Helix stakeholders
Manuscript Submissions:
Manuscript submission is open until 27th September 2026.
See the journal website for submission instructions.
Article collection guest advisors
Dr. Hao Wang, Wageningen University & Research (The Netherlands)
Hao2.wang@wur.nl
Dr. Mark Ryan, Wageningen University & Research (The Netherlands)
mark.ryan@wur.nl
Prof. Vincent Blok, Erasmus University (The Netherlands)
blok@esphil.eur.nl