
Automating Autonomy: for whom and by whom?

09/05/2025

Guest Authors: Udipta Boro, Ariane Lucchini, Catalina Lagos

The increasing role of technology and automation in our lives raises the question of the extent to which they infringe on human autonomy, and under what conditions such interference could be justified. Before evaluating the impacts of technologies on human autonomy, it is important to define what we mean by autonomy. In this context, autonomy refers to the capacity of individuals to make meaningful choices that reflect their own values, beliefs, and preferences (Christman, 2020). It is not merely the absence of external constraints; rather, it encompasses both the internal abilities (such as self-reflection) and the external conditions that allow people to determine their own course.

In this blog, we will build on Mackenzie’s (2014) multidimensional account of autonomy to argue that autonomy is not a one-dimensional quality that technology either fully supports or entirely negates. Rather, it is a context-dependent and multifaceted principle that can be both fulfilled and infringed upon in different ways, for different people, and to varying degrees.

Mackenzie makes a distinction between three dimensions of autonomy. The first dimension, self-governance, involves the ability to make choices that are in line with one’s internal commitments, values, and beliefs. According to Mackenzie, self-governance has an important relational dimension because people find out who they are and what they want through interacting with others. The second dimension, self-determination, involves the external conditions that enable someone to make their own choices. These include freedom conditions, such as political and personal liberties, and opportunity conditions, i.e., access to a meaningful range of options from which to choose. The third dimension, self-authorization, involves regarding oneself as having the normative authority to be self-governing and self-determining. This involves accountability, self-evaluative attitudes, and social recognition. The latter has a clear relational dimension, since misrecognition may reduce one’s belief in one’s own authority to be self-determining and self-governing (Mackenzie, 2014).

We will explore these dimensions by examining case studies from different fields in which technologies undermine some of these dimensions. In doing so, our discussion will reveal that the impact of technology on human autonomy emerges not solely from the technology itself, but from the complex interactions between technical systems and the contexts in which they operate.

Automated surveillance and bodily autonomy through the lens of self-governance

Let us first discuss how automated urban surveillance technologies restrict self-governance and thus infringe upon bodily autonomy, especially women’s. Bodily autonomy is understood here as people’s ability to self-govern their own bodies. Using automated surveillance technologies, many cities aim to change traffic behavior to make traveling safer, while others aim for increased overall security. Both are attempts to improve city life by promoting the autonomous movement of people through increased safety and better traffic management.

However, AI-powered cameras are also being used to exert more control over people’s quotidian lives and perpetuate the values of dominant groups. For example, in Iran, traffic camera footage is used to enforce mandatory hijab rules against women drivers (Akbari, 2019). Women caught driving without “proper veiling” faced repercussions, including denied entry into public spaces (Akbari, 2019). In India, cameras implemented for supposed women’s safety are also used to enforce normative patriarchal expectations related to women’s clothing or their use of public spaces (Rathi & Tandon, 2020). Not only do these practices limit women’s free participation in public spaces, they also restrict women’s capacity to self-govern their bodies, as evidenced by the legal prosecution of those who deviate from the norm (Akbari, 2019; Rathi & Tandon, 2020).

As Mackenzie shows, self-governance revolves around the internal dimension of autonomy, which includes the competence and authenticity to be one’s true self. The aforementioned examples show that technologies, even when built with positive intentions (such as to manage traffic and increase safety), can violate these principles of autonomy. In our case, the use of cameras to enforce mandatory rules prevents women from deciding for themselves how to cover their bodies. Furthermore, these technologies, by perpetuating patriarchal norms, can simultaneously enhance the self-governance of men in privileged positions.
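To make concrete how much of this impact sits in the rules rather than in the cameras, consider a deliberately simplified sketch in Python (all class, function, and attribute names here are hypothetical, invented for illustration, not drawn from any actual system). The detection layer is identical in both deployments; only the policy applied to its output differs:

```python
from dataclasses import dataclass, field

@dataclass
class CameraObservation:
    """One detection from a (hypothetical) video-analytics pipeline."""
    vehicle_speed_kmh: float
    # Attributes inferred about the driver; what gets inferred, and what is
    # done with it, is a policy choice rather than a technical given.
    driver_attributes: dict = field(default_factory=dict)

def traffic_safety_policy(obs: CameraObservation) -> list[str]:
    """Flags behavior that endangers other road users."""
    return ["speeding"] if obs.vehicle_speed_kmh > 50 else []

def dress_norm_policy(obs: CameraObservation) -> list[str]:
    """Flags deviation from a dress norm imposed by a dominant group."""
    if not obs.driver_attributes.get("complies_with_dress_norm", True):
        return ["dress_norm_violation"]
    return []

# Identical detection output, very different autonomy impacts:
obs = CameraObservation(45.0, {"complies_with_dress_norm": False})
print(traffic_safety_policy(obs))  # [] ... no safety issue
print(dress_norm_policy(obs))      # ['dress_norm_violation']
```

The autonomy infringement is thus located in the policy layer and the social context that authorizes it, not in the detection layer alone.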

Here, we have argued that automated technologies alter the internal dimension of autonomy, i.e., competence and authenticity. In some ways, self-governance is intertwined with self-determination, which is built on external conditions for autonomy. Next, we turn to this second dimension of autonomy to further develop our argument.

Energy justice and smart metering: self-determination for whom?

In this section, we will use the example of smart metering to show how a technology aimed at behavior change can enhance the degree of self-determination for some while restricting it for others. Smart metering is considered a key technology in the energy transition, since it can facilitate a reduction in energy consumption by visualizing energy use and connecting it with daily household practices (Milchram et al., 2018). Although smart meters may enable some users to gain more control over their energy use, several scholars have highlighted that the benefits of smart metering are not distributed in an equitable way (Milchram et al., 2018; Powells & Fell, 2019; Milchram et al., 2020). Indeed, some people have more digital or technical skills and more flexibility in organizing their lifestyles than others. To counteract this, some energy companies in the UK chose to focus their pilot projects for smart metering primarily on low-income communities (Michalec et al., 2019). In some cases, families could even be forced to install a £400 smart meter (Milchram et al., 2018).

We argue that this strategy undermines self-determination and thereby fails to contribute to energy justice. The main reason is that, by forcing people to install a smart meter, energy companies impose a particular goal (efficient energy use) as well as fixed ways to reach it (constant real-time data provision and feedback).
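To illustrate the kind of design choice at stake, here is a minimal sketch in Python (the target value, message wording, and function name are invented for illustration, not an actual smart-meter implementation). The efficiency goal is fixed by the provider and baked into the feedback loop; the household cannot redefine it:

```python
# Minimal sketch of a smart-meter feedback loop (illustrative values only).
# The efficiency target is set by the provider, not the household, and the
# user has no way to change the goal the feedback is organized around.

DAILY_TARGET_KWH = 8.0  # hypothetical provider-defined target

def feedback(readings_kwh: list[float]) -> str:
    """Turn a day of half-hourly readings into the message on the display."""
    total = sum(readings_kwh)
    if total <= DAILY_TARGET_KWH:
        return f"Good job: {total:.1f} kWh used today (target {DAILY_TARGET_KWH} kWh)."
    # Households that cannot shift their consumption see this every day,
    # regardless of whether they are in a position to act on it.
    return f"Over target: {total:.1f} kWh used today (target {DAILY_TARGET_KWH} kWh)."

print(feedback([0.3] * 48))  # 14.4 kWh -> 'Over target: ...'
```

An autonomy-respecting design would, at minimum, let households set their own goals and pair the feedback with support options.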

Such a technocratic rollout of smart meters infringes on people’s self-determination because the technology does not provide opportunities for people to make their own choices regarding its use. Even if the £400 investment were eventually outweighed by reductions in people’s energy bills, the infringement of self-determination remains problematic, since it fails to recognize low-income people as capable of making their own decisions. As a result, the installation of the technology is likely to have a counterproductive effect. Indeed, a device that constantly confronts low-income people with their high energy bills is likely to increase their stress levels, which may further discourage them from changing their behavior. An alternative rollout of the technology, which puts in place support mechanisms such as financial compensation and personal advice for energy-poor households, would take into account the needs and wishes of different users, thereby respecting people’s self-determination.

AI chatbots for chronic disease management limit self-authorization

AI chatbots for chronic disease management have gained increasing attention from researchers and healthcare providers. These AI technologies promise to enhance patient autonomy by allowing patients to better understand and self-manage their bodies and conditions (Haque et al., 2023). Chatbots are employed to educate users on their condition, encourage behavior change around lifestyle choices, and help manage stress related to the disease (Kurniawan et al., 2024). However, they often neglect the self-authorization dimension of autonomy. As mentioned earlier, this dimension involves regarding oneself as having the normative authority to be self-governing and self-determining. In other words, it means seeing oneself as authorized to exercise control over one’s life and identity-defining commitments.

The problem with these AI chatbots in healthcare is that they are built upon a technocentric vision of the body, which can undermine self-authorization in several ways (Ho, 2019). These technologies turn bodily functions into data points to be fed to an algorithm, often ignoring the bodily knowledge that may be incongruent with this format. This approach fails to recognize cultural, contextual, and individual differences in how people understand their bodies and conditions.

This undermines the social-recognition component of self-authorization, because personal beliefs about how one’s body operates are systematically sidelined. For example, a diabetic patient’s lived experience of how certain foods affect their energy levels might be dismissed if it doesn’t align with the chatbot’s algorithmic predictions based on blood glucose measurements alone. This technological mediation creates a situation where authority over one’s body is subtly transferred from the individual to the system, diminishing the patient’s standing as an autonomous agent within their own healthcare journey.
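A stylized sketch in Python makes this asymmetry visible (the function name, threshold, and messages are hypothetical, invented for illustration): the structured sensor reading drives the recommendation, while the patient’s free-text account of their lived experience is collected but has no pathway into the decision:

```python
# Stylized sketch of a chatbot recommendation step (hypothetical names
# and thresholds). Structured sensor data determines the outcome; the
# patient's free-text report is logged but never influences it.

def recommend(glucose_mmol_l: float, patient_report: str) -> str:
    """Return dietary advice based on the latest glucose reading alone."""
    _ = patient_report  # collected, but discarded by the decision logic
    if glucose_mmol_l > 7.0:
        return "Reading above range: reduce carbohydrate intake."
    return "Reading in range: keep your current diet."

# The patient's embodied knowledge contradicts the reading, yet the
# epistemic authority sits entirely with the sensor.
print(recommend(6.2, "This food always leaves me exhausted by mid-afternoon."))
```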

Furthermore, this technocentric vision frames the patient’s body as something that is, or is steadily becoming, “cured”, which is not the norm for long-term chronic illness. For individuals whose realistic goal is management rather than cure, this misalignment can damage self-trust and self-esteem when improvements fail to match the implied expectations.

Conclusions 

In this blog, we have explored different technologies and their impact on human autonomy. Applying Mackenzie's (2014) multidimensional framework, we have examined how various technologies can either fulfill or infringe upon different dimensions of autonomy, often simultaneously and to varying degrees. The case studies demonstrate that autonomy is indeed multidimensional and context-dependent: the impact of a technology varies across dimensions, affects different people in different ways, and operates to varying degrees. This understanding has important implications for technology design and implementation.

Moving forward, designers and policymakers should approach technology development with this multidimensional view of autonomy in mind. Rather than asking simply whether a technology respects autonomy, we should ask: Which dimensions of autonomy does it support or undermine? For whom? And to what extent? Only by addressing these nuanced questions can we create technologies that truly enhance human autonomy.


References

  • Akbari, A. (2019). Spatial|Data Justice: Mapping and Digitised Strolling against Moral Police in Iran. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.3460224
  • Christman, J. (2020). Autonomy in Moral and Political Philosophy. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2020/entries/autonomy-moral/
  • Haque, A., Chowdhury, M. N. U. R., & Soliman, H. (2023, June). Transforming chronic disease management with chatbots: key use cases for personalized and cost-effective care. In 2023 Sixth International Symposium on Computer, Consumer and Control (IS3C) (pp. 367-370). IEEE. 
  • Ho, A. (2019). Deep ethical learning: taking the interplay of human and artificial intelligence seriously. Hastings Center Report, 49(1), 36-39.
  • Kurniawan, M. H., Handiyani, H., Nuraini, T., Hariyati, R. T. S., & Sutrisno, S. (2024). A systematic review of artificial intelligence-powered (AI-powered) chatbot intervention for managing chronic illness. Annals of Medicine, 56(1), 2302980. 
  • Mackenzie, C. (2014). Three dimensions of autonomy: A relational analysis. In Autonomy, oppression and gender (pp. 15-41). Oxford University Press.
  • Michalec, A., Hayes, E., Longhurst, J., & Tudgey, D. (2019). Enhancing the communication potential of smart metering for energy and water. Utilities Policy, 56, 33-40. 
  • Milchram, C., Hillerbrand, R., van de Kaa, G., Doorn, N., & Künneke, R. (2018). Energy justice and smart grid systems: evidence from the Netherlands and the United Kingdom. Applied Energy, 229, 1244-1259.
  • Milchram, C., Künneke, R., Doorn, N., van de Kaa, G., & Hillerbrand, R. (2020). Designing for justice in electricity systems: A comparison of smart grid experiments in the Netherlands. Energy Policy, 147, 111720. 
  • Powells, G., & Fell, M. J. (2019). Flexibility capital and flexibility justice in smart energy systems. Energy Research & Social Science, 54, 56-59.
  • Rathi, A., & Tandon, A. (2020). Capturing Gender and Class Inequities: The CCTVisation of Delhi. In Digital Development Working Paper Series: The Urban Data Justice Case Study Collection. https://doi.org/10.2139/ssrn.3705563


About the authors

Linde Franken is a PhD candidate in environmental political theory at the philosophy section of the University of Twente. Her doctoral project explores the normative foundations and implications of the emerging framework of planetary justice in the context of energy transition technologies. The project combines conceptual and empirical work and draws on a combination of theoretical perspectives, including theories of justice, design for values, and virtue ethics.

Udipta Boro is a geographer and urban planner currently working as a PhD researcher at the University of Twente, The Netherlands. His PhD research, commenced in 2021, investigates how socio-political and spatial factors shape and operate within AI-powered urban video surveillance systems. 

Ariane Lucchini is a PhD candidate at TU Delft’s Faculty of Design Engineering. In her work, she explores the sociotechnical adoption of emerging technologies, aiming to foster more inclusive societies. Her current research focuses on co-envisioning generative AI technology with vulnerable and marginalized communities. She is particularly interested in addressing gaps in healthcare systems by examining power structures, exploring gender dynamics, and employing participatory research methods.

Catalina Lagos is a PhD candidate at TU Delft researching the intersection of generative AI and social equity. Her work examines how social biases emerge in workplace interactions. Through feminist principles, she explores how AI systems can foster critical self-reflection and transformation.