Introduction
Today, it is difficult to navigate social media without ending up on an advertisement for a new AI tool for enhancing your productivity. These tools are usually marketed under two significant and complementary narratives. On the one hand, they promise more efficient workflows and better-organised, more creative work. On the other hand, they suggest that by using them you can do "the job" faster and, as a result, gain more time to focus on "what really matters".
It is illustrative to quote some slogans directly from the websites of the companies that produce these tools. For example, MotionApp, an AI planning platform, claims that by using its service you can "be 137% more productive" and that "Motion protects your weekends" because "[it] plans your work automatically". Notion AI, an integrated AI assistant, "helps you to do more in less time" and "puts tedious tasks on autopilot". And, as a last example, Obsidian, a note-taking software, declares itself to be "a second brain, for you, forever".
In other words, these new AI tools allegedly perform your work more efficiently, but they also promise to free you from your work sooner and, therefore, to promote your autonomy by letting you focus on what is really important. But do these technologies really promote human autonomy and help society flourish, or is reality a little more complicated?
First, it seems pertinent to remind ourselves that while AI may aim to develop a certain form of automated cognition, this does not imply any connection to cognitive autonomy. The current promise of AI productivity tools is to enhance human autonomy by delegating cognitive tasks to the AI. Yet, it is unclear why autonomy should be acquired by delegating cognition.
The tension seems to emerge from a confusion between "autonomy" and "automatic". While the latter generally refers to a device or technology that works by itself, delegating its sovereignty, the former indicates a form of self-governance in relation to one's surroundings. These new AI tools seem to miss a clear distinction between the "it" and the "I", influencing how the problem is framed and thus setting new standards of what a good user should be. Moreover, this normalisation is offered to people as a technological solution to existential crises and overburdening.
But why is this the case? Why do we miss this nuanced yet crucial distinction between autonomy and automatic? To answer this question, we must look at how (Western) culture has historically imagined and described bodies and minds, reflecting on the visions and narratives these imaginations lead to. Before that, however, it is worth briefly highlighting from which value systems and through which design approaches these AI tools are developed. Doing so allows us to grasp the normative understanding of personhood embedded in these AI productivity tools and to show how autonomy takes on a new meaning.
Frictionless: Designing for automatic users
These AI tools share many overarching commonalities. However, one that we find particularly interesting to reflect on is frictionless design. Kemper (2024) defines it as a pervasive technological design philosophy that aesthetically promotes user-friendliness, connectivity, and optimisation.
Yet, ideas are not neutral; they have a place and a time from which they come to be. It is therefore important to reflect on the cultural values and ideologies from which frictionless design emerged. For this, we need to look at Silicon Valley. Before being an innovation hub, Silicon Valley is a liberal economic project. As Kemper points out, this implies that concrete economic purposes, such as market profit, lie behind the narrative of reducing friction in digital technologies to enhance the user experience. This directly shapes the evolution of the user interface, making each change look like an upgrade while increasing dependency; an example is the transition from the classic mobile phone to the smartphone. This becomes noticeable once we question the role of values such as convenience and immediacy, which are framed as positive for the user but are in fact tied to the profit models of the companies offering these services.
Frictionless design emerges as an aesthetic logic that materialises capitalistic relationships of exploitation through economic organisational models such as surveillance capitalism and platform capitalism. What Silicon Valley's gurus have been champions at for years is creating new markets and forms of capitalisation, using scientific discoveries and technological breakthroughs to acquire an economic advantage. Details about these processes can be found in several publications on innovation, such as Mariana Mazzucato's 2014 book The Entrepreneurial State.
What we want to point out is that these new AI tools, and design philosophies such as frictionless design, are all connected to the bigger phenomenon of so-called attention capitalism, in which users' attention becomes a space of capital exploitation: the brain's dopamine and reward systems are subjected to market profitability by intervening in the design features of platforms. Well-known examples are the classic social media platforms and the introduction of gamification into platform design and interaction.
Following this economic project, to be profitable the user first needs to become predictable, so that profit can be made from that predictability. To maximise prediction, friction must be removed from the ongoing relationship between users and the interface. This hints at how the insistence of these apps on entering the organisational spaces of daily life is guided by an underlying desire to make user behaviour predictable, and therefore automatic, in line with the dominant economic value system to which these apps belong.
Through the example of frictionless design, and by situating it in its broader economic project, we see that rather than increasing the user's autonomy, these apps attempt to make users more predictable and automatic. But where does this vision of an automated persona come from? To answer this, we need to leave Silicon Valley and jump back 350 years, engaging with René Descartes' ideas of bodies and minds.
Sociotechnical imagination: Cartesianism between past and present
A powerful way to think of the continuity between Descartes' ideas and their influence today is the notion of sociotechnical imaginaries. The concept was developed by Jasanoff and Kim (2015) and refers to the ways in which we, as a society and as individuals, imagine developments of science and technology that contribute to the flourishing of humans, animals, and the planet.
Yet, and this is important, imaginations do not come from nowhere. Rather, they are culturally situated and sedimented, creating continuity between past and present in our ways of thinking about and relating to the world. Descartes has notoriously been an influential figure in the history of (Western) modern thinking. One of his most famous formulations is mind/body dualism; simply put, it holds that the mind and the body are ontologically independent, definable without reference to one another. This has led to functionalistic distinctions, in which the mind is seen as the seat of reason and the body as an operator, a machine, subjected to the mind's commands.
This conceptualisation is not only descriptive; it also entails a normative understanding of what bodies and minds should do. Contemporary cognitive and neuroscientific studies have weakened the descriptive side of dualism by showing how minds and bodies are integrated as parts of a unity, but they have been less successful in criticising the normative significance of what counts as a "good body" and a "good mind", and the relational interplay between the two.
Van Grunsven (2024) makes this case when discussing the normative imagination behind technologies designed for people with disabilities. She contends that people with disabilities are often described as having malfunctioning bodies, with technology assigned the role of adjusting them. Yet, she argues, this is a reductive and highly normative understanding of what a body is, leading to forms of technological fixes. The issue is that rather than seeing minds, bodies, and environments as relational and adaptable elements, most technological solutions are developed under a single view of the body that strongly normalises what a good body should do, with direct consequences for how society is organised and designed.
If the good body is understood as a machine, then it must be automatic, because a good machine is one that performs its work automatically, delegating sovereignty. This, however, is distinct from a body with autonomy, capable of self-governing while relating to its environment. This Cartesian sociotechnical imagination brings us back to our main case: the new AI tools that promise to enhance work productivity and give back autonomy for what really matters.
These new tools operate under a logic of replacement informed by a Cartesian imagination: the metaphor of the machine is extended to cognitive activity, with the promise of predictability aligned with the interests of the economic system in place. This implies that these technologies are also setting new standards, framing the "well-functioning user" as one who is productive and performs their work effectively, rather than one capable of self-governance and of reflecting on their own position.
We see how technological design, and the imaginative views on which it is based, are deeply intertwined with established lifestyles that feed and enrich the economic model on which corporations depend. In other words, rather than granting autonomy, these technologies seem to rest on the promise of automating people's lives, taking the burden of cognitive tasks and the responsibility of choice away from users in exchange for their sovereignty over their own actions.
Conclusion: giving up on autonomy in the name of autonomy
We started this short essay with the following question: do the new AI tools for enhancing productivity and autonomy, which are so often advertised to us, really do what they promise? After our reflection, we doubt that this is the case. The main issue we have pointed out is that the very notion of autonomy these tools promote is closer to automation than to autonomy. Whether this confusion is deliberate or not is a question that deserves a space of its own. However, we have shown how its implications are connected to clear economic projects. We did this by bringing together the phenomenon of frictionless design and Cartesian sociotechnical imaginaries. We have shown that current productivity tools can be read as setting new (normative) standards of what a good user is, far from an understanding of autonomy based on self-governance and relationality.
Putting these two ideas in relation to each other has tentatively revealed a connection between the political economy that technologies serve and the philosophical assumptions our imagination reveals (even when they have been theoretically abandoned). It is not Descartes' fault but ours, since we fail to call into question the ways of thinking we inherit through our material culture. Acknowledging this is the first step towards a political call to reappropriate one's autonomy.
References
Van Grunsven, J. (2024). Disabled Body-Minds in Hostile Environments: Disrupting an Ableist Cartesian Sociotechnical Imagination with Enactive Embodied Cognition and Critical Disability Studies. Topoi. https://doi.org/10.1007/s11245-024-10080-5
Kemper, J., & Jankowski, S. (2024). Silicon Valley's Frictionless Future: The Design Philosophy of Frictionlessness. In De Gruyter eBooks (pp. 287–302). https://doi.org/10.1515/9783110792256-018
Kemper, J. (2024). Frictionlessness: The Silicon Valley Philosophy of Seamless Technology and the Aesthetic Value of Imperfection. Bloomsbury.
Mazzucato, M. (2014). The Entrepreneurial State: Debunking Public vs. Private Sector Myths. Anthem Press.
Jasanoff, S., & Kim, S.-H. (Eds.). (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press.