Part of the 4TU.Ethics and Technology

When The Wild Robot Spoke Animal

Rethinking Democratic Participation through Design
13/04/2026

In The Wild Robot, a personal assistance robot called Roz washes ashore on an island after a shipwreck. Designed to serve humans, she instead finds herself in the wilderness searching for a task. Her attempts to serve and assist the animals on the island lead to chaos and fear.

“Aggression detected. My communication package includes strategies for conflict resolution.”

A skunk screams at the robot, after which Roz reacts:

“Your dialect is not in my databanks. Rozzum 7134 will sort out this language barrier in no time. Activating learning mode.”

Roz sits in the forest day and night learning the language of the animals. At one point, she decodes it and speaks in the nonhuman language:

“Thank you for your patience while I deciphered your language. I am Rozzum 7134, ready to enhance your lives with integrated multiphase task accomplishment.”

The animals respond in fear. A moose runs up to her and launches her meters high in the sky.[1]

What appears to be a fictional scenario, the idea of unsupervised learning algorithms attempting to decode animal communication, is in fact being explored in reality. Projects such as Project CETI and the Earth Species Project are developing AI to detect patterns in large datasets of animal vocalisations, aiming to identify structures to which meaning could be assigned.[2][3]

While scholars increasingly warn about the ethical risks and potential harms of this technology[4], it also raises broader questions about the fundamental principles of our democratic society. If we develop the ability to understand other animal species, should a truly democratic society include their voices as part of the public that participates in democratic processes? Could technology reshape democratic participation so deeply that we might one day include, say, an elephant’s opinion in decision-making? While this sounds like paradise for the animal ethicist in me, I do not want to fall into the trap of technological utopianism, which assumes that technology can solve complex social and political problems on its own.

As Santoni de Sio emphasises in his book Human Freedom in the Age of AI, the internet was once celebrated as a tool that could democratise politics by spreading knowledge, increasing participation, and challenging elite power. Decades later, that promise remains largely unfulfilled. Technology tends to mirror existing social and political power structures. Unless these underlying power relations are fundamentally challenged, technology will reinforce them rather than challenge them[5]. This raises an important question for AI for animal communication: even if we succeed in “understanding” other animals, who controls this technology, and who decides how their voices are interpreted and used? The invention of the technology does not automatically lead to their voices being included in meaningful ways; it may instead simply reproduce existing oppressive structures and forms of domination.

Santoni de Sio further argues that focusing on the ethical risks of AI, or designing “AI for good,” can become a superficial exercise if technological development remains in the hands of non-democratic entities. He calls for integrating political considerations into AI ethics, recognising that power operates through embedded social systems and institutions. According to him, much debate focuses on how AI increases inequality and harms democracy, but far less attention is given to how technology could be designed to support it. Democratic processes are increasingly mediated by technology, which means that participation can be consciously designed for[5]. Existing literature on the ethical risks of AI for animal communication remains limited and calls for further scholarly attention. Yet focusing on risks alone is insufficient. We cannot expect meaningful change in the inclusion of the more-than-human unless we begin to imagine and define what such a future could look like.

For Dewey, democracy is not primarily about abstract deliberation, but about “social experiments” through which people collectively explore and reshape their world. Democratic values emerge through practice, interaction, and ongoing inquiry.[6] From this pragmatist perspective, democracy is a continuous process of testing and redesigning social arrangements, including through the creation of artefacts, systems, and services that function as sites of inquiry.[5] In this sense, AI for animal communication can be understood as a social experiment, in which technology functions as a site of inquiry that reshapes human-animal relationships and raises new questions about, for example, democratic participation.

Critical Design, as developed by Anthony Dunne and Fiona Raby, is a design approach that is not focused on solving problems, but on using (speculative) artefacts to question assumptions and explore possible futures. It highlights that technology is shaped by social, political, and economic forces that influence how it is imagined and used. In this approach, design becomes a way of asking questions and opening up alternative ways of thinking[7].

Several scholars challenge the idea of truly “decoding” another species’ communication, arguing that computational approaches overlook embodied communication and context-dependent meaning-making[8]. From a Critical Design perspective, this limitation is not an obstacle to overcome but a productive site of inquiry. AI for animal communication can, for now, be understood as a speculative artefact that challenges anthropocentric assumptions about language and inclusion. It exposes the limits of human concepts in making sense of non-human forms of life. Perhaps the point is not to fully “translate” animal language into human language. What matters is that attempts to do so open up space to rethink, among other things, what participation and democracy could look like beyond the human.

The Wild Robot playfully imagines a multispecies future with technology, and even as a work of fiction, we can learn from its lessons. Roz is created by humans and for humans, with abilities that are not suited for understanding or assisting animals. Her presence causes complete chaos in the animal community. It visualises the impact of human-centred design, where success is defined by productivity, efficiency, and measurable outcomes. This contrasts with the animals’ embodied and situated knowledge, which cannot easily be captured through measurement. 

Over time, however, she overrides her programming through learning from the animals. She adapts to the wilderness and becomes a “wild” robot. A new form of coexistence and collaboration emerges that draws on both computational and embodied knowledge. It creates a more inclusive form of participation in the community across species. Together, the community even resists a human-made robot invasion.

When the geese arrive at a meadow where the other (non-wild) robots are working, the robots label them as “contamination, animal infestation” and immediately attempt to eliminate them. This stands in sharp contrast to Roz’s transformation: she comes to recognise the animals as individuals, while the other robots reduce them to categories to be controlled or removed. In a way, this visualises how technological design determines who is recognised as a participant and who is excluded. Roz breaks away from these power structures, allowing for a form of multispecies flourishing (one that includes Roz herself).

It is, of course, a fictional story that flirts with technological optimism, yet it never fully resolves into a technological utopia. I will not spoil that ending, though. The story inspired me to see (or rather feel?), as Donna Haraway argues in A Cyborg Manifesto[9], that while technology tends to reproduce existing power relations, it can also open up new possibilities when it is reconfigured, resisted, or reimagined. Studying AI for animal communication, I sometimes wonder whether the big question “Do other animals have language structures like humans?” really matters that much. In the end, the conversations it sparks about including other animal species in our moral circle may be just as important, if not its very purpose.

References

Header image: still from The Wild Robot[1]

[1] Sanders, Chris, dir. 2024. The Wild Robot. Universal City, CA: DreamWorks Animation.

[2] Project CETI. n.d. “About.” Accessed March 3, 2026. https://www.projectceti.org/about.

[3] Earth Species Project. n.d. “What We Do.” Accessed March 3, 2026. https://www.earthspecies.org/what-we-do/technology.

[4] Ryan, Mark, and Leonie Bossert. 2024. “Dr. Doolittle uses AI: Ethical challenges of trying to speak whale.” Biological Conservation 296 (July): 110648. https://doi.org/10.1016/j.biocon.2024.110648.

[5] Santoni de Sio, Filippo. 2024. “Design for Democracy: Deliberation and Experimentation.” In Human Freedom in the Age of AI. London: Routledge.

[6] Dewey, John. 1988. “Challenge to Liberal Thought” in John Dewey: The Later Works 1925-1953, Vol. 15. Carbondale, IL: Southern Illinois University Press.

[7] DiSalvo, Carl. 2022. Design as Democratic Inquiry: Putting Experimental Civics into Practice. Cambridge: The MIT Press.

[8] Zengiaro, Nicola. 2025. “For a semiotic of opacity: the role of biosemiotics between AI and animal communication.” Chinese Semiotic Studies 21(4): 567–594. https://doi.org/10.1515/css-2025-0009.

[9] Haraway, Donna J. 1985. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs, and Women: The Reinvention of Nature, 149–81. New York: Routledge.