The rapid rise of Artificial Intelligence has changed how bureaucracy functions. Decision-making procedures that were previously executed by humans are increasingly replaced by AI decision-making (Cetina Presuel & Martinez Sierra, 2024, 1). A recent example is the use of AI in the border checks of the European Union (Fink, 2024, 1). The European Travel Information and Authorization System (ETIAS) uses AI to label prospective travelers as ‘risky’ or ‘not risky’ by comparing their personal data against ‘specific risk indicators’ defined by Frontex (ibid., 7). People labeled ‘not risky’ are granted a travel authorization, while people labeled ‘risky’ are forwarded to a human agent who determines whether they may travel (ibid., 8). This is just one example of AI being used in high-stakes situations to make bureaucratic processes more efficient. While the efficiency gains seem substantial, so are the risks.
In this blog, I will examine different views on the use of AI in bureaucratic decision-making. I will first propose a Weberian perspective on the potential of AI. I will then contrast it with Dan McQuillan’s view in Resisting AI: An Anti-fascist Approach to Artificial Intelligence (2022), eventually arguing that the use of AI in these bureaucratic processes does more harm than good, based on empirical findings of racial bias and automation bias.
The Potential for AI in Weberian Bureaucracy
For Max Weber, bureaucracy is a machine that has to be isolated from the political arena (Newman, Mintrom & O’Neill, 2022, 7). This means that the influence of political parties is kept out of public organizations in order to preserve the organization’s neutrality as a mere executor of public policy. Bureaucrats are dedicated to executing their specific role in the process, keeping personal preferences and interests out of their considerations. They are ideally experts in their field and always act in accordance with the rules to benefit the system (ibid.). According to Weber, such a system has multiple advantages. If the rules are strictly adhered to, the output of a bureaucracy will be stable and predictable. Moreover, a neutral bureaucracy is protected against forms of corruption: public resources are distributed fairly and justly, according to the rules. Finally, Weber held that this ideal bureaucracy would lead to more rational decision-making (ibid., 8). It is important to stress that Weber’s bureaucracy is to be seen as a machine in which every part is operated by a bureaucrat. Through strict adherence to rules and a commitment to neutrality, bureaucrats are responsible for keeping the machine running as intended.
The metaphor of bureaucracy as a machine makes it easier to see why one might think that AI can streamline bureaucratic processes. AI can enhance current bureaucracies not only by increasing efficiency, but also by increasing the bureaucracy’s ‘authority over information’ (ibid.). AI that incorporates machine learning can analyze huge datasets and use them to advise a human bureaucrat during a decision-making process. This can make processes exponentially faster, and speed is a known bottleneck for a well-functioning bureaucratic system. AI can also take over some human tasks completely. The promise of AI systems is to reduce costs and human error, and to help bureaucrats make more neutral, unbiased decisions (Cetina Presuel & Martinez Sierra, 2024, 2). On the surface, this seems to fit Weber’s ideals on multiple levels. Formalizing and automating processes could help to follow the rules more closely; the machine-like qualities of a bureaucratic system would be enhanced by removing an unpredictable, biased human component (Newman, Mintrom & O’Neill, 2022, 8).
Dangers of Using AI in Bureaucracy
Dan McQuillan agrees that AI seems to fit the logic of bureaucracy neatly, but sees potential harm in precisely that property. He argues that AI enhances the bureaucratic order in a way that makes the bureaucracy more susceptible to fascism and to the reproduction of pre-existing biases and stereotypes (McQuillan, 2022, 66). This is worrying, especially in the context of the aforementioned border technologies. For McQuillan, the danger lies exactly in the fact that AI can reproduce these injustices while laying claim to being an objective and neutral tool (ibid., 60): precisely the neutrality that bureaucracy itself claims to have (ibid.).
McQuillan also sees an analogy between the functioning of bureaucracy and the unexplainability of AI’s decisions. According to McQuillan, the opacity of AI puts more distance between the people and the bureaucracy (ibid., 60). This fits common criticisms of bureaucracy as opaque and impossible to understand from the outside, as thematized in Kafka’s novel The Trial (1925). People do not understand why the system of rules treats them the way it does, and unexplainable AI amplifies this effect, according to McQuillan.
Moreover, McQuillan warns that AI might increase the amount of thoughtlessness within bureaucracy. He builds on Hannah Arendt’s conception of thoughtlessness from Eichmann in Jerusalem: A Report on the Banality of Evil (2006). Arendt used thoughtlessness to characterize how strict rule-following within a bureaucratic system, without any critical individual attitude, can contribute to the worst evils, such as those of the Second World War (McQuillan, 2022, 63). AI lends itself to being this thoughtless bureaucratic agent without any critical attitude, which could intensify existing forms of bureaucratic violence. For this reason, McQuillan warns against embedding AI in bureaucratic systems during the current rise of fascist thought.
I agree with McQuillan’s arguments, even though they sound quite radical. While this blog is too short to examine his arguments fully, I want to present two kinds of empirical findings that speak in favor of his position. First, McQuillan is right to question the unbiasedness of AI decision-making. Multiple studies have shown that the use of AI can lead to unfairly discriminatory outcomes, such as the amplification of racial stereotypes in generative AI (Ferrara, 2024). While work is being done on mitigating these effects, it has not yet been fully successful. Especially in high-stakes situations such as border checks, I think the potential for harm is too great to rely on an AI system.
Secondly, there is good empirical evidence supporting the claim that AI increases thoughtless behavior. Research on human-AI decision-making has documented automation bias: the phenomenon that humans tend to over-rely on AI when making decisions, even when the AI contradicts their own judgment (Romeo & Conti, 2025). AI recommendations have proven very difficult to judge and to weigh against other information sources. This has consequences for the trustworthiness of a human who oversees an AI agent, as the human’s critical judgment can be undermined by automation bias, bringing bureaucrats closer to the thoughtless agents that McQuillan describes.
Conclusion
In this blog, I examined the potential and the harms of the use of AI in bureaucracy. I contrasted a Weberian perspective on the potential of AI in bureaucracy with McQuillan’s critical view. Eventually, I sided with McQuillan, based on two main empirical phenomena: findings of automation bias in human-AI interaction and of racial bias in generative AI. Without a good solution to these problems, the potential harm is too large.
Bibliography
Arendt, H. (2006). Eichmann in Jerusalem: A Report on the Banality of Evil. Penguin Publishing Group.
Cetina Presuel, R., & Martinez Sierra, J. M. (2024). The Adoption of Artificial Intelligence in Bureaucratic Decision-making: A Weberian Perspective. Digit. Gov.: Res. Pract., 5(1), 6:1-6:20. https://doi.org/10.1145/3609861
Ferrara, E. (2024). Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci, 6(1), 3. https://doi.org/10.3390/sci6010003
Fink, M. (2024). Robo Swarms and Polygraphs: The Future of European Border Management and its Costs (SSRN Scholarly Paper No. 4883130). Social Science Research Network. https://papers.ssrn.com/abstract=4883130
Kafka, F. (2006). Der Process: Roman (Original work published 1925). BWV Verlag.
McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence (1st ed.). Bristol University Press. https://doi.org/10.2307/j.ctv2rcnp21
Newman, J., Mintrom, M., & O’Neill, D. (2022). Digital technologies, artificial intelligence, and bureaucratic transformation. Futures, 136, 102886. https://doi.org/10.1016/j.futures.2021.102886
Romeo, G., & Conti, D. (2025). Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI & SOCIETY. https://doi.org/10.1007/s00146-025-02422-7
Author Bio
Gwan Yu Tjiook is a student of the Master of Logic (ILLC) and the Research Master's in Philosophy (Universiteit van Amsterdam).