In the flood of digital content we navigate daily, recommender systems shape our choices: from Netflix shows to Amazon purchases, from TikTok feeds to Spotify playlists. Yet their very name, "recommender systems," has a precise origin: a 1997 special section in Communications of the ACM guest-edited by Paul Resnick and Hal R. Varian.
In their introduction, the editors used the term recommender systems to designate computational systems that help users identify items of interest by drawing on evaluations, explicit or implicit, from other users or from item features. The systems were framed as tools for navigating large information spaces by suggesting relevant items. This editorial moment marks not merely a prominent use, but an explicit act of naming: a conceptual baptism that has shaped how these technologies are understood.
This blog note argues that the success of the term was not merely terminological. By framing these systems as sources of recommendations rather than as mechanisms of influence or preference formation, the label helped stabilize a field while shaping the normative interpretation of the technology itself.
From Tapestry to Terminology: Pinpointing the Birth
Recommender systems did not emerge from thin air. Their roots lie in the early 1990s, when the rapid growth of email lists, Usenet groups, and online forums created a new problem: how to filter overwhelming streams of digital information. One of the first attempts to address this challenge was Tapestry, a system developed at Xerox PARC and presented in 1992 by David Goldberg, David Nichols, Brian Oki, and Douglas Terry in the paper "Using Collaborative Filtering to Weave an Information Tapestry."
Tapestry introduced what its authors called collaborative filtering. Instead of relying solely on keywords or content analysis, the system allowed users to record reactions to documents, such as annotations or replies, which others could then use to filter information. In effect, the system treated human judgments themselves as signals for navigating information overload.
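The core mechanism can be sketched in a few lines. This is not Tapestry's actual implementation (the real system used an append-only document store and a query language over annotation records); it is a minimal illustration of the idea that other users' recorded reactions become a filter, with all names (`record_reaction`, `filter_by_endorsement`, the document IDs) invented for the example.

```python
from collections import defaultdict

# reactions[doc_id] holds the set of users who reacted positively
# (e.g. annotated or replied to) that document.
reactions = defaultdict(set)

def record_reaction(user, doc_id):
    """A user records a positive reaction to a document."""
    reactions[doc_id].add(user)

def filter_by_endorsement(doc_ids, trusted_users, min_endorsements=1):
    """Keep only documents endorsed by enough trusted users."""
    return [d for d in doc_ids
            if len(reactions[d] & trusted_users) >= min_endorsements]

# Alice filters her stream using reactions left by colleagues she trusts.
record_reaction("bob", "doc-1")
record_reaction("carol", "doc-1")
record_reaction("bob", "doc-3")

inbox = ["doc-1", "doc-2", "doc-3"]
print(filter_by_endorsement(inbox, {"bob", "carol"}, min_endorsements=2))
# → ['doc-1']
```

The point of the sketch is that no content analysis occurs anywhere: the only signal is other people's judgments, which is precisely what made the approach "collaborative."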
During the following years, several related systems appeared. Projects such as GroupLens, PHOAKS, SiteSeer, Fab, and ReferralWeb explored different ways of using user behaviour, evaluations, or network relationships to guide people toward relevant information. Despite addressing different domains, from Usenet discussions to web bookmarks, they shared a common aim: using traces of collective activity to help users discover relevant items.
By the mid-1990s, however, this emerging family of systems lacked a unifying conceptual label. Some were described as collaborative filters, others as social information filters or referral tools. In their 1997 introduction to the special section of Communications of the ACM, Resnick and Varian proposed the broader term "recommender systems."
Their proposal expanded the concept. Systems might generate recommendations even when users did not explicitly collaborate with one another, and recommendations could highlight interesting items rather than merely filtering out irrelevant ones. The new term therefore gathered a heterogeneous set of techniques under a single conceptual umbrella.
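The distinction between filtering and recommending can be made concrete. Rather than discarding irrelevant items, a recommender surfaces items the user has not yet seen, ranked by evidence from users with overlapping tastes. The toy version below is a crude user-based collaborative filter; every name in it (`likes`, `recommend`, the item IDs) is invented for illustration, not drawn from any of the systems discussed above.

```python
# Toy user-based collaborative recommendation: score unseen items by the
# taste overlap of the users who like them, then rank.
likes = {
    "alice": {"item-a", "item-b"},
    "bob":   {"item-a", "item-b", "item-c"},
    "carol": {"item-b", "item-d"},
}

def recommend(user, likes, top_n=2):
    """Rank items the user hasn't seen, weighted by taste overlap."""
    seen = likes[user]
    scores = {}
    for other, their_likes in likes.items():
        if other == user:
            continue
        overlap = len(seen & their_likes)  # crude similarity: shared likes
        for item in their_likes - seen:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", likes))
# → ['item-c', 'item-d']
```

Note the shift in output: a filter returns a subset of an incoming stream, whereas this function proposes items the user never asked about, which is exactly the expansion the new term licensed.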
This editorial intervention did more than summarize existing research. It stabilized a new technological category and mapped its design space. Rarely in technological history can we date a term's conceptual "baptism" so precisely: not a gradual linguistic drift, but a deliberate act of naming that simultaneously defined a field and framed how these systems would be understood.
Why This Precision Matters: Conceptual Ethics in Action
Philosophers of technology, from Langdon Winner to Bruno Latour, remind us that technical terms are not neutral labels; they shape how technologies are understood and situated in the world. The expression recommender system naturalizes a benign advisory role: systems that help users navigate information by offering suggestions. It foregrounds recommendation, a practice associated with trust and expertise, while backgrounding questions of influence and power.
Yet these systems do more than suggest. They curate informational environments, amplify certain signals, and shape patterns of attention and preference through feedback loops. Alternative descriptions, such as "persuasion engines," "attention governors," or "preference shapers," would foreground different aspects of the same technologies.
Resnick and Varian's terminology reflected the optimism of the early web, before concerns about manipulation and algorithmic influence became prominent. But this framing also helped obscure ethical issues that later became central: filter bubbles (Pariser, 2011), ideological polarization (Bakshy et al., 2015), and the effects of algorithmic mediation on autonomy (Susser et al., 2019). By presenting outputs as recommendations, the term invites trust rather than scrutiny.
This illustrates conceptual ethics in practice. As Sally Haslanger (2020) argues in discussions of contested concepts, terminology can stabilize practices while shaping how they are evaluated. The label recommender systems helped consolidate a research field, appearing in surveys such as Breese, Heckerman, and Kadie (1998) and later in Jannach et al.'s (2010) handbook, but it also framed the technology in ways that muted certain forms of critique.
Reimagining the Vocabulary of Choice
Pinpointing the 1997 baptism, and Tapestry's earlier groundwork, is therefore more than archival curiosity. It reveals how categories that appear natural are the result of historically contingent acts of naming and classification.
Why did recommender systems prevail? The term succeeded partly through prestige, introduced as it was in ACM's flagship journal, and partly through flexibility, as it could encompass collaborative, content-based, and hybrid approaches. It also reflected the technical culture of the early web, emphasizing assistance and user empowerment.
For researchers and designers, the lesson is clear: naming is an intervention; it shapes how technologies are perceived, trusted, and contested.
Conceptual labels shape how technologies are interpreted and adopted. Tapestry originally wove human judgments collaboratively; today's large-scale systems often centralize those judgments within corporate infrastructures. The success of the term recommender system illustrates how a concept can stabilize a field by framing a technology in a particular normative light: as assistance rather than influence.
Recognizing this dynamic invites greater care in the vocabulary used to describe algorithmic systems. Alternative conceptual framings, emphasizing pluralism, agency, or collective curation, could orient design debates toward values other than engagement or clicks.
Resnick and Varian closed their editorial by calling for "continued technical innovation." Twenty-nine years later, innovation must also include ongoing conceptual scrutiny. The term recommender system did more than name a class of technologies: it framed them as benign helpers while downplaying their capacity to shape informational environments and preferences.
The lesson is therefore not only to name our technologies carefully, but to revisit and interrogate those names as the systems they designate evolve. When conceptual labels normalize systems whose social and political consequences are significant, critical genealogies become essential. The baptism of recommender systems reminds us that the names we give to technologies do not merely describe them: they help shape the worlds those technologies will inhabit.
References
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160
Breese, J. S., Heckerman, D., & Kadie, C. (1998). Empirical analysis of predictive algorithms for collaborative filtering. Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, 43–52.
Goldberg, D., Nichols, D., Oki, B. M., & Terry, D. (1992). Using collaborative filtering to weave an information tapestry. Communications of the ACM, 35(12), 61–70. https://doi.org/10.1145/138859.138867
Haslanger, S. (2020). Tracing the Sociopolitical Reality of Race. In What Is Race? Four Philosophical Views (pp. 4–37). Oxford University Press.
Jannach, D., Zanker, M., Felfernig, A., & Friedrich, G. (2010). Recommender Systems: An Introduction. Cambridge University Press.
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56–58. https://doi.org/10.1145/245108.245121
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2).
Alexandru (Sacha) Mateescu is a PhD candidate in Philosophy at Paris 1 Panthéon-Sorbonne. He has an interdisciplinary background, with degrees in Computer Engineering, Contemporary Philosophy, and Philosophy of Science. His doctoral research focuses on recommendation algorithms considered as persuasive technologies, studied at the intersection of classical and modern rhetoric, and on their nature as adaptive systems. LinkedIn: https://www.linkedin.com/in/alexandru-mateescu-1148b1