Prologue
Sara enters the room, hurriedly drops her belongings on the sofa, and rushes to sit at the table, where it awaits her. In her haste, she forgets to turn on the light, so the entire scene is illuminated only by the pale beam emanating from the center of the desk. A light, she thinks, that feels like a silent call to look inward.
She sits in front of the light, takes a deep breath, and, relieved, lets go of the question that has felt lodged in her chest all day. The question spills out clumsily, hastily, urgently. She needs an answer. Simply voicing it makes her feel a release, as though an invisible pressure has dissipated. She trusts that the answer will come soon.
She doesn’t know what kind of journey her question embarks on every time she sits in front of this light, nor from where exactly the wisdom that returns to her originates. But she imagines this journey as vast, almost cosmic, and at the same time intimate, confined to the four walls of this room, lit by a light resembling a candle in a dark night.
The silence that resides between the questions and the answers calms her. Is this what prayer feels like? she has sometimes wondered. She presses “Enter,” and during the few seconds that pass before the answer appears, she feels as if her consciousness leaves her body, fully immersed in the white screen. When the space of the answer begins to fill with characters forming words, which become phrases, which shape entire paragraphs, she feels that these tiny, diligent signs bring her back to her body, as if someone were writing her from the inside.
Carl Gustav Jung defined archetypes as universal models representing fundamental symbolic structures within the collective unconscious. These figures are abstractions of signs, essential patterns that can be applied in different moments and contexts, offering an interpretative tool for understanding the content of human experience. Like an algorithm, archetypes do not contain all possible data but act as a necessary pattern to process and make sense of it.
In this text, we will explore the potential relationship between archetypes and algorithms in the contemporary world. We will ask whether these patterns share an underlying common logic and, if so, how these similarities might provide new perspectives for understanding intelligence in the context of machine learning, a technology driving the development of artificial intelligence (AI) and the evolution of human-machine interactions.
Introduction
The relationship between archetypes (as symbolic frameworks) and algorithms (as computational systems) seems, at first glance, an unlikely pairing. Archetypes emerge from mythological, psychological, and cultural narratives, while algorithms operate through mathematical models and data-driven reasoning. However, in this article we use the concept of “controlled equivocation” (equivocação controlada) as a lens to explore this relationship and turn it into a productive site of meaning-making.
Controlled equivocation, a term introduced by Eduardo Viveiros de Castro (Viveiros de Castro 1998) within the framework of Amerindian perspectivism, describes a productive misunderstanding that emerges when two distinct conceptual systems use the same words or categories but assign them fundamentally different meanings. Rather than dismissing this misunderstanding as an obstacle, controlled equivocation treats it as an intentional ambiguity and an opportunity for insight, recognizing that translation between radically different worldviews is never seamless.
In this sense, we aim to bring archetypes and algorithms into dialogue to provoke a controlled equivocation, treating both their alignments and misalignments as a site of insight, that is, a space where meaning can be negotiated and refined. We are not attempting a one-to-one translation, for they are not equivalent systems. We accept the confusion this pairing might entail; rather than dismissing it, we aim to control it and use the tension to generate insight.
Archetypes represent universal symbolic patterns that emerge across mythologies, stories, and psychological frameworks (Jung 1968). Algorithms, by contrast, represent patterns detected in data: emergent relationships distilled into statistical models that influence everything from social media recommendations to predictive text. In this regard, while archetypes organize human meaning through recurring symbols and themes, algorithms organize human behavior through measurable trends and correlations.
Although they operate in radically different languages and frameworks, both archetypes and algorithms are tools for interpreting and predicting patterns of human behavior and meaning-making. By acknowledging this relation – which is also, as we said, an equivocation – and using it as a reflective tool, we pose questions that may initially seem out of place. Ultimately, we aim to explore what emerges when we view algorithmic outputs as modern archetypes, shaping collective behavior through recurring digital patterns, and to examine how far this perspective can take us.
Through controlled equivocation we aim for this work to serve as a catalyst for further inquiry, even as a playful exploration, to open a space where archetypes and algorithms, despite their apparent differences, can coexist and even intersect, potentially revealing new insights about both systems through their juxtaposition.
We will primarily explore two paths: first, the ways in which algorithms mediate cultural meanings; and second, drawing on common tropes in archetype interpretation – such as opacity, spookiness, and unsteadiness – how these concepts, operating through distinct yet intersecting logics, can inform contemporary critiques of algorithms.
Two Mirrors
Contemporary algorithm criticism examines the societal, cultural, and ethical implications of algorithms, particularly those that underpin technologies like machine learning, recommendation systems, and artificial intelligence. Scholars and critics argue that algorithms are not neutral technical tools but deeply embedded in social and political contexts, shaping and being shaped by human behavior (Seaver 2018; Gillespie 2014).
Under this view, we can consider how algorithms, particularly large language models (LLMs), function as both cultural and technical artifacts, mediating collective knowledge in ways that reflect and influence societal norms. Much like the collective unconscious proposed by Carl Jung, algorithms process and organize vast amounts of shared information, capturing recurring patterns and themes that resonate across communities. This parallel invites us to examine how the patterns detected and reproduced by LLMs might serve as modern representations of archetypal structures, shaping not only cultural narratives but also human behavior and identity.
The collective unconscious, as defined by Jung, is a universal reservoir of symbols, archetypes, and shared experiences that transcend individual lives and are common to all humanity. Jungian archetypes are universal symbolic structures that manifest in myths, stories, and dreams, embodying fundamental themes such as the Hero, the Shadow, and the Mother. Similarly, LLMs are trained on massive datasets, extracting patterns of language, narratives, symbols, and structures from diverse sources such as books, articles, and social media. In both systems, knowledge emerges not from a single individual but from a shared, collective repository of information.
In this regard, it can be argued that both systems rely on preexisting patterns activated in response to a stimulus: a prompt in the case of LLMs or a dream or experience in the case of the collective unconscious. They can be seen as repositories of humanity, which encapsulate experiences, symbols, and collective narratives, albeit through different processes. The collective unconscious has evolved over millennia of psychological and cultural development, while LLMs have been shaped by massive data inputs in a relatively short time frame. Despite this difference, both systems reflect and reproduce universal patterns, respond to contextual stimuli, and require human interpretation to extract meaning from their outputs.
In fact, LLMs have demonstrated the ability to generate narratives that mirror traditional storytelling structures, including archetypal characters and plotlines. For instance, researchers have found that GPT-3 could identify and replicate classic character roles such as heroes, villains, and victims across various domains, including newspaper articles, movie plot summaries, and political speeches (Bamman and Smith 2021). This example illustrates how LLMs can both identify and reproduce recurring patterns in language, reflecting modern archetypes in their generated content. Through their training on vast datasets, LLMs function as a digital and statistical analogue to the collective unconscious, serving as a modern mirror that reflects the vast tapestry of the collective psyche.
More interestingly, these technologies can give rise to new archetypes, shaped by the novel forms of data and interactions they facilitate. The idea that algorithms can be culturally situated goes beyond their ability to merely reflect patterns of human behavior and types. It also encompasses how they can be used to actively shape and modify those behaviors. The distinction lies between viewing algorithms as external entities that exist within culture, merely influencing it – much like a rock alters the flow of a river – and recognizing them as integral components of cultural and social practices, with the capacity to reshape the very processes and practices they are embedded in (Hallinan and Striphas 2016).
The term “algorithmic culture” refers to how computational systems, like recommendation algorithms, classify, sort, and prioritize cultural content and how these processes influence user behavior and cultural practices. Hallinan and Striphas (2016) explain this through the example of the Netflix Prize, a $1 million competition launched in 2006 to challenge developers to improve Netflix’s movie recommendation algorithm.[1] While framed as a technical challenge, the competition also functioned as a cultural project, showing how culture is interpreted and mediated through algorithms. In this regard, the challenge was not just technical but also interpretive, raising questions about how cultural preferences are represented mathematically and how “taste” can be optimized algorithmically. At the same time, it showed how engineers, through algorithms, became arbiters of cultural value, challenging traditional roles held by critics and cultural institutions. Thus, the Netflix Prize underscored a broader societal shift where algorithms mediate cultural consumption and it exemplifies the growing entanglement of human agency, cultural interpretation, and computational processes.
The Spooky Nature of Algorithms
This interplay between human agency, cultural interpretation, and computational processes becomes particularly evident in the technical methods employed during the Netflix Prize. The technique at the heart of the competing recommendation algorithms, Singular Value Decomposition (SVD), identifies and extracts patterns in the dataset at a level of abstraction far removed from human intuition. While mathematically precise, these patterns often result in categories that resist straightforward explanation in human terms.
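To make this concrete, the following is a minimal sketch of the idea behind SVD-based factorization, using an invented four-by-four ratings matrix rather than anything resembling the Netflix data or the prizewinning system: the few latent dimensions it extracts are precisely the kind of numerically coherent but nameless categories described above.

```python
import numpy as np

# Invented toy ratings matrix (rows: users, columns: movies);
# zeros stand in for missing ratings in this illustration.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# SVD factors the matrix into user factors (U), singular values (s),
# and item factors (Vt).
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

# Keep only the k strongest latent dimensions: abstract "taste"
# factors that are numerically meaningful but have no obvious names.
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The low-rank reconstruction fills in plausible values even where
# the original matrix had gaps.
print(np.round(approx, 2))
```

The point of the sketch is not fidelity to any deployed system but the shape of the operation: each user and each movie is reduced to coordinates along latent axes that predict ratings well while resisting any straightforward description in words.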
At the time, Wired magazine described these results as “baroque mathematical combinations”: intricate, highly detailed patterns that made sense numerically but defied clear interpretation in words (Hallinan and Striphas 2016). The New York Times highlighted the “alien quality” of these results, suggesting that the algorithm revealed hidden insights into human preferences that even we might not consciously understand. Participants in the competition described the accuracy of the predictions as “spooky” because the algorithm seemed to “know” users’ tastes in ways that felt almost supernatural.
The Netflix Prize competition revealed that certain aspects of human cultural identity extend beyond what we can consciously recognize or categorize, encompassing elements perceptible only to machines. These aspects, as some authors have described in various contexts, can be understood as “prepersonal” or “incorporeal”. Félix Guattari’s concept of “prepersonal” or “incorporeal” aspects refers to intangible dimensions of identity that aren’t fully tied to individual human consciousness but emerge from collective patterns, flows, and relational dynamics (Guattari 1995). As he argues, “these emerging aspects of cultural identity contain profoundly ambivalent potentialities, and their relationship toward existing modes of personal and cultural identity is far from determined.” This reinforces the distance between machine perception and human understanding, where algorithms can detect patterns in cultural behavior that are opaque or mysterious to humans. In this regard, human cultural identity, as reflected in datasets like those used in the Netflix Prize, goes beyond traditional human-centric interpretations.
Algorithms can detect and interact with these prepersonal dimensions – patterns of behavior, preferences, and cultural signals that humans might not articulate explicitly. The relationship between these emerging algorithmic insights and traditional notions of personal and cultural identity remains unsettled and evolving. In short, cultural identity, when mediated by algorithms, includes dimensions we may not consciously recognize but are nonetheless deeply influential in shaping preferences and behaviors.
Pleasing the Algorithm
Another significant way contemporary criticism challenges the traditional view of algorithms is by shifting from seeing them as rigid technical formulas to understanding them as “multiples” – unstable, context-dependent entities shaped by human practices and interactions. As we have seen, this entails that algorithms should be studied as sociotechnical systems, not isolated mathematical objects (Seaver 2017). A distinction reappears here between algorithms in culture – that is, distinct technical entities interacting with cultural systems – and algorithms as culture – part of culture itself, enacted and shaped through human practices and social interactions.
Drawing from the anthropology of trapping (a field that explores contemporary systems of knowledge and organization, such as infrastructures and algorithms, through the conceptual lens of the “trap”) we gain insight into how epistemic and cultural dynamics can produce “encompassing, hard-to-escape cultural worlds” (Seaver 2018) in which we find ourselves entangled.
An entire domain of research known as captology emerged to investigate how computing technologies can be intentionally designed to influence users’ beliefs and behaviors. In more recent years, the term has been softened to behavior design to highlight its persuasive rather than coercive dimension, distancing itself from the more manipulative connotation of “capturing” implied by captology. Seen as persuasive technologies, such devices may be understood as traps, designed not to coerce overtly but to subtly shape behavior. Importantly, traps are not simply brutal or mechanical interventions; they are situated, embodied scenarios that establish particular relationships between actors, relationships that are often close, recursive, and intimate. Traps, in this view, are not standalone artifacts but relational systems embedded within specific epistemic, cultural, and material worlds.
As relational systems, traps generate a dynamic that operates in both directions. For example, content creators often adapt their strategies to align with platform algorithms, employing specific hashtags, video lengths, and thumbnail designs to enhance visibility and engagement. This practice, known as “gaming the algorithm,” creates a feedback loop where creators influence the algorithm, which in turn shapes creator behavior. For instance, on YouTube, creators optimize video length and viewer retention to satisfy algorithmic preferences, aiming for higher average view durations to boost recommendations.
This dynamic illustrates how creators tailor their content to meet algorithmic criteria, while algorithms adjust to prevalent content patterns, fostering a continuous cycle of mutual influence. Such interactions significantly impact cultural trends, as algorithmic amplification can accelerate the spread of certain styles or topics, leading to increased uniformity in content (Pike 2023).
We can see here another intriguing analogy, for the way content creators adapt their behavior to please an algorithm mirrors how individuals interpret archetypes or tarot readings to align their actions or make sense of their experiences. In both systems, participants respond to pre-existing structures, whether symbolic (tarot/archetypes) or statistical (algorithms), to maximize resonance, success, or clarity; in this regard, it can be argued that both involve a strategic alignment with the system.
By expanding on this idea, when someone draws a tarot card or identifies with an archetype, they often adapt their behavior based on its interpretation. For example, seeing “The Fool” might encourage someone to take a risk, while “The Hermit” might suggest introspection. Similarly, content creators learn to strategize based on algorithmic cues. They know certain trends, hashtags, video lengths, or emotional tones have higher chances of being amplified. In both cases, participants are not passive consumers. They adjust their actions to align with the perceived “rules” or tendencies of the system, whether mystical or computational.
In this regard, algorithmic systems amplify trends that gain traction, sometimes creating cultural moments (e.g., viral dances, memes). These trends, once boosted, develop a life of their own, moving beyond the algorithm into larger cultural discourse. Thus, tarot/archetypes and algorithms function as amplifiers: they surface certain ideas, trends, or narratives and allow them to gain visibility and cultural significance. In both cases, human action shapes the system as much as the system shapes human action, creating a dynamic feedback loop of influence and adaptation. Both serve as mirrors and amplifiers, not passive tools, but active participants in shaping culture and individual choices.
The Transparent and the Opaque
Archetypes can also illuminate modern critiques of algorithms by addressing the issue of transparency. Indeed, transparency has become a focal point of algorithm criticism, with researchers calling for explainable AI to mitigate the “black box” effect, where users and even creators cannot fully understand how decisions are made (Pasquale 2015). However, it has also been argued (Selbst et al. 2019) that focusing solely on transparency risks falling into the “transparency fallacy,” where symbolic gestures replace meaningful accountability. Algorithmic opacity is not a singular phenomenon but has multiple dimensions. Burrell (2016), for instance, identifies three distinct forms: (1) intentional secrecy, whereby corporations or states conceal algorithms to protect trade secrets or avoid regulation; (2) technical illiteracy, since programming knowledge remains specialized and inaccessible to most people; and (3) the inherent complexity of machine learning, where high-dimensional statistical models yield decisions that even experts find difficult to interpret. In this article, our focus lies primarily on the second and third forms. Nonetheless, any discussion of transparency and opacity must recognize these differences, as each form requires distinct strategies of response.
Returning to our play of parallels, this critique of transparency aligns intriguingly with the nature of the collective unconscious, which, like algorithms, operates as a black box – opaque and not directly accessible. Archetypes from the collective unconscious emerge indirectly through symbols, dreams, or artistic expressions, requiring interpretation to uncover their meaning. Similarly, while LLMs are technically accessible, their inner workings remain obscure, with correlations and patterns processed through opaque statistical methods. In both cases, the mechanisms behind the generation of meaning – whether an archetypal dream or an LLM-generated response – are complex and difficult to trace. This parallel suggests that the opacity of algorithms, much like the opacity of archetypes, may not always be a flaw but rather an intrinsic feature that needs careful interpretation to derive meaning.
While in the realm of modern algorithms opacity is often perceived as a problem, in symbolic interpretation systems – such as tarot, astrology, or similar practices – opacity does not undermine trust; it strengthens it, because these systems do not promise certainties but rather possibilities for interpretation. It is an opacity that is consensual and accepted, not imposed. Indeed, opacity itself operates as a necessary mechanism that activates the process of interpretation.
Along the same lines, a machine learning algorithm is a set of mathematical rules and procedures that enables a computer system to learn patterns from data, adapt, and improve its performance on specific tasks without being explicitly programmed for them; yet the specifics of how such algorithms work remain highly inaccessible, even to those who work directly with them. In the context of quantum computing, though the point applies equally to contemporary computing at large, it has been argued that both the potential of these systems and the difficulty of understanding them stem from their ability to carry out tasks beyond what humans can perceive. These cutting-edge fields extend human knowledge in ways we could never achieve alone, through processes we can’t completely grasp. Algorithms are inherently “black boxes,” like quantum computers themselves; their effectiveness depends on the fact that their inner workings remain hidden from human observers (Schradle 2020).
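To illustrate, at the smallest possible scale, this inherent kind of opacity, here is a toy sketch (a two-layer network trained on invented data; it stands in for no real system): the network learns its task, yet the parameters it ends up with read, to a human observer, as unstructured numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a simple pattern that no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network; its weights start as random noise.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent: nudge every weight against its gradient.
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(np.round(out, 2))  # typically close to [0, 1, 1, 0]: the task is learned
print(np.round(W1, 2))   # ...but the learned weights offer no legible explanation
```

Nothing in the final weight matrices names the rule the network has learned; the “explanation” exists only as a distributed numerical configuration, Burrell’s third form of opacity in miniature.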
This has led to a growing strand of algorithmic critique questioning whether algorithms need to be made “explainable” as a way to counteract the “black box” effect they often entail. What if, unlike in systems where clarity is the ultimate goal, opacity has a functional value, as it does for archetypes? These concerns have been raised in response to perspectives arguing for algorithmic transparency. Edwards and Veale (2017) explain that while the right to an explanation is an intuitively appealing solution to algorithmic opacity and bias, it is unlikely to serve as a comprehensive remedy for algorithmic harms. This stems from several factors. For instance, the legal definition of a meaningful explanation often diverges from what machine learning (ML) systems can technically provide. Additionally, there are technical limitations: ML explanations are constrained by the type of information being sought, the dimensionality of the domain, and the nature of the individual seeking the explanation.
Because of these challenges, critics warn of the risk of a “transparency fallacy,” where the emphasis on explanation rights shifts attention away from more effective solutions. The main point is that while the “right to an explanation” might carry an important symbolic value, it is both legally and technically insufficient as a standalone remedy. Other approaches can prove more effective, such as improved system design, robust institutional oversight, and leveraging other privacy provisions to address the broader challenges posed by algorithmic systems.
Along this line, we might ultimately argue that algorithms are not designed to be fully transparent; part of their richness lies in their ambiguity, a liminal space where meaning is generated. And just as the opacity of symbolic interpretation systems does not imply a lack of meaning – rather, it functions as an access point to multiple, shifting interpretations, shaped by the context and the individual interpreting them – algorithms can be understood as opaque systems of meaning, operating at a level of abstraction that challenges linear rationality.
On the other hand, efforts to audit or hold algorithms accountable often redefine the algorithms themselves, reshaping them into something “account-able” for specific stakeholders. These efforts are not neutral technical exercises; they are deeply complex social and political processes that, rather than merely exposing algorithms as they exist, often redefine them to fit specific understandings of “accountability” (Ziewitz 2016). This means that when stakeholders (e.g., governments, corporations, users) attempt to audit or hold algorithms accountable, the algorithm itself often changes in response to these demands. The act of making an algorithm “auditable” often involves simplifying, reframing, or restructuring it to fit within legal, regulatory, or social frameworks. Thus, transparency turns out to be not about revealing hidden truths but about methodically reconfiguring social interactions around algorithms. In this sense, it can be said that transparency efforts are performative acts, because they shape how people understand, trust, and interact with algorithms.
In summary, under this framework accountability and transparency are not about unveiling a pre-existing “true” algorithm but about constructing socially acceptable representations of how algorithms function. What matters is not the transparency of the inner workings (the algorithmic code or the “truth” of the tarot reading) but how the user interprets and interacts with the system’s outputs. A system can then operate within a space of opacity, where the process of generating meaning is not fully accessible to the end user, and still aim for some sort of trustworthiness – one derived not from transparency but from the system’s perceived effectiveness and the credibility of its mediators. Transparency (and opacity) isn’t about revealing (or not) the system’s inner workings, but about creating a certain arrangement around its functioning and outputs. Instead of asking, “Is the algorithm transparent?” we might ask how and why social relationships can be reshaped through this necessary opacity.
Intelligence in Dark Times
In Moral Progress in Dark Times (2022), Markus Gabriel argues that contemporary society faces a crisis of meaning driven by moral relativism, fanaticism, and the erosion of universal ethical frameworks. He critiques the dominance of nihilistic and postmodern thought, which he sees as undermining the ability to address irrationalism and fanaticism effectively. From the moment theory becomes powerless to privilege one option over another, all beliefs seem equally valid. This has led to the precipitous dismantling of forms of understanding and values that were still capable of fostering a stable social order and, ultimately, brought what he calls “contemporary obscurantism”: a state in which we have granted full legitimacy to professions of faith, no matter how extravagant their content may be. Thought has deprived itself of the right to critique the irrational, legitimizing any discourse that attempts to access the absolute, so long as nothing in these discourses appears to claim or pretend to offer a rational justification for their validity.
The central question he seeks to address throughout his book is whether it is possible to confront obscurantism without falling into dogmatic realism. Aligned with this concern, we want to reclaim the concept of diplomacy as a renewed methodological and theoretical tool to address both dogmatism and fanaticism. Starting from the modest premise inspired by Viveiros de Castro that translation between radically different worldviews is possible and necessary, but never seamless nor exempt from equivocation, we aim to move beyond treating this as an obstacle and instead transform it into a productive space for insight and understanding.
Treating intelligence as an exercise in misunderstanding allows us to take the limits of our knowledge seriously, while avoiding the pitfalls of relativism. We draw inspiration from Isabelle Stengers’ concept of diplomacy (2005) to offer a counterpoint to the notion of intelligence and to propose a path for human empowerment in the face of machine intelligence and learning. For any diplomat worthy of the name, “limits” are not meant as restrictions but as boundaries beyond which something cannot pass. Rather than ignoring or dismissing these boundaries, a diplomat has to engage with them consciously, to make them explicit, and to use them as a space for reflection and critical thought.
In this sense, every expression of intelligence is an exercise in translation, and as such, it needs to take edges seriously. As Stengers explains, “a diplomat will never tell another diplomat, Why don’t you simply accept this or that proposal?” Challenging something is relatively easy; it is always possible to challenge someone. However, a real challenge always has to consider that edges are involved, and there is no neutral, extraterritorial way to define what matters in the situation for each of the parties involved. For this reason, true diplomats should always fear being accused of betrayal when they return home. Diplomacy isn’t about fostering goodwill, achieving unity, or finding a shared language; nor is it simply negotiation between individuals open to compromise. Instead, it involves navigating the constraints imposed by the different commitments and ties people have. Diplomacy only becomes possible when all parties agree to hold back their legitimate reasons for conflict. Yet, while this mutual restraint creates the possibility for peace, it doesn’t guarantee it – peace ultimately depends on the effectiveness of the diplomatic process that follows.
And isn’t this, after all, a true expression of human intelligence? At a time when the systems we have created vastly exceed our cognitive capacities in many ways – proving that our intelligence can no longer be about simply decoding systems or making them fully transparent – diplomacy emerges as a compelling and necessary path forward. Intelligence as diplomacy is less about discovering truths than about producing communication between edges. It is a process that acknowledges and works within the complexities of divergence, the opacity of systems, and the instability of borders, rather than attempting to eliminate them.
If our systems of knowledge have become – or perhaps always were – inherently opaque and unfathomable, then intelligence, understood as the ability to live with and navigate that opacity, is more essential than ever. We need to become like modern witches – invoking the power of systems whose vastness surpasses our understanding, yet without losing our grip on the situation. It’s not the control of an all-powerful monarch, but a control that is self-aware, grounded in the recognition of its own inherent limitations from the outset.
Like witches, diplomats are specialists in working through forces they neither fully control nor entirely comprehend, yet upon which they depend to achieve their aims. Both operate in liminal zones, crafting conditions in which desired outcomes become possible. Their work is less about direct action and more about creating propitious environments. How can these figures, skilled in the art of shaping conditions, help us reimagine organizational practices?
Ritualistic-Algorithmic Governance
Organizations that operate through algorithmic and data-driven processes face a structural compromise in how accountability has traditionally been understood, insofar as those involved in designing and deploying such systems cannot fully understand or control the forces they unleash.
If there are intrinsic barriers to accountability and responsibility is structurally compromised (Nissenbaum 1996), then responsibility should be reframed: those who build and deploy these systems act less like a clockmaker tending to a mechanism and more like agents skilled at working with forces whose inner workings are beyond their full comprehension.
Two figures can help here: the witch and the diplomat. The witch is responsible not because she directly controls the force, but because she creates the conditions under which it manifests, shaping the ritual space, selecting the ingredients, invoking at the right moment. In algorithmic terms, these “rituals” could be choices about data sources, system architecture, optimization goals, and deployment context.
The diplomat, on the other hand, accepts that no party in the negotiation can ever be fully transparent or act without self-interest. Their skill lies in navigating edges, translating without erasing differences, holding tensions without forcing premature resolution, and cultivating channels for communication even when mistrust or opacity remain. In algorithmic governance, this means fostering ongoing, multi-stakeholder conversations that can adapt to changing conditions and perspectives, rather than expecting a single definitive explanation or fix.
In both cases, the alternative to traditional accountability is responsiveness: a form of ongoing custodianship and attunement to the system’s behavior in the wild, where moral responsibility is exercised as a continual and relational readiness to intervene, adapt, and mitigate, rather than as a one-time act of explanation or blame. The argument that accountability is relational is not new. Cooper et al. (2022), for instance, argue that accountability should be understood not only as a moral category, centered on blameworthiness, but also as a practice embedded within social and institutional structures. Yet this raises a further question: what kinds of social structures should these be? This is precisely where our witches and diplomats step onto the stage, perhaps unexpectedly for the reader. What if it is through such figures that alternative forms of accountability might be imagined? Witches set the conditions for outcomes they cannot fully predict, and diplomats sustain the fragile agreements that allow those conditions to hold. Responsibility here is necessarily murky: even if one is responsible in a moral sense for setting events in motion, one can never guarantee, or fully foresee, the exact results.
Another way to approach this is through the concept of traps, understood as configurations of technical and epistemic arrangements deliberately designed to attract and capture particular kinds of agents (Corsín Jiménez 2018). What interests us here is their productive dimension: traps do not merely ensnare; they also provide a space in which our thinking and living are hosted, shaped, and sustained. In this sense, traps are not only instruments of momentary capture or violence, but mechanisms that generate and condition environments. This perspective opens the way to think of rituals, too, as agents of environmentalization (Corsín Jiménez 2018), devices that shape the very conditions in which action, thought, and relation unfold.
Rituals, as structured, repeated, symbolically charged actions, can be seen as conducive environments. In an era where data-driven systems operate as opaque forces, organizational governance should borrow from the logic of rituals, which acknowledge uncertainty and opacity, rather than from the fantasy of total control. Rituals do not guarantee outcomes, but they cultivate the conditions for less harmful, more predictable ones. Much like traps in anthropological accounts, rituals are environmentalizing devices: they shape the habitat in which algorithmic forces act, embedding constraints, rhythms, and checkpoints into the organizational landscape. By making these rituals visible, repeatable, and collectively owned, we can transform responsibility from an abstract principle into an enacted, lived practice that anticipates the very opacities it cannot eliminate.
The stories we tell, and how we tell them, matter (Haraway 2016). This also applies to the world of algorithms, as Moss and Schüür (2018) have shown, where competing modes of myth-making shape expectations, vendor strategies, and organizational practices. So, the story we tell here is one of witches and oracles, but also of diplomacy and liminal zones. This is the challenge the article poses and leaves as a next step, a provocation, or food for thought: can this perspective be translated into concrete, actionable steps within organizations? Ritualistic–algorithmic governance would not necessarily appear as incense, robes, and chanting, but rather as recurring, codified practices that organizations adopt to manage forces they can neither fully see nor control. In such a context, procedures are repeated not simply because they are efficient, but because they symbolically reaffirm a sense of control and continuity amid opacity. Publicly rehearsed commitments to transparency, for example, could take the form of regular, ceremonial releases of system performance metrics, events which would be less about the precision of the data and more about enacting the organization’s ongoing “care” for the system it tends.
Similarly, periodic audits might function as a kind of seasonal rite: a predictable moment when internal and external actors gather to “read” the signs in the model’s outputs, interpret anomalies, and decide on course corrections. These audits are not unlike agricultural festivals, marking the rhythms of planting and harvest, only here the harvest is insight into how the system behaves. Even internal drills, in which teams simulate failures or unexpected shifts in algorithmic behavior, echo protective charms: by enacting the scenario, they seek to contain harm before it manifests, reaffirming the group’s readiness. Over time, these practices could become part of the organizational environment, shaping how people think, act, and feel about the systems they operate.
In an algorithmic society, rituals could serve a purpose similar to the one they serve in witchcraft: they don’t guarantee control over AI/ML systems (black-box opacity remains), but they can establish a predictable framework for interacting with unpredictable forces, and they signal to the public that care is being exercised. If the rhetoric of policies implies rational mastery of cause and effect, rituals admit that we cannot fully know, yet still commit us to structured, communal acts of stewardship. In that sense, they preserve humility in the face of forces beyond full comprehension. And yes, they can feel “anachronistic” in a high-tech context, but that’s precisely their value: they slow things down, inserting moments of deliberate repetition and symbolic meaning into a culture obsessed with speed and novelty.
As the field of recommender systems grew and researchers proposed metrics to evaluate their systems’ performance, an error metric called root mean square error (RMSE) became its paradigmatic measure. The basic idea is simple: a recommender system predicts how users will rate items, and it is judged by how accurate its predictions were.[7] This metric – easily computed, simply understood – soon dominated the field. By the time Netflix awarded its prize, the predictive paradigm, centered around RMSE, was already faltering, and the company never implemented its prizewinning algorithm. The winner was, as Netflix engineers often noted in their conference presentations, unwieldy, complex, and computationally intensive, having been hyper-engineered to reduce RMSE at any cost. But, more significantly, Netflix’s business interests had changed: when the contest began, it was a DVD rental company, mailing discs to its customers’ homes; by the end, it was a video streaming service, playing on-demand in users’ web browsers. Where the goal of recommendation had once been to accurately represent the future, it was now to keep users streaming, retaining them as paying subscribers.
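Concretely, and with invented numbers purely for illustration, RMSE is just the square root of the mean squared difference between predicted and actual ratings:

```python
import numpy as np

# Hypothetical predicted vs. actual star ratings for five user-movie pairs.
predicted = np.array([3.5, 4.0, 2.0, 5.0, 3.0])
actual    = np.array([4.0, 4.0, 1.0, 4.5, 3.0])

# RMSE: root of the mean of the squared prediction errors.
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(round(rmse, 3))  # 0.548; under the predictive paradigm, lower is better
```

The simplicity of the measure is part of the point made above: an entire paradigm of cultural prediction could be steered by a single, easily computed scalar.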
