Introduction
Organisations create value by structuring and coordinating their members’ actions through shared rules, routines and practices, becoming repositories and carriers of useful knowledge in the process. Economic theories hold that the bigger an organisation gets, the more it can do. An organisation can grow by extending its boundary (toward monopoly) or its network of interactions (in markets, via contracting, and so on). Since the mid-20th century, the adoption and use of ICT, including software-as-a-service (SaaS), has enabled organisations to grow and perform better. However, the scope and scale of organisations today are still broadly the same as those of the past; their legal, administrative and cultural logic has changed little, and neither has the way they carry and use knowledge. Our research investigates what we call ‘artificial organisational intelligence’: a prospective state of organisational evolution in the digital era that includes the ability to talk to an organisation rather than its representative, and for organisations to talk to other organisations, in what are termed ‘knowledge networks’ (Zargham and Ben-Meir 2023). The core focus of this work is understanding systems and processes aimed at making organisation-level knowledge legible to machines and humans. Only then can we begin to theorise what artificial intelligence means for organisations. This paper examines ethnography’s role within Artificial Organisational Intelligence, not only as a methodological lens for studying the emergence of these machine-readable, organisation-level knowledge systems, but also as an active practice of regulating AI within organisations that achieve this state.
The empirical component of our work has commenced with a collaborative project involving academic researchers at the Centre for Automated Decision-Making and Society (ADM+S) and two industry partners: an engineering services firm called BlockScience, and a research not-for-profit network called Metagov. The partners are building, exploring and experimenting with a software protocol – called Knowledge Organisation Infrastructure (KOI) – for groups to organise, network, and share knowledge without centralising control into a unified third-party system. This paper focuses on the methods and tools we are developing for the project, how ethnographic practice can be deployed within organisational knowledge processes that involve AI, and what this means for us as ethnographers.
KOI was initially designed (by BlockScience) to structure organisational knowledge across disparate sources of information (Sisson et al. 2024). A consequence of KOI is that organisations can make their knowledge, rules, and other practice-based signals legible to both human and machine agents. Our use of the words ‘artificial’ and ‘intelligence’ is not suggesting the creation of a super intelligent machine to replace organisations, but rather that use of AI is altering human and machine participation in knowledge production, enabling new forms of legibility over organisational practices that were previously opaque or informal.
This capacity not only redefines what firms offer (towards a new ‘theory of the firm’) but also necessitates significant methodological innovations in ethnographic practice. Ethnography in the context of KOI becomes a recursive, participatory practice, where observation, interpretation, and inscription intersect with organisational knowledge processes. Our initial explorations suggest there is a ‘use’ for ethnography in artificial organisational intelligence that goes beyond human-in-the-loop (systems where human oversight is retained over automated functions), to something more akin to ‘building the loop’. By this we mean the deliberate assembling of knowledge pipelines through which to organise context, governance and interpretability of organisational intelligence, making the loop itself a co-governed system.
Methods: Tool-Building in Our Own Ethnographic Practice
Our approach is best described as a design-oriented, participatory ethnography in which the research instruments and tools have been co-developed with collaborators and field participants. The project to date can be divided into three iterative technical phases: 1) the development and deployment of tools for automating consent processes and collecting approved posts in a database (Rennie et al. 2022); 2) the development and use of a plug-in that enables ethnographers to pull consented posts into local vaults and attach field notes, tags, and links; and 3) the further development of the plugin to send annotated notes back into an organisation, enabling participants to view additions and changes (including deletions) in real time. As described below, the three phases evolved our method from crowd-curated data capture through human interpretation, to the construction of possible human-AI feedback loops that keep observation, analysis, and inscription visible and revisable to both researchers and research participants.
Ethical considerations have been crucial to informing the infrastructural choices we have made. Decisions about how consent is obtained, what data becomes visible, and who participates in interpretation are all enacted within the system’s architecture as described below. The participant consent journey demonstrates how policies and procedures formed and managed outside of an organisation can be handled and processed as rules to be automated within an organisation’s own boundaries, without imposing external technical systems into that process. Such dynamics have caused us to review and reconsider the use and practice of ethnography conducted in the context of organisations and AI.
Theories and Practices of the Firm
There are many theories of why firms[1] exist, each foregrounding distinct coordination mechanisms. Penrose (1959) describes firms as repositories of productive knowledge, enabling the strategic management and recombination of resources into new products. Coase (1937) highlights that internal hierarchies within firms reduce the search, bargaining, and enforcement costs inherent to spot markets. Alchian and Demsetz (1972) point to firms’ capacities for incentives, measurement, and monitoring to overcome collective action and shirking problems. Hart (1995) emphasises firms as collections of incomplete contracts, structured to mitigate principal-agent conflicts through residual control rights. Each of these theories shows how firms translate dispersed knowledge into coordinated action, whether through managerial cognition, hierarchical control, monitoring systems, or contractual arrangements. Nelson and Winter’s evolutionary theory reframes the firm not as a hierarchy of contracts but as a population of routines: “regular and predictable patterns of activity” that carry organisational know-how across time (Nelson and Winter 1982). These routines are partly tacit; employees enact them without full articulation, and management selects among them based on performance feedback. Crucially, most routines remain opaque to outsiders and only weakly documented internally. Yet, a firm’s distinction lies in this somewhat cryptic stock of operational knowledge.
If the evolutionary frame tells us why routines matter, three decades of STS and practice research show us how they materialise in everyday work. For instance, Lucy Suchman (1987, 2007) demonstrates how what appears as systematic procedure in formal manuals manifests on the shop floor as situated improvisations coordinated through talk, gestures, and artefacts. Julian Orr’s ethnography of Xerox technicians (1996) highlights informal knowledge-sharing practices, such as storytelling, that preserve critical diagnostic expertise absent from official databases.
In many organisations, these practices are enacted and mediated through digital infrastructures – knowledge management platforms, chat systems, software project management workflows – that encode routines and make tacit knowledge visible and actionable. Star and Griesemer (1989) introduced the influential concept of boundary objects, which they define as artefacts flexible enough to traverse different professional contexts yet sufficiently stable to maintain shared meaning. Classification infrastructures further illuminate how metadata, filing systems, and repair logs invisibly shape organisational memory and action possibilities (Bowker and Star 1999). Practice theory underscores how routines emerge through dynamic interactions among documents, software, conversations, and tacit cues, highlighting competence as relational and embedded rather than individually held. Schatzki’s (1996, 2002) site ontology frames organisations explicitly as dense complexes of practices, sayings, and material artefacts that collectively structure meaningful action.
The combined insights from these fields clarify not only why firms exist but also how they operationalise knowledge. However, the introduction of artificial intelligence (AI) now provokes a new, critical question: how does AI impact organisational knowledge that transcends individual cognition, and what implications does this hold for the firm’s fundamental rationale? Currently, most enterprise AI applications (forecasting models, generative co-pilots, etc.) are narrowly focused, drawing on data that is held by firms but not necessarily connecting the processes, practices and rules that produce organisational knowledge. How might organisational knowledge in its entirety become legible to human and machine actors to enhance productive capacity? Can AI contribute to the legibility and operationalisation of organisational knowledge without displacing the nuanced, situated forms that make it valuable? We describe this scenario as Artificial Organisational Intelligence (AOI), whereby firms develop systems that render organisational knowledge continuously accessible, governable and actionable by both humans and machines.
We are particularly interested in how KOI is implemented within distributed organisations that lack traditional hierarchical structures, thus facing particular knowledge coordination challenges. Historically, large firms have enjoyed competitive advantages due to lower internal coordination costs despite bureaucratic inefficiencies. KOI’s innovation lies in reducing the costs of inter-organisational coordination through shared knowledge networks (Zargham and Ben-Meir 2023). This approach potentially enables smaller organisations to federate effectively, enhancing innovation without forcing homogenisation into unified databases or centralised ontologies.
In the following sections, we provide a detailed overview of KOI’s architecture and describe our ethnographic experimentation with these emerging organisational forms.
Knowledge Organisation Infrastructure (KOI)
KOI is the software equivalent of duct tape: it can be used to connect disparate information systems and knowledge sources residing across platforms and applications. As knowledge emerges from the ability to continuously organise and reorganise information to link current actions with anticipated future outcomes (Sisson and Ben-Meir 2024), this duct tape-like purpose is significant for how we work with AI. In addition, organisations can implement bespoke rules and processes within their nodes, setting explicit policies regarding knowledge sharing and governance.
At the core of KOI are Reference Identifiers (RIDs), a protocol analogous to library call-numbers, which systematically label and reference diverse “knowledge objects”, such as documents, spreadsheets, or chat messages. Importantly, RIDs allow anyone who has the identifier to request access to the original object without the identifier itself revealing its content directly. KOI nodes manage these identifiers, maintaining their accuracy and updating them as the underlying knowledge objects evolve.
Nodes within a KOI network exchange three types of simple yet critical signals – “new”, “update”, or “forget” (collectively known as FUN) – thereby constructing an interconnected KOI-net. Within a KOI-net, participants can reference, discuss, and access knowledge that has been shared, regardless of the knowledge objects’ physical or digital location, organisational affiliation, or format. This approach enables collaboration without forcing any organisation to relinquish control of their data or adopt a unified ontology[2]. Crucially, the KOI protocol standardises communication among nodes but does not dictate internal node behaviour. Rather, KOI facilitates mutual understanding and integration through a flexible, decentralised standard for knowledge referencing.
Artificial Organisational Intelligence (AOI), as we conceptualise it, relies fundamentally on this capacity to render organisational knowledge legible and actionable without dissolving boundaries (including but not restricted to governance rules as per Star). In KOI’s case, when Large Language Models (LLMs) are used to query the RID-based knowledge graph, they do so within such boundaries[3], including instructions as to how knowledge can be used, to whom it should be accessible, etc. This creates dynamic feedback loops that enhance organisational responsiveness and adaptability. Thus, KOI shifts organisational knowledge management from static repositories to dynamic, governed, reflexive knowledge graphs that both humans and machines can actively navigate and refine. As Orion Reed from BlockScience observed during testing: “[T]he real challenge for intelligent-seeming systems is not in the raw power of the LLM, but the modularity and complementarity of other systems, governance affordances, and the ways in which things are organised. Which comes back to objects as reference, RIDs, identity relations, and other work which moves towards a more malleable, bottom-up infrastructure for organising (and therefore making use of) our digitally represented knowledge” (Orion Reed, Slack message, KMS-GPT 210224 testing reflexivity, 2024).
This evolution toward AOI does not only digitise existing organisational knowledge; it transforms practice-arrangement complexes (per Schatzki) into dynamic, machine-readable networks. KOI effectively re-materialises organisational practices, making them explicit, queryable, and adaptable in real-time by both human and non-human agents. The practical and theoretical implications of KOI suggest a possible reconceptualisation of organisational capabilities, in which firms can orchestrate a continuously evolving, reflexively governed knowledge network, aligning human and machine agency toward common objectives more effectively than traditional market mechanisms or standalone AI solutions. The “regulability” of AI in this context occurs as a localised, group process for specifying where and how AI systems are connected into existing systems, as well as the ability to make collective decisions, including boundary-setting (permissioning, etc.). We call this “building the loop”.
As Jake Goldenfein (2024) observes in relation to “human in the loop” in legal and regulatory domains, the human decision-maker often functions as a conceptual shortcut or last line of defence that upholds the tenets of legal authority while eliding “questions around the different ways humans are threaded into decision systems and attendant attributions of responsibility” (p. 21). Empirical studies show that human oversight rarely corrects algorithmic bias and often functions as a performative check that neither ensures better decisions nor redistributes responsibility meaningfully. Goldenfein’s answer is that we need to instead focus on “the complex redistributions of responsibility that trace the distribution of decision-making elements into automated systems and the authority to specify them, including the influence of companies that consult, design, and build relevant software” (pp. 21-22).
In our approach, the question is not whether a human is “in the loop” at the moment of decision, but how loops are built (how organisations construct systems for contextualising knowledge, setting permissions, and shaping the flow of meaning across human and machine actors). This is not about divine moral authority of the human, but about the governing that occurs through organisational forms that structure conduct, orient attention, and produce intelligible, actionable knowledge within regimes of accountability and control (Rose 1999). While not an answer to questions of liability or regulation, it does open the door to possibilities for organisational-level approaches (including shared rules) in the mitigation of AI harms.
Telescope: From Data Collection Bot to KOI-net Sensor
Our use of KOI for ethnographic practice came about through need: transforming a stand-alone tool for collecting raw data into a pipeline for sending and receiving associated contextual information collected and/or created by ethnographers (i.e. “processed data”). We originally created this tool, called Telescope, to address challenges encountered by digital ethnographers researching decentralised online communities, particularly within blockchain governance contexts. Traditional ethnographic methods involving direct observation and manual notetaking often struggle to scale effectively in digital environments where participants frequently enter and exit discussions, remain pseudonymous, and distribute organisational knowledge across numerous platforms (e.g., Telegram, Slack, GitHub, blogs, voting platforms, and blockchain transactions). Moreover, passive monitoring and logging of conversations raise ethical concerns regarding consent and participants’ awareness of the research process.
With Telescope, participants in a Discord or Slack community can explicitly flag messages relevant to research via the telescope emoji reaction (🔭). Upon detecting this reaction, the Telescope bot automatically initiates a consent workflow by privately messaging the original author, informing them of the research, and requesting permission to include their message in the dataset. With the author’s consent, Telescope archives and displays the selected content in a dedicated channel (and makes it available for export to other data files), creating a transparent, participatory record of evolving community insights.
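The consent workflow above can be sketched in platform-agnostic form. The function names and data shapes are ours for illustration; the real Slack/Discord API calls are abstracted into a simple `dm` callable:

```python
# A minimal sketch of the Telescope consent workflow, with platform
# APIs abstracted away. Names and structures are illustrative only.

ARCHIVE: list[dict] = []      # consented posts, available for export
PENDING: dict[str, dict] = {} # flagged posts awaiting author consent

def on_reaction(message_id: str, emoji: str, author: str, text: str, dm) -> None:
    """When someone reacts with the telescope emoji, privately ask
    the message author for consent before anything is archived."""
    if emoji != "🔭":
        return
    PENDING[message_id] = {"author": author, "text": text}
    dm(author, "A researcher flagged your post. Include it in the study dataset?")

def on_consent_reply(message_id: str, approved: bool) -> None:
    """Archive the post only if the author explicitly approves;
    a refusal simply drops it from the pending queue."""
    msg = PENDING.pop(message_id, None)
    if msg and approved:
        ARCHIVE.append(msg)

sent: list[str] = []
on_reaction("m1", "🔭", "alice", "governance proposal draft",
            dm=lambda user, text: sent.append(user))
on_consent_reply("m1", approved=True)
```

The key design property is that the archive is only ever populated downstream of an explicit approval event, so consent is enforced by the pipeline itself rather than by researcher discipline.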
Early deployments of Telescope in communities like SourceCred (see Rennie 2023) revealed its strengths in participatory data curation but also its limitations. Individual messages flagged by Telescope often required additional contextualisation and linkage to broader themes and events to be fully interpretable and useful for ethnographic analysis. To address these shortcomings, we integrated Telescope into KOI. In the KOI-enhanced implementation, Telescope now functions as a sensor node within a larger network of interconnected tools designed to facilitate ethnographic research. Approved Telescope messages are ingested into Obsidian vaults through a specialised plugin called “KOI Sync”, enabling ethnographers to add critical metadata, context, and additional notes directly linked to the original content. These annotated Telescope notes can be further integrated into structured knowledge maps alongside other research materials such as readings, web clips, and meeting notes.
Moreover, KOI integration expands the participatory and feedback dimensions of our initial Telescope digital ethnography approach. Ethnographers can share their annotated observations back into the KOI network, specifying permissions and pathways for dissemination. Community members, other researchers, and even AI agents may then access and engage with these notes in real-time. For instance, an organisation might query annotated ethnographic observations through an LLM chatbot, allowing stakeholders immediate insight into ongoing research findings. Similarly, an AI agent could subscribe to live KOI updates, receiving continuous context streams that may better inform decision-making, budget allocation, or thematic analyses.
The ‘forget’ action, in particular, means that a participant can withdraw permission for use of a particular piece of data and the system will update accordingly. For instance, shortly after approving a Slack post for inclusion in an ethnographic dataset, a Metagov contributor decided to withdraw their consent. They did so by returning to the consent request previously sent to them by the Telescope bot via direct message in Slack and selecting the ‘Revoke’ option. This triggered a delete event, propagated via the post’s RID, which removed it from the ethnographer’s Obsidian vault through the plugin’s syncing function. Although the post itself was deleted, its RID persisted, thereby preserving the surrounding context. The history of the deletion (not its content) remained queryable, providing an auditable record that a boundary had been renegotiated. Meanwhile, the message could remain in Metagov’s Slack as an input into the group discussion as intended. The episode demonstrated that KOI’s ethics layer is not a static consent form but a live governance surface: participants can continually redraw what counts as data, and the infrastructure enforces those decisions in real time.
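The revocation episode above can be sketched as a small sync handler. The vault paths, event fields, and timestamping are our assumptions for illustration, not the KOI Sync plugin's actual schema:

```python
# Sketch: propagating a 'forget' event into an ethnographer's local vault.
# The note's content is deleted, but a tombstone keyed by the persistent
# RID records that a boundary was renegotiated. Illustrative names only.
import time

vault: dict[str, str] = {"rid:slack/m1.md": "post body + fieldnotes"}
tombstones: dict[str, dict] = {}

def on_forget(rid: str, note_path: str) -> None:
    """Delete the synced note but keep an auditable, content-free
    record that the RID existed and consent was revoked."""
    vault.pop(note_path, None)
    tombstones[rid] = {"event": "forget", "at": time.time()}

on_forget("rid:slack/m1", "rid:slack/m1.md")
```

This separation (content deleted, identifier retained) is what allows the deletion history to remain queryable while the data itself is no longer accessible to the researcher.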
A key concern in our work is how rules expressed and required by an actor outside of the immediate field site are dealt with in distributed automated systems. The participant consent processes required by university ethics committees have been a useful object in this regard; automating consent and permissioning through technical processes has enabled us to observe how an ‘offline’ process and boundary can be managed and made legible within the KOI system. The ‘forget’ function means that a participant can withdraw specific inputs from the dataset after they have opted into the project yet have them remain within the group’s discussion, effectively allowing data to be accessed by some and not others (humans or machines). The Telescope tool processes who opted in, anonymised handles (when requested), timestamps, and any subsequent “forget” requests, while KOI automatically propagates updates and deletions across nodes using the RID system. Similarly, an ethnographer can specify whether a note is available to a colleague, the community or an LLM. The project has thus taken an applied approach to the boundaries of ethnography itself and how these can be ‘seen’ within systems and become part of the active management, expression and enforcement of organisational intelligence. In this way, Telescope’s integration into KOI maintains human oversight and ensures ethical compliance while actively constructing organisational context – bridging raw conversational data into organisational knowledge through ethnographic practice in ways that can be called, re-interpreted and connected.
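The per-note permissioning described above (a note available to a colleague but not an LLM, for example) can be expressed as a small policy check. The audience labels and lookup function are our illustration, not KOI's actual policy model:

```python
# Sketch of per-note audience permissions. A node would consult such a
# policy before serving a knowledge object to a requester; the labels
# and structure here are illustrative assumptions only.
from enum import Enum

class Audience(Enum):
    COLLEAGUE = "colleague"
    COMMUNITY = "community"
    LLM = "llm"

# The ethnographer attaches an audience set to each note's RID.
permissions: dict[str, set[Audience]] = {
    "rid:note/42": {Audience.COLLEAGUE, Audience.COMMUNITY},  # withheld from LLMs
}

def may_read(rid: str, requester: Audience) -> bool:
    """Default-deny: a requester sees a note only if its audience
    set explicitly includes them."""
    return requester in permissions.get(rid, set())
```

Because human and machine readers are checked against the same policy, the boundary-setting becomes a single, inspectable object rather than separate conventions for people and for AI agents.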
Ethnography as Building the Loop: Observing, Interpreting, and Inscribing
The integration of Telescope and KOI into ethnographic practice prompts the question: is this still ethnography? Ethnography is traditionally understood as comprising three core activities – observation, interpretation, and inscription (Dourish 2014) – each of which is altered by our approach. Ethnographers inherently influence the fields they study; our practice of “building the loop” makes this influence explicitly infrastructural. In the following section we detail the role of the ethnographer in building and structuring information feedback loops towards AOI.
Observation
Marilyn Strathern described ethnography as “the deliberate attempt to generate more data than the investigator is aware of at the time of collection” (2004, p. 5), underscoring the inherently participatory and interpretative nature of the practice (see also Dourish 2014). Traditionally, this meant prolonged reflection and reinterpretation after data collection. However, with Telescope and KOI, consented Slack posts and field observations are embedded within a live knowledge graph, each annotated with a Reference Identifier (RID). Observation thus extends beyond the sole ethnographer, becoming a collaborative, networked activity, potentially augmented and enriched by AI agents that may reinterpret and recontextualise data as the organisational landscape evolves. The ethnographer (or group of ethnographers) nonetheless plays an important role in configuring the system that records, filters, and signals what counts as meaning (e.g. tagging, annotating, setting context in KOI). The ethnographer therefore takes part in what we call ‘building the loop’.
Anthropologists have long framed their engagement with computational infrastructures, including artificial intelligence, as a form of participation in the feedback loops of these systems. In such contexts, ethnographic observation does not remain external but becomes part of the system’s own informational flows: data about the system becomes data for the system. First-order cybernetics conceptualised observation as external and objective, focused on regulating machine operations from outside (Beer 1995). Second-order cybernetics, by contrast, emphasises the observer’s inclusion within the system, recognising the reflexivity and positionality of observation (concepts long familiar to ethnography; see Mead 1968). Similarly, within KOI systems, the ethnographer becomes a node within the network alongside infrastructural sensors such as Slack, Discord, and other data streams. Rather than solely documenting activity, the ethnographer actively senses, interprets, provides context, corrects, and feeds annotated information back into a socio-technical system that adapts to these inputs. In practice, this involves capturing consented posts in real time, annotating podcasts and social media content, using tools such as Obsidian for synchronised note-taking, and contributing curated field observations to networked knowledge graphs where human and AI agents co-produce insights. This approach extends beyond conventional presence in a bounded field site to a sustained, iterative mode of sensing distributed across multiple ethnographers, systems, and channels.
Interpretation
Interpretation in this setting potentially becomes recursive and multi-agent rather than retrospective and human-only (or mostly human). Analytical sense-making is performed not only by ethnographers but also through queries to large language models (LLMs) or automated agents processing evolving data streams. In such a context, oversampling of data is intentional, leaving subsequent interpretive work to a diverse set of human and machine actors. Importantly, the knowledge boundaries and rules for data use remain explicitly visible and co-created. As Munk, Olesen, and Jacomy (2022) argue in their work on ethnography and AI, ethnography shifts from a mode of explanation to one of explication, a process where even “machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use” (p. 1). While it is more accurate to say that machine learning algorithms can explain what they do, just not necessarily in a language that is legible to us, the point on explication versus explanation is important. In this case, the ethnographer’s role expands to facilitate and curate the collaborative explication process, while retaining clarity around rules and boundaries.
This is not a shift away from thick description, but toward making it machine legible and queryable, especially for AI agents to compute over. Research participants are able to tag research data to signal importance to a research topic or question of interest. The ethnographer becomes a context-librarian, for whom fieldnotes, metadata classification, permissioning, and the curation of semantic linkages across disparate inputs are core to fieldwork. The resulting context and boundaries become part of the means through which organisational intelligence is generated.
Inscription
Traditionally, ethnography culminates in a monograph authored by a researcher. Ken Anderson et al. (2019) argue that when studying AI systems, it is insufficient to narrate solely from a human perspective. In building the loop, inscription still occurs, but now as a networked, collaborative, and often machine-readable process. KOI and Telescope transform inscription into a multi-authored activity, where ethnographic authority emerges from the ongoing circulation, connection, and interaction of knowledge objects within the network. As a result, a fieldnote may become an input to a semantic map, a prompt for an LLM, or a knowledge object with a RID, whilst still carrying the trace of ethnographic labour – observation, interpretation, curation. Rather than producing a single, fixed narrative, ethnographic inscription therefore becomes a living, queryable mesh of data traces. These traces can be traversed, contested, and augmented by both humans and algorithmic agents. Through this approach, the “thickness” of ethnographic description is enhanced by the evolving connections and interactions captured in the knowledge graph.
Our ongoing experiments will continue to observe the extent to which KOI and Telescope enable ethnography that is a collaborative and continually evolving practice, in which observation, interpretation, and inscription can involve active participation from both human communities and artificial agents.
Conclusion
Artificial organisational intelligence, as we conceive it, is not the use of AI over particular data within an organisation. Furthermore, it is not the replacement of organisations by AI. AOI is making the knowledge of organisations, including tacit and practice/process components, machine-readable so that we may talk to an organisation (not just a representative of an organisation). In this scenario, organisations may also talk to each other through AI. The BlockScience-developed Knowledge Organisation Infrastructure is an example of an architecture that makes this possible by enabling people to connect and govern otherwise disparate sites of knowledge – in the process making the boundaries and practices of the organisation machine-readable.
The implications of AOI for ethnographic practice are threefold. First, in terms of temporality, data collection, analysis, and output are no longer staged sequentially. A fieldnote can immediately sync to a dataset, which can then be interpreted, fed back into an organisation to enhance the legibility of the dataset, and acted upon. Second, in terms of scale, the ethnographer is no longer bounded by what they can physically perceive or manually track and parse. The coordination of sensors (e.g., Telescope), curators (e.g., ethnographers or participants), and machine agents (e.g., KOI-connected LLMs and other data products) creates a distributed attention infrastructure that augments the ethnographer’s reach and responsiveness across complex, vast field sites. It also demands a new literacy in infrastructure maintenance, context prioritisation, and feedback navigation, as well as keen awareness of maintaining a high degree of ethics within these evolving practices and tools. Third, in terms of epistemology, “the field” is not discovered but enacted through socio-technical affordances and boundary creation. The context set by the ethnographer (along with participants using KOI-networks and aided by bots) reconstitutes the field as an emergent mesh of relations, permissions, and interpretive acts. The ethnographer’s authority no longer rests in solitary immersion or narrative synthesis but in their capacity to configure and maintain intelligibility within digitally networked architectures.
As John Law writes, “The world is not a singular thing. It is not simply complex. It is also multiple. More than one – but less than many. And social science, by trying to represent and order that world, often makes it more singular than it actually is” (2004, p. 6). Our approach to ethnography outlined here is not just “participatory”, it is “more than one”. Building the loop, as we have described it, is an ethnographic practice adapted to conditions of human-machine hybridity that can contribute to Artificial Organisational Intelligence. Ethnography in this scenario retains its commitment to immersion, reflexivity, contextualisation, and ethical engagement. Where it differs is that the ethnographer co-produces interpretation with moderators, stewards, and AI agents and may do so in ways that are more immediately accessible to those involved. Participatory ethnography is extended to include machines, not as tools of extraction but as collaborators in interpretation where the rules and permissions for how that occurs can be collectively governed. This is a move from “human-in-the-loop” to “human building the loop”: designing systems that are not only ethically aware but epistemically generative. The field becomes a system that ethnographers not only describe but continually rewire.
About the Authors
Ellie Rennie is an Associate Investigator of the ARC Centre of Excellence for Automated Decision-Making and Society and a Principal Research Fellow at RMIT University. Her research uses ethnographic methods to examine permissionless systems and on-chain communities, including validator governance, contribution systems, and infrastructures for the collective governance of knowledge.
Kelsie Nabben is an ethnographic researcher specialising in the social impacts of emerging technologies, particularly decentralised digital infrastructure (including blockchains, peer-to-peer protocols, and Decentralised Autonomous Organisations) and other algorithmic systems (such as large language models).
Michael Zargham is the founder and CEO of BlockScience. He holds a PhD in systems engineering from the University of Pennsylvania, where he studied optimisation and control of decentralised networks. He has designed data-driven decision systems, built a data science team for a media technology firm, and worked on the mathematical specifications of blockchain-enabled software systems, with a focus on the observability and controllability of the information state of the networks.
Jason Potts is a Professor of Economics, specialising in the innovation of institutions and technology. He is a Professor at Alfaisal University and a Research Affiliate at MIT.
Brooke Ann Coco is a PhD candidate at RMIT University and the ARC Centre of Excellence for Automated Decision-Making and Society. Through ethnographic inquiry, her work examines how governance is embedded in and enacted through digital infrastructures, with a focus on knowledge organisation, decentralised systems, and the politics of technology.
Luke Miller is a software developer at BlockScience and Metagov where he works on the KOI protocol. He also leads the technical development of the Telescope ethnography tool at Metagov.
Matthew Green is a Research Assistant at RMIT University who has been assisting in building Obsidian vaults and workflows for ethnographic research.
Acknowledgements
Research for this paper was funded by the Australian Research Council-funded Centre of Excellence for Automated Decision-Making and Society (“The Use of Automated Knowledge in Society” project). Work on Telescope and Obsidian ethnography tools was funded through the Australian Research Council-funded Cooperation through Code Future Fellowship project (FT190100372). Note on authorship: Luke Miller and Matthew Green contributed to this paper through the development of the systems and tools discussed (as opposed to the written text), which are integral to the research. We also acknowledge input and feedback from others involved in the KOI project, especially David Sisson (BlockScience) and Elianna DeSota (Metagov).
Economists refer to “firms” whereas we use the term “organisations” as the latter can encompass a wider array of group formations.
Ontologisation refers to the process of formally specifying or systematising entities, concepts, and their relationships within a domain – typically in a way that enables computational representation and reasoning.
With some caveats depending on the model. For instance, if ChatGPT is connected via an API to a KOI node with a RAG system (as was the case with Metagov’s early implementation), then it will still produce responses influenced by OpenAI’s training, rules, and so on. The point here is that any model can be used, including ones that have been developed by or for the organisation.
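To make the model-agnostic point concrete, the sketch below shows the retrieval and prompt-assembly steps that sit between an organisation’s documents and whichever model it chooses (hosted or self-run). All names (`Document`, `retrieve`, `build_prompt`, the `koi:` identifiers) are hypothetical illustrations, and the keyword-overlap scoring is a stand-in for the vector-embedding retrieval a real RAG system would use; this is not the KOI protocol’s actual interface.

```python
# Minimal sketch of retrieval-augmented prompting over an organisation's
# documents. Assumes documents are available as plain text; all names and
# identifiers are hypothetical.
from dataclasses import dataclass


@dataclass
class Document:
    rid: str   # resource identifier within the knowledge network (hypothetical)
    text: str


def score(query: str, doc: Document) -> int:
    """Crude relevance score: how many query words appear in the document."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.text.lower())


def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query: str, docs: list[Document]) -> str:
    """Assemble the context-augmented prompt sent to whichever model is used."""
    context = "\n---\n".join(f"[{d.rid}] {d.text}" for d in docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


corpus = [
    Document("koi:governance/voting", "Proposals pass with a simple majority vote."),
    Document("koi:onboarding/guide", "New members join via the onboarding call."),
]
top = retrieve("How do proposals pass a vote?", corpus)
prompt = build_prompt("How do proposals pass a vote?", top)
```

Because the model only ever receives the assembled prompt string, the retrieval layer is indifferent to whether the responding model is a commercial API or one developed by or for the organisation, which is the caveat’s point.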
