Session Report
As we took stock of the notes that we had made on these two vibrant discussions, along with our own recollections of how the conversations unfolded, we realized that two different agents' agency was in question, and that our summary needs to differentiate them. We, as customer researchers, had intended in framing the salon material to focus on the agencies of end users and other sorts of customers for whom we design products and services.
But in the discussions, our own agency as researchers came to the fore as an overwhelming object of focus. The discussion in the second session essentially kicked off with the question "who decides what is a good use of AI tools?" — a question that certainly impacts customers of products and services, but that primarily brought institutional politics and power to the foreground. Discussion proceeded through topics of technocracy, executive pressure to reduce headcount, and negotiations of expertise between AI systems and human experts like doctors, before coming back around to the questions that we had expected to focus on: how to further the agency of users.
It seems to us that the in-group concerns that EPIC members have – worry about the agency and autonomy of their own positions – were so strong that they deeply shaped the discussion even when framed around end-user concerns. We cannot ignore this in-group aspect, but after providing general information about the discussion we want to distinguish these two foci to avoid under-emphasizing the customer dimension.
Agency Keywords
We defined agency as a type of “control,” and posited it as a key user value in a context of polycrisis. In our framing, agency is defined as the “capacity to act in a given environment” and implies both the ability to make choices, and power to have those choices matter. Participants raised a few questions about agency, including:
- Is agency a term that anyone uses in corporate communication?
The group said: not really. We use other terms, for instance talking about "customer value," and including control and self-determination as values. But collectively we did not expect agency to be a legible term to executives.
- What is the relationship between agency and “culpability”?
Culpability is the condition of being responsible, or deserving blame, for something. Participants asked: Who is held accountable for decisions? Is this different from agency? We did not come to a conclusion about this, but it seems that the notion of agency is tied with risk or exposure to blame if things go wrong. The group identified that the relation between agency and proper culpability can be distorted (e.g., by nonhuman actors like AI, or by scapegoating particular humans).
- Is "AI" the same kind of bubble experience as "big data"? Or not?
This keyword question is properly about “AI” not agency, but we include it here as a framing question. Multiple participants sought to understand how to apply lessons from the history of previous waves of change to research in this new environment. The key unanswered desire was to use this understanding to decide how much agency to grant to AI tools, based on how much they represent hype or substance.
Agency for Researchers
Three connected themes emerged from the discussion relating to in-group concerns about agency. First, we had direct discussion about how AI is applied in research and organizational settings, identifying negative aspects but also some positive opportunities that may be emerging, even counter to the plans of organizations. This neatly connects to the second theme, about the high-level politics of AI applications inside organizations. And finally, we touched on some more social scientific, ethnographic lenses on the risks and benefits that applied AI poses to agency.
How AI is Applied in Organizations
In general, when asked, people in organizations want to keep their own area of expertise and hand other things off to AI. But we asked: who will be able to take advantage of tools to do this? Those in previously privileged positions? Or others?
- Participants expressed that AI isn’t automating the reporting and dull tasks. As one said: “You still end up doing the Jira tickets yourself.”
Competing interests and agencies are involved in how AI is applied, and there are good and bad sides to this that arose in discussion. Historically speaking, Kaizen was a way of aligning worker agency to corporate goals in the service of process improvement. But the tenor of AI use in participants’ experience today feels less about this kind of mutual benefit, and more about obsoleting oneself. However, it’s not all bad news:
- One participant mentioned the gradual erosion of “20% time” at a big-tech firm, but that developers are applying agentic AI coding tools to claw some of this back.
The Politics of AI Application
A big question that bubbled up was: is anyone else (in organizations) talking about agency? Is anyone caring? We discussed a shared sense that execs feel pressure to achieve the mythical reductions in headcount, which might encourage application without critical thought. Participants reckoned that some AI application is being driven by this mindset, especially when “starting with shareholders.”
- Depending on your industry, consider: Is AI application starting with shareholders, or with user needs?
- When evaluating applications, who decides what’s a good use?
Questions about AI politics brought in historical examples, and the themes of technocracy and technological determinism. One historian cautioned that this kind of top-down technocracy has happened before, analogizing it to the Dandekar commission in India. Participants shared ideas about critical questions and tried to identify who could be asking them: Are AI outputs of high enough quality? Is AI helping us to make good decisions? Some technical people are taking a critical view, but perhaps this is a role that falls to user researchers and ethnographers.
Ethnographic Lenses: Joy, Choice, Contestation, and Knowledge Building
The themes of fun and joy arose a few times, perhaps out of concern that AI is removing the fun from work. One way to think about agency that we considered is as a choice of experience that is specific and situated – in the analogy of driving a car: sometimes I just want to be safe; sometimes I want to have fun. So, agency in AI application could look like being able to shift “into and out of” the work, depending on what you need; having a choice of the experience of working. Fun may be one choice among many. But participants also asked: Can we make hard work fun? What makes work fun?
- Fun isn’t agency in and of itself, but this notion of fun perhaps arose because when we are enjoying our work we feel in control, and when we feel out of control we do not enjoy.
Another theme we discussed was contestation. AI can be applied to remove time-consuming components of work, but if there is a conflict, who is right? A negotiation of expertise is needed. In some domains, AI may be faster and more accurate the majority of the time, but there may still be roles for humans in some cases. We identified this as the ability to argue with the system – something one attendee noted we “haven’t managed this in 40 years.”
- Contestation is not an AI problem; it’s an organizational and systems problem.
Knowledge building was another professional theme. Participants worried about training the next generation of experts, particularly if entry-level roles dry up and entry-level work gets automated away.
- Agency in the long term is decreased if people don’t learn basic skills.
Agency for Users and Publics
When it comes to agency for users, discussion was less grounded and specific than it was for in-group concerns of expertise, with the exception of a few comments where experts and end users crossed (for instance, AI tools for the pharmaceutical industry, where AI might displace highly paid professionals who are nevertheless not part of our own expertise community). The discussion of agency for end users focused instead on generalized strategies to foster agency, or questions about what agency in the use of a tool means. However, some of the same ethnographic concerns around contestation and embodied knowledge arose for end users as well, just in more generic and less grounded ways.
Agency in Applications of AI
The group identified a few general interaction modalities to further agency, such as:
- Asking the user questions
- Step-by-step relationship building, including little tests of each other’s alignment
We discussed the analogy of the “cold start” problem as a potential cautionary tale: typically, the cold start was a moment to hear from users and have them express their interests. We should be cautious about removing these opportunities via data-based predictions, and thus limiting users’ agency to reinvent themselves.
The concept of agency also brought up many tensions, such as:
- Between paternalism and user agency
- False agency and dark patterns
Gamification and nudges may help users achieve behaviors they want to achieve, but can also distort experience. People feel played, or play games just because they are there, without really aligning them to their goals. Dark patterns may be used to pervert agency; as one participant said of online data collection, “I’m empowered to give up my agency,” because of the overwhelming work required to refuse.
We discussed whether, in shaping people’s actions, perceived capacity for agency matters more than actual agency. One reference point we discussed is the twelve-step process for addiction recovery. The steps require agency to carry out, but the first step is symbolically giving up one’s agency.
Ethnographic Lenses: Contestation and Knowledge Building
While many of the participants were wary of the impacts of new tools and technologies, we did identify that using something to do a task does not necessarily mean giving agency away. One does not need to know how to do everything on one’s own, and this is not a sensible standard to apply. But the “right to repair” or improve the system was seen as restorative to users’ agency, just as the ability to argue with the system was seen as restorative in the professional case.
- I may not know how to repair something, but having the right to repair it gives me agency.
To analogize the right to repair in the context of AI, we might insist upon the right to go back and overturn a decision AI has made, or the right to modify the system to fit me without giving this data away to the AI provider.
Concerns about knowledge building and teaching also arose in the case of users. Participants worried about how new knowledge, especially embodied knowledge, will be created when so much focus is on the performance of language. In general, participants worried about cognitive offloading by young people. But others countered that GenAI has created new subcultures of creation and creativity.
As one participant mentioned: “Have you heard of Italian brain rot?” In other words, creativity is still alive and well even if image creation is in the hands of the image generator.
