Conceptual Framing
Tamar Sharon’s (2023) theory of sphere transgressions introduces the idea of a distinct sphere of the digital, whose characteristic values – efficiency, standardisation, control, optimisation, and convenience – are increasingly encroaching on other societal spheres. The digital sphere’s growing influence, led by actors possessing technical and digital expertise, risks crowding out the foundational values of sectors like healthcare, education, and law, and creating new dependencies on private actors for the provision of public goods.
As Stevens, Kraaijeveld, and Sharon (2024) suggest, an approach rooted in societal spheres, their values, and their internal regulatory principles has distinct strengths compared to established ways of analysing the expansion of AI, and digitalisation more generally, in society. Privacy and data protection, for example, are extremely important themes when analysing digital developments in, say, healthcare or education. Yet even where privacy is protected, this does not guard against fundamental changes driven by the new values and logics that digitalisation brings in. A similar argument can be made regarding other well-explored approaches, such as those starting from political economy and platform power. Such approaches underline market dominance, which can indeed help tech firms encroach on education, health, and so on. Yet market dominance does not exhaustively describe the effects of digitalisation on the organisation of work or the production of goods and services – effects that can crowd out traditional sectoral values and principles.
Following Sharon (2023) and Stevens, Kraaijeveld, and Sharon (2024), this approach allows us to ask: Do values and expertise imported by digitalisation clash with domain-specific values or expertise? How does digitalisation redefine or reshape the nature and aims of a sphere? The group discussed and reflected on these questions based on examples from public service media and two case studies grounded in research presented by the salon conveners.
AI-Based Risk Prediction in Social Work
The first case study concerned AI tools used for data-based risk predictions in the sphere of social work (Lehtiniemi 2024). Social work is an anticipatory practice involving the assessment of future safety and risk, and therefore appears a suitable site for machine learning applications that predict risk. The sphere approach, however, prompts the question: What values and practices are imported into social work alongside AI-based risk prediction? Does the AI tool reshape something fundamental in that sphere?
While both human caseworkers and AI applications perform risk prediction, AI introduces its own anticipatory practices. These are the anticipatory practices of machine learning and predictive analytics, which conceptualise clients as bundles of features describing their past life events and assume that similar constellations of features will lead to similar future outcomes. In contrast, the anticipatory practices embedded in social work rely on client interaction and human judgment, viewing clients as social beings situated in their life contexts and capable of changing the course of their lives regardless of past events.
Data Work in Finnish Prisons
The second case study concerned data work performed as part of prison labour in Finnish prisons (Ruckenstein and Lehtiniemi 2025). We discussed how recent critical attention to the conditions and outcomes of data labour globally can invite us to frame this as a case of exploitation that harms prison inmates. Yet here, too, the sphere framework allows us to ask more specific questions about digital technologies in the prison. What does data work do in, and to, the prison sphere? Does it threaten, reshape, or displace the sphere’s values or principles? Do tech firms somehow capture the prison to serve their own ends?
Some of the key values of the prison sphere are rehabilitation and normality. Rehabilitative aims inform practical decisions about who should (voluntarily) participate in data work. Normality suggests that conditions in prison should remain as close as possible to those outside – and as the outside world digitalises, so must the prison, and prison labour. Interpreted through the sphere framework, the case is not so much about the digital encroaching on the prison. Instead, the Finnish prison system appropriates data work for its own purposes, in alignment with the prison sphere’s values.
Developing a Public Service Algorithm
In addition to the two case studies, the salon briefly discussed the development by Yle (2025), Finland’s public service media company, of a “public service algorithm”, designed to present content in ways that are meaningful, diverse, and transparent, balancing personalisation with serendipity, and giving users insight into and control over how their data influences recommendations. The case represents another instance of pushback against sphere-transgressive logic, incorporating technological development into the spheric values of a public service media company.
Reflections and Discussion
The case studies prompted lively discussion on the conditions under which the values of a societal sphere can guide the building of digital tools and services. Participants found the “sphere” perspective generative for analysing how digital technologies carry their own values, practices, and assumptions into institutional settings. The discussion underscored the importance of contextual analysis – understanding what technologies do within specific domains and whether they sustain or reconfigure existing values.
A key takeaway was the practical question of purpose: is the aim of digitalisation to change a sphere with digital tools, or to support something that already exists within the sphere? At the very least, the two should be clearly separated. One participant noted: “For my work as researcher and design lead at [a technology company], I believe I’ll apply the concepts of spheres and the importance of considering how the values of one sphere (namely the digital) can encroach and ‘transgress’ upon the values of another sphere. This will inform my role as an advocate for more human-centric and ethical decision making, especially as my work focuses more and more on product strategy research.”
The salon demonstrated that analysing AI through the lens of societal spheres offers a valuable way to foreground plural values and institutional diversity amid digital transformation. The framework encourages sensitivity to how AI technologies are not only technical artefacts but also vehicles of normative change, potentially sustaining, reshaping, or eroding the value systems of different societal domains.
