
EPIC Proceedings
P-ISSN 1559-890X
E-ISSN 1559-8918
Special Sessions
Vol. 2025, Issue 1, 2025 • Published January 19, 2026 PDT

AI Redefining Societies

Antti Rannisto, Tuukka Lehtiniemi
Keywords: automation, public good, responsible AI, sphere transgression, values
CC BY 4.0 • https://doi.org/10.1111/epic.70022
Rannisto, Antti, and Tuukka Lehtiniemi. 2026. “AI Redefining Societies.” EPIC Proceedings 2025 (1): 193–96. https://doi.org/10.1111/epic.70022.


Abstract

This session explored the challenges and tensions involved in automating societies, especially public services, government, and civil society. Inspired by Tamar Sharon’s concept of sphere transgressions, we explored the way digitisation and AI technologies are permeating all societal spheres and institutions, championed as solutions to virtually every social problem. This brings the very specific values of technology design and development into other domains, where they challenge other values, logics, practices, and types of expertise. Participants used the sphere transgression framework, case studies, and their own experiences and expertise to illuminate the societal effects of AI in public life. Where and how is AI creating positive synergy with public values, and where does it threaten the intelligences of different social spaces, practices, and systems?

AI Redefining Societies was a facilitated group session that took place on Tuesday, September 16, and Thursday, September 18, 2025, at EPIC2025, Aalto University, Espoo, Finland.

Conceptual Framing

Tamar Sharon’s (2023) theory of sphere transgressions introduces the idea of a distinct sphere of the digital, whose characteristic values – efficiency, standardisation, control, optimisation, and convenience – are increasingly encroaching on other societal spheres. The digital sphere’s growing influence, led by actors possessing technical and digital expertise, risks crowding out the foundational values of sectors like healthcare, education, and law, and creating new dependencies on private actors for the provision of public goods.

As Stevens, Kraaijeveld, and Sharon (2024) suggest, an approach rooted in societal spheres, their values, and their internal regulatory principles has distinct strengths compared to established means of analysing the expansion of AI, and digitalisation more generally, in society. Privacy and data protection, for example, are extremely important themes when analysing digital developments in, say, healthcare or education. Yet even if privacy were protected, that protection would not guard against fundamental changes driven by the new values or logics that digitalisation brings in. A similar argument can be made regarding other well-explored approaches, such as those starting from political economy and platform power. Such approaches underline market dominance, which can indeed help tech firms encroach on education, health, and so on. But market dominance does not exhaustively describe the effects of digitalisation on how work is organised or how goods and services are produced – effects that can crowd out traditional sectoral values and principles.

Following Sharon (2023) and Stevens, Kraaijeveld, and Sharon (2024), this approach allows us to ask: Do values and expertise imported by digitalisation clash with domain-specific values or expertise? How does digitalisation redefine or reshape the nature and aims of a sphere? The group discussed and reflected on these questions based on examples from public service media and two case studies grounded in research presented by the salon conveners.

AI-Based Risk Prediction in Social Work

The first case study concerned AI tools used for data-based risk predictions in the sphere of social work (Lehtiniemi 2024). Social work is an anticipatory practice involving assessment of future safety and risk, and therefore appears to be a suitable site for machine learning applications that predict risk. The sphere approach, however, prompts the question: What values and practices are imported into social work alongside AI-based risk prediction? Does the AI tool reshape something fundamental in that sphere?

While both human caseworkers and AI applications perform risk prediction, AI introduces its own anticipatory practices. These are the anticipatory practices of machine learning and predictive analytics, which conceptualize clients as bundles of features describing their past life events and assume that similar constellations of features will lead to similar future outcomes. In contrast, the anticipatory practices embedded in social work rely on client interaction and human judgment, viewing clients as social beings situated in their life contexts and capable of changing the course of their lives regardless of past events.

Data Work in Finnish Prisons

The second case study concerned data work performed as part of prison labor in Finnish prisons (Ruckenstein and Lehtiniemi 2025). We discussed how recent critical attention to the conditions and outcomes of data labor globally can invite us to frame this as a case of exploitation and harmful effects for prison inmates. Yet here, too, the sphere framework allows us to ask more specific questions about digital technologies in the prison. What does data work do in, and to, the prison sphere? Does it threaten, reshape, or displace the sphere’s values or principles? Do tech firms somehow capture the prison to serve their own ends?

Some of the key values involved in the prison sphere are rehabilitation and normality. Rehabilitative aims inform practical decisions about who should (voluntarily) participate in data work. Normality suggests that conditions in prison should remain as close as possible to those outside – and as the outside world digitalises, so must the prison, and prison labor. Interpreted through the sphere framework, the case is not so much about the digital encroaching on the prison. Instead, the Finnish prison system appropriates data work for its own purposes, in alignment with the prison sphere’s values.

Developing a Public Service Algorithm

In addition to the two case studies, the salon briefly discussed Finland’s public service media company Yle’s (2025) development of a “public service algorithm”, designed to present content in ways that are meaningful, diverse, and transparent, balancing personalization with serendipity, and giving users insight into and control over how their data influences recommendations. The case represents another instance of pushback against sphere-transgressive logic, incorporating technological development into the spheric values of a public service media company.

Reflections and Discussion

The case studies prompted vivid discussion on the conditions under which the values of a societal sphere can guide the building of digital tools and services. Participants found the “sphere” perspective generative for analysing how digital technologies carry their own values, practices, and assumptions into institutional settings. The discussion underscored the importance of contextual analysis – understanding what technologies do within specific domains and whether they sustain or reconfigure existing values.

A key takeaway was the practical question of purpose: is the aim of digitalisation to change a sphere with digital tools, or to support something that already exists within the sphere? At the very least, the two should be clearly separated. One participant noted: “For my work as researcher and design lead at [a technology company], I believe I’ll apply the concepts of spheres and the importance of considering how the values of one sphere (namely the digital) can encroach and ‘transgress’ upon the values of another sphere. This will inform my role as an advocate for more human-centric and ethical decision making, especially as my work focuses more and more on product strategy research.”

The salon demonstrated that analysing AI through the lens of societal spheres offers a valuable way to foreground plural values and institutional diversity amid digital transformation. The framework encourages sensitivity to how AI technologies are not only technical artefacts but also vehicles of normative change, potentially sustaining, reshaping, or eroding the value systems of different societal domains.

References

Lehtiniemi, T. 2024. “Contextual Social Valences for Artificial Intelligence: Anticipation That Matters in Social Work.” Information, Communication & Society 27 (6): 1110–25. https://doi.org/10.1080/1369118x.2023.2234987.
Ruckenstein, M., and T. Lehtiniemi. 2025. “Friction and Promise in Data Labor.” Science, Technology, & Human Values. https://doi.org/10.1177/01622439251358900.
Sharon, T. 2023. “Towards a Theory of Justice for the Digital Age: In Defense of Sphere and Value Pluralism.” Radboud University. https://repository.ubn.ru.nl/handle/2066/300467.
Stevens, M., S. R. Kraaijeveld, and T. Sharon. 2024. “Sphere Transgressions: Reflecting on the Risks of Big Tech Expansionism.” Information, Communication & Society 27 (15): 2587–99. https://doi.org/10.1080/1369118x.2024.2353782.
Yle. 2025. “Yle Is Developing a Public Service Algorithm to Ensure the Relevance and Transparency of Its Services – Yle’s Press Releases.” Yleisradio Oy. May 23, 2025. https://yle.fi/aihe/a/20-10008750.
