We live in an era where nearly every facet of human interaction is captured, quantified, and analyzed in real time. This is datafication — the transformation of social actions into digital data, driving everything from personalized ads to predictive AI (Van Dijck, 2014). It’s not just a feature of modern society; it has become its backbone. We’re operating within what some have termed an “algorithmic turn,” where the pervasive role of algorithms is reshaping our cultural, social, and economic dynamics (Abiteboul & Dowek, 2020). The question we face now is: what does agency look like in this increasingly data-driven, agent-driven world?
As we hurtle toward Artificial General Intelligence (AGI), a system capable of understanding, learning, and performing many of the intellectual and creative tasks a human can, questions of agency become even more pressing.
The Evolving Concept of Agency
In the social sciences, “agency” refers to the locus of action — where power to act resides (Denzin, 2014). Whether it’s within individuals, organizations, or even machines, the definition of agency varies across disciplines. Some, like Foucault (1970), argue that agency emerges through discursive practices, while others suggest that machines are merely “objectifications” of human intent (Knoblauch, 2020).
But today, this debate has evolved into something more complex. Can machines possess their own agency, or are they just tools that reflect human will? Social phenomenology suggests that humans often perceive machines as intentional, even though machines merely act based on algorithms and data patterns. This creates an interesting paradox: while we build machines to serve us, they often seem to act autonomously, influencing our behavior in subtle ways (Gunkel, 2020).
“Agency […] is not inherent; it is permitted”
— Henrickson, 2018: 7
As AGI looms closer, this paradox intensifies. If machines gain the ability to perform tasks across a vast array of domains, in ways indistinguishable from human thought, how do we negotiate control? Is AGI simply an extension of our own capabilities, or does it represent a new form of agency, independent from us?
A New Kind of Agency: The Rise of Hybrid Control
Scholars like Latour (2007) and Gunkel (2020) have suggested that agency doesn’t belong solely to humans or machines, but is distributed between them. In this view, agency emerges in the “in-between” — in the relationships and interactions between people and the technology they use. This hybrid agency redefines our understanding of power dynamics, complicating simplistic notions of who (or what) is in control.
Take the rise of AI agents, for example. They’re meant to assist us with various tasks, from scheduling meetings to helping us file our taxes. But while they simplify our lives, they also raise questions about the hidden power dynamics at play. Who really has the agency here? The user? The machine? Or the company that designs and controls the technology?
As AGI progresses, these relationships will only become more entangled. An AGI system that can process immense amounts of personal data, perform complex tasks, and even make independent decisions will redefine the boundaries of agency, forcing us to rethink the roles of humans, machines, and the corporations behind them.
Access Control as a Lever
Consistent and pervasive access controls, embedded in nearly every digital experience, mean that we never truly access content in isolation. Each interaction — whether browsing social media, sharing a photo, or interacting with AI agents — is mediated by layers of authorization and authentication.
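To make that mediation concrete, here is a minimal sketch in Swift of what those layers can look like behind a single request. The service, scopes, and helper functions are hypothetical, invented for illustration rather than drawn from any real platform’s API.

```swift
// Hypothetical sketch of layered mediation: every request passes through
// authentication and authorization before any content comes back.
struct Session {
    let userID: String
    let scopes: Set<String>  // e.g. "posts:read", "photos:write"
}

enum AccessError: Error {
    case unauthenticated  // we do not know who is asking
    case unauthorized     // we know who, but they lack permission
}

// Stubs standing in for a real identity provider and data store.
func authenticate(_ token: String) -> Session? {
    token == "demo-token" ? Session(userID: "u1", scopes: ["posts:read"]) : nil
}

func loadPost(id: String, for userID: String) -> String {
    "post \(id), as rendered for \(userID)"
}

func fetchPost(id: String, token: String?) throws -> String {
    // Layer 1 (authentication): who is making the request?
    guard let token = token, let session = authenticate(token) else {
        throw AccessError.unauthenticated
    }
    // Layer 2 (authorization): is this identity permitted to read posts?
    guard session.scopes.contains("posts:read") else {
        throw AccessError.unauthorized
    }
    // Only after both layers pass does the content itself appear.
    return loadPost(id: id, for: session.userID)
}
```

The point is not the specific checks but their position: they sit in front of every read, which is what makes the mediation continuous rather than occasional.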
These layers create ongoing, often invisible, relationships not just with the digital environments we engage with, but also with the entities that manage, own, and profit from them. This is something we’re tackling head-on at koodos with DataMover and Shelf.
The Privacy Paradox
We often hear that people are concerned about their online privacy — but the truth is that most people don’t act on that concern.
Part of the reason is that the average person, when faced with the complexity of protecting their data, is overwhelmed. Setting up and managing personal data stores — systems designed to help individuals control and protect their digital information — requires technical know-how. Users must import their data, assign permissions, and continuously monitor for security risks. For most, this process feels like a burden, creating a barrier to entry. This is something we’re exploring more deeply as part of our joint residency with IDEO.
Because of this, consumer value propositions focused purely on ownership and control of personal data struggle to gain traction. While the need for privacy is acknowledged, it lacks urgency in practice unless it’s tied to a clear utility that impacts daily life.
Utility-Driven Agency
To move beyond the privacy paradox, we need to rethink the role of agency over personal data. Instead of framing it as an abstract right or responsibility, agency must be tied to direct utility — to something that makes life simpler, more enjoyable, or more efficient. When the value of controlling data is tied to real-world benefits, users are more likely to engage.
A great example of this shift is seen in design patterns that focus on user-friendly, intuitive data control. Apple’s approach to data permissioning, for instance, exemplifies this trend. When a user connects a new app to their photo library, they aren’t forced into an all-or-nothing decision. Instead, they can selectively grant access to portions of their library, and they are regularly reminded of this permission, with the option to revoke or adjust it later.
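At the API level, that flow is visible in PhotoKit. The sketch below uses Apple’s actual requestAuthorization call and the .limited status it can return; only the handling inside each case is our illustration.

```swift
import Photos

// Requesting photo-library access on iOS. The user can grant the full
// library, a limited selection, or nothing, and can revisit that choice
// in Settings at any time, without the app's involvement.
PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
    switch status {
    case .authorized:
        print("Full library access granted")
    case .limited:
        print("Access granted only to the photos the user selected")
    case .denied, .restricted:
        print("No access; the app should degrade gracefully")
    case .notDetermined:
        print("The user has not been asked yet")
    @unknown default:
        break
    }
}
```

The design choice worth noting is that .limited is a first-class outcome: apps are pushed to work with partial access rather than demanding everything.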
At koodos, we’ve built Shelf — a platform that lets individuals track their digital consumption and curate a public showcase of their taste. By using Shelf, users can aggregate their consumption data in one accessible place. We have been studying the technological shifts around personal data storage and compute — from local-first approaches to private cloud compute — but the challenge isn’t just technical. It’s about enabling meaningful engagement with that data: empowering users to reflect on and share their tastes, and in doing so re-aggregating their data into a personal data store whose access they control over time.
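To illustrate what controlling access “over time” could mean in practice, here is a deliberately simplified Swift sketch of a revocable, time-bounded access grant. The types and field names are hypothetical; this is not Shelf’s actual data model.

```swift
import Foundation

// Purely illustrative model of a revocable, time-bounded access grant to a
// personal data store. Not koodos's actual design.
struct AccessGrant {
    let grantee: String   // e.g. "taste-report-app" (hypothetical consumer)
    let scope: String     // e.g. "listening-history:read"
    let expiresAt: Date
    var revoked = false

    func permits(_ requestedScope: String, at now: Date = Date()) -> Bool {
        !revoked && requestedScope == scope && now < expiresAt
    }
}

var grant = AccessGrant(
    grantee: "taste-report-app",
    scope: "listening-history:read",
    expiresAt: Date().addingTimeInterval(30 * 24 * 3600)  // 30 days out
)

print(grant.permits("listening-history:read"))  // true, until expiry
grant.revoked = true                            // the owner withdraws consent
print(grant.permits("listening-history:read"))  // false
```

Under a model like this, the default shifts from a one-time, all-or-nothing handover to grants the owner can inspect, narrow, and withdraw.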
We intend to offer a balance of control and utility that empowers users in meaningful ways, rather than simply overwhelming them with abstract concepts of privacy.
Reclaiming Agency in a Datafied Society
The rapid rise of AI agents, which need access to our private data to perform tasks on our behalf, only amplifies the need for individuals to have centralized control over their data. These AI-driven systems demand a level of access that raises new questions about agency — not just who owns the data, but who has the power to act on it.
Yet, as much as we talk about giving users control, the truth is that even the most sophisticated AI agents don’t function in isolation. They are still shaped and constrained by the companies that build them and the ecosystems in which they operate. As consumers, we are simultaneously actors and acted upon, entangled in a web of relationships that we rarely perceive in full.
As machines become more autonomous, we must grapple with the reality that agency is not a binary, but a spectrum. It’s not simply a question of whether humans or machines are in control; it’s about how we share that control in an increasingly intertwined relationship.
The real challenge, then, is not just about designing systems that give individuals control over their data. It’s about recognizing and navigating the hidden layers of agency that exist between humans, machines, and the corporations that mediate these interactions. The future of agency lies not in choosing between human or machine control, but in understanding how these forces intersect and interact in our everyday lives.
In a world where the lines between human and machine agency are increasingly blurred, we intend to be pioneers in helping users navigate the complexities of a datafied society with tools designed for real utility, control, and empowerment. This is the future we are building — one where agency isn’t just reclaimed, but redefined. We’re also excited to soon be publishing a collaborative paper on the future of the data economy.
References
Van Dijck, J. (2014). Datafication, Dataism, and Dataveillance: Big Data between Scientific Paradigm and Ideology. Surveillance & Society, 12(2), 197-208.
Abiteboul, S., & Dowek, G. (2020). The Algorithmic Society: Technology, Power, and Knowledge. Cambridge University Press.
Hallinan, D., & Striphas, T. (2016). Algorithmic Cultures: Essays on Meaning, Performance, and New Technologies. Routledge.
Schuilenburg, M., & Peeters, R. (2021). The Algorithmic Society: Power, Knowledge, and the Future of Governance. Routledge.
Foucault, M. (1970). The Order of Things: An Archaeology of the Human Sciences. Routledge.
Giddens, A. (1984). The Constitution of Society: Outline of the Theory of Structuration. University of California Press.
Ricaurte, P. (2022). Algorithmic Cultures and Datafication: A Critical Reflection. New Media & Society, 24(4), 732-748.
Knoblauch, H. (2020). Communicative Constructions of Reality: Semiotic Mediation and Society. Springer.
Latour, B. (2007). Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press.
Gunkel, D. J. (2020). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press.
Henrickson, L. (2018). The Entanglements of Agency: Philosophical Reflections on AI, Autonomy, and Human-Machine Relations. The Journal of AI Research, 45(7), 1-20.
Rammert, W., & Schulz-Schaeffer, I. (2002). Technological Action: Agency and Normativity in Human-Technology Relations. Cambridge University Press.
Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press.
Coeckelbergh, M., & Gunkel, D. J. (2023). The Philosophy of Artificial Intelligence: Contemporary Debates. Oxford University Press.