From Agency-Enhancement Intentions to Profile-Based Optimisation Tools: What Is Lost in Translation
4 Pages | Posted: 9 Sep 2018 | Last revised: 19 Jun 2019 | Publication Status: Under Review
Whether by increasing the accuracy of Web searches, improving educational interventions, or refining policing, the level of personalisation made possible by increasingly sophisticated profiles promises to make our lives better. Why wander in the dark, making choices as important as that of our lifetime partner on the basis of the limited information we humans can plausibly gather? The data-collection technologies enabled by wearables and apps mean that machines can now 'read' many aspects of our quotidian lives. Combined with data-mining techniques, these expanding datasets facilitate the discovery of statistically robust correlations between particular human traits and behaviours, which in turn allow for increasingly accurate profile-based optimisation tools. Most of these tools proceed from a silent assumption: that our imperfect grasp of limited data is at the root of most of what goes wrong in the decisions we make. Today, this grasp of data can be perfected in ways unimaginable even twenty years ago. The profile-based optimisation tools built on this data boon thus promise to lift us out of our murky meanderings: if precise algorithmic recommendations can replace the flawed heuristics that preside over most of our decisions, why think twice? This line of argument often informs the widely shared assumption that today's profile-based technologies are agency-enhancing, supposedly facilitating a fuller, richer realisation of the selves we aspire to be.
This provocation not only questions this assumption; it also highlights the extent to which the agency-compromising, unintended effects of such technologies threaten the very resources that could be relied on to mitigate those effects. Could it be that a hitherto poorly acknowledged side effect of our profile-based systems (and the algorithmic forms of government they empower) is to leave us sheep-like, unable to mobilise a normative muscle that has gone limp? The more content we are to offload normative decisions to profile-based, optimised algorithms, the more atrophied our normative muscles become. Considered at scale, the (endless) normative holidays that would result from such offloading would spell the end of agency, in spite of the noble, agency-enhancing intentions that drove these tools' development.