Note on the title: see [1].
Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, and their performance scrutinised, in numerous contexts. Yet much of what constitutes “algorithms”, beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations”[2], is often taken for granted. At the same time, they are “invoked as powerful entities that control, govern, sort, regulate, and shape everything from financial trades to news media”[3]. Recently (May 16-17, 2013), an interdisciplinary event organised at New York University addressed this issue through an interesting lens: that of governance – governance by algorithms in addition to governance of algorithms.
Taking stock of the event, which this author attended, the article seeks to contribute to the discussion of “what algorithms do” and of the ways in which they are artefacts of governance, providing two illustrative examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. Indeed, the question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and practice of internet governance, in terms of both institutions’ regulation of algorithms and algorithms’ regulation of our society.
The omnipresence of data, the consequences of their organisation
The role of invisibility in the classification processes that order human interaction, the procedures through which categories are made and kept invisible, the ways in which people can change this invisibility when necessary, and the extent to which systems of classification are crucial to the building of information infrastructures have been core preoccupations of science, technology and society scholars for several years.[4] Yet the issue of information classification and organisation has perhaps never been as relevant as in our current times of “information overload”[5] and internet-mediated access to the vast majority of the information surrounding us.[6] Indeed, digital data seem to proliferate in today’s complex world, building on the variety of platforms and media that allow for dematerialisation and rapid circulation and distribution. They serve different purposes, from trading to surveillance, from evaluation to recommendation; they are listed, regrouped and organised by means of many tools and devices, from search engines to e-commerce websites. While companies leverage the traces consumers leave on the web to better target, customise (and take advantage of) their next purchases and interactions, some users worry about the portraits that such traces allow others to paint of them, and about the impossibility of modifying or erasing them, left to the perusal of generations to come[7].
Several authors argue that we are currently entering the era of big data and algorithms, and that this “is a major breakthrough in the development of digital services (as it) gives decisive importance not only to the owners of data, but also and especially to those who can make them intelligible”[8]. The algorithms underpinning the information and communication technologies we use daily, the internet first and foremost, are (also) artefacts of governance, arrangements of power and “politics by other means”[9].
The power of algorithms
By naming a conference held at New York University last May “Governing Algorithms”, its organisers were making a deliberate choice of ambiguity: hinting both at the governance of algorithms (the extent to which political regulation can affect the functioning of the instructions and procedures underlying technology) and at the governing power of algorithms themselves.
The ways in which the pervasiveness of algorithms in human society has political implications appear as a core issue of our times; algorithms are a key feature of both today’s information ecosystem[10] and underlying cultural norms[11], as they contribute to shaping the information we access and its organisation. In a recent paper, communication scholar Tarleton Gillespie highlights six dimensions of political valence for algorithms that have public relevance, i.e., those algorithms used to “select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions”[12]. These six dimensions are:
- patterns of inclusion, the choices behind the constitution of an index, what is included and excluded in it, and how data is “prepared” for the algorithm;
- cycles of anticipation, the consequences of attempts, by those creating the algorithms, to gather information about their users and make predictions about their future behaviours;
- the evaluation of relevance, the criteria by which algorithms determine what is not only relevant, but appropriate and legitimate;
- the promise of objectivity, the way the technical nature of the algorithm is presented as a guarantee of impartiality, particularly in the case of controversy;
- the entanglement with practice, the processes by which users reshape their practices to suit the algorithms they depend on, and turn algorithms into terrains for political contest;
- finally, the production of calculated publics, the process of algorithmic presentation of publics back to themselves, and how this shapes a public’s sense of itself.[13]
These six dimensions bring to the fore two main consequences of the “computation” of our information society. Delegating to algorithms a number of tasks that would be impossible to perform manually automates the process of submitting data to analysis; in turn, the results of these analyses automate decision-making. This double automation poses the question of agency and control[14]. Asking who the arbiters of algorithms are, whether algorithm design is an assertion of authority over more than the algorithm itself, and what autonomy, if any, algorithms possess amounts to examining the accountability and responsibility of algorithms as socio-technical artefacts, that of their creators and users, and ultimately the balance of power that algorithms facilitate or cause.
Algorithmic governance: Part 1. Web search
The ways in which the web gives more visibility to some information and content than to others are at the very heart of the recurring debate on the defining features of the digital space as a “public space.” According to Jürgen Habermas, the “father” of the public sphere concept, two conditions are necessary to structure a public space: freedom of expression, and discussion as a force of integration. The architecture of the “network of networks” seems to articulate these two conditions. However, while the first is frequently recognised as one of the widespread virtues of the internet, the second seems more uncertain[15]. In his book The Wealth of Networks, legal scholar Yochai Benkler argues for a global “order” intrinsic to the web, whose core feature is that the selection of information is no longer the monopoly of gatekeepers, journalists, librarians and editors, but is delegated to internet users, now publishers in their own right. By citing and quoting one another in conversational niches, these individuals and groups single out quality information for algorithms, which, in turn, order and classify it and make it available in search engines.[16] Thus, the ordering of web-hosted information appears as a co-production of internet users and computational tools.
The integration of the conversations and discussions taking place at the micro level is delegated to algorithms. The aggregated arguments that result from this integration are perceived as an “implicit universal consensus”; they have both the strengths and the weaknesses of information that cannot be traced back to any specific individual, yet results from a wide assemblage of opinions.[17] Search engines, and the multiple metrics underlying the internet, hierarchise the visibility of information by displaying it at the very top of search result lists or burying it at the bottom. By de facto deciding “what must be seen,” they can encourage or discourage controversy and discussion, constructing the public agenda of political and social priorities in the process, as well as selecting the interlocutors that matter.[18]
In particular, owing to the quasi-monopoly that Google currently holds on web search practices, its PageRank algorithm has been widely examined as the new gatekeeper[19] and “benevolent dictator”[20] of digital public spaces and spheres. The algorithm implements, according to a “recipe” that partly remains an industrial secret, different sets of measurement criteria assessing authority (the number of citations), audience (the number of visits or clicks), proximity and affinity (recommendations) or speed (the real-time aggregation and relay of “hot” topics). PageRank, as the “master switch” of the internet,[21] centralises and organises the circulation of information in the network of networks, and for every search query, arbitrates on what is important and relevant.
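While the full “recipe” remains secret, the citation-based core of PageRank was described publicly in Brin and Page’s original 1998 paper, and can be sketched in a few lines of Python. The link graph and the damping factor below are invented for illustration; the deployed system combines this authority signal with the many other, undisclosed criteria mentioned above.

```python
# A minimal sketch of the PageRank iteration as published by Brin and Page
# (1998). The four-page link graph and the 0.85 damping factor are
# illustrative assumptions, not Google's production configuration.

# Hypothetical link graph: page -> pages it links to
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page keeps a baseline share of rank...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # ...and passes the rest along its outgoing "citations"
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Pages cited by highly ranked pages rise to the top of the list
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```

On this toy graph, C ranks highest because every other page “cites” it; the point of the sketch is simply that authority propagates through citations, which is what makes the algorithm an arbiter of visibility.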
Algorithmic governance: Part 2. Recommendations in e-commerce[22]
For some years now, online seller Amazon has been “a remarkable prescriber”, whose prescriptions are based on the recommendations of its readers/buyers. The vendor’s website allows each of its registered users to see, in a single click, other purchases made in the past by users who have acquired the same title[23]. Personalised recommendations are nothing new in the world of book publishing and selling, digital or otherwise. They simply, as a librarian wryly remarks, used to be “the exclusive purview of booksellers, librarians… and friends. Now your best friend for advice on reading is called ‘recommendation Al Gorithm’… and it loves you very much!”[24]
Indeed, Amazon and other online sellers base their recommendation systems on the systematisation and automation of a very widespread and very social phenomenon: the exchange of advice and guidance among users who share preferences and affinities. Drawing from methods based both on content (considering two books “similar” if they share a large number of words) and on collaborative filtering (intersecting lists containing particular books with lists based on previous records of books purchased or borrowed by readers), Amazon has developed an algorithm called “item-to-item collaborative filtering”. Its details remain an industrial secret, but the algorithm demonstrates its effectiveness every day in “personalising” recommendations according to the interests of each consumer. As its name suggests, rather than matching a user with similar users, the algorithm relates each item a user has ordered and purchased to similar items, and eventually combines these into a recommendation list.[25]
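Based on the outline published by Amazon’s engineers[25], the logic can be sketched roughly as follows in Python. The purchase histories are invented, and cosine similarity over buyer sets is one common choice among several; the deployed system’s weightings and its optimisations for catalogues of millions of items remain proprietary.

```python
# A rough sketch of item-to-item collaborative filtering, following the
# outline in Linden, Smith & York (2003). All data here is invented.
from collections import defaultdict
from math import sqrt

# Hypothetical purchase histories: customer -> set of items bought
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_a", "book_b"},
    "carol": {"book_b", "book_c", "book_d"},
}

# Invert to item -> set of buyers: the "vectors" the algorithm compares
buyers = defaultdict(set)
for customer, items in purchases.items():
    for item in items:
        buyers[item].add(customer)

def similarity(i, j):
    """Cosine similarity between two items' buyer sets."""
    shared = len(buyers[i] & buyers[j])
    return shared / (sqrt(len(buyers[i])) * sqrt(len(buyers[j])))

def recommend(customer, top_n=3):
    """Relate each item the customer bought to similar items,
    then combine the scores into a single recommendation list."""
    owned = purchases[customer]
    scores = defaultdict(float)
    for i in owned:
        for j in buyers:
            if j not in owned:
                scores[j] += similarity(i, j)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bob"))  # ['book_c', 'book_d']
```

The design choice the paper emphasises is that item-to-item similarities can be computed offline, so that at browsing time the site only needs to look up items similar to those in a user’s history, which scales far better than comparing each user to every other user.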
Behind this algorithm, and behind readers’/buyers’ impression that Amazon knows their tastes very well, perhaps too well, lie years of research and experiments in a recent subfield of computer science whose practical applications are increasingly widespread, albeit discreet: data mining, in particular affinity analysis and market basket analysis.[26] For readers looking for new things to read, suggestions similar to their previously purchased articles are constructed from a mix of several sources of information about them, fed into a large database where it is combined with other shopping histories. This information can range from the most obvious demographics about oneself and one’s close relatives to more complex assessments based on the sites one consults before arriving at Amazon, or on one’s “habit of clicks.” The entanglements within this large database of users’ purchasing behaviour, activated in accordance with Amazon’s patented algorithm, are the basis of the suggestions familiar to the user, such as “Recommended as you bought…” or “Recommended because you added X to your wish…”, and influence book purchases on Amazon every day.
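Market basket analysis itself can be illustrated with a minimal sketch of the standard support/confidence rule mining that underlies “customers who bought X also bought Y” suggestions. The baskets and the confidence threshold below are invented for illustration and say nothing about Amazon’s actual parameters.

```python
# A minimal sketch of market basket analysis: mining "customers who bought
# X also bought Y" rules with the standard support/confidence measures.
# Baskets and the 0.5 threshold are invented for illustration.
from collections import Counter
from itertools import permutations

baskets = [
    {"novel", "cookbook"},
    {"novel", "biography"},
    {"novel", "cookbook", "biography"},
    {"cookbook"},
]

item_count = Counter()   # how many baskets contain each item
pair_count = Counter()   # how many baskets contain each ordered pair
for basket in baskets:
    item_count.update(basket)
    pair_count.update(permutations(sorted(basket), 2))

# A rule X -> Y is characterised by:
#   support    = share of all baskets containing both X and Y
#   confidence = P(Y in basket | X in basket)
n = len(baskets)
for (x, y), count in pair_count.items():
    support = count / n
    confidence = count / item_count[x]
    if confidence >= 0.5:
        print(f"{x} -> {y}: support={support:.2f}, confidence={confidence:.2f}")
```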
Algorithms and rules, rule by algorithm
We live in an increasingly algorithmic world. This article has examined two cases, related to web-based information and communication technologies, in which the importance of algorithms is high and their presence pervasive. However, the invisible computational structures that guide our search results and our online purchases extend to many other contexts in which algorithms are deployed and in which, in the face of recent crises, regulatory work has been insistently called for, from facial recognition software to financial markets.[27]
The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, and more generally, of the governance of the complex, automated systems that permeate today’s world.
The academic landscape in the interdisciplinary fields of communication studies, internet studies, and science and technology studies reflects a thriving and growing interest in this question. As an additional path towards answering the key question, “who does the algorithm serve?”, scholars also investigate the historical process from which the algorithm has emerged as a key topic of our times and attempt to situate it in the larger context of political economy.[28]
As not only academic research but also current news shows ever more frequently,[29] two faces of the algorithms/rules relationship are currently under scrutiny, and are likely to come under even more in the near future. On the one hand, there is the issue of institutions’ ruling of algorithms. Should the locus of legal reasoning related to these systems shift to the coding of algorithms? Should regulation, or further regulation, of algorithms be pushed for or advocated in specific contexts? What would such regulation look like, would it even be possible, and what effects would it have?[30]
On the other hand, the extent to which we live in a world ruled by algorithms has to be assessed. Given the ubiquity of algorithms, we need to research not only the ways in which they regulate us, but also “what it would mean to resist them”.[31]
References
[1] This article is partially a recollection and account of the Governing Algorithms conference held at New York University on May 16-17, 2013.
[2] Gillespie, Tarleton (2013). “The Relevance of Algorithms”. Forthcoming in Media Technologies: Essays on Communication, Materiality, and Society, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press. Available at http://governingalgorithms.org/wp-content/uploads/2013/05/1-paper-gilles…
[3] http://governingalgorithms.org/
[4] Bowker, Geoffrey C. & Susan Leigh Star (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: The MIT Press.
[5] Flew, Terry (2008). New Media: An Introduction (3rd Ed.). Oxford: Oxford University Press.
[6] Cardon, Dominique (2013). “Présentation”. Dossier Politique des algorithmes, Réseaux, 177 (1): 9-21.
[7] On the internet’s “persistent memory” and the so-called “right to be forgotten”, championed by the EU in the recent past, see e.g. Beckles, C.-A. (2013). “Will the Right to Be Forgotten Lead to a Society That Was Forgotten?”, Privacy Perspectives, https://www.privacyassociation.org/privacy_perspectives/post/will_the_ri… or, for a critical view, Harris, L. (2013). “How to fix the EU’s ‘Right to be Forgotten’”, The Huffington Post, http://www.guardian.co.uk/technology/series/internet-privacy-the-right-t…
[8] Cardon (2013), p. 10.
[9] Latour, Bruno (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press, p. 229.
[10] Anderson, C. W. (2011). “Deliberative, agonistic, and algorithmic audiences: Journalism’s vision of its public in an age of audience transparency”. International Journal of Communication, 5: 529-547.
[11] Striphas, Ted (2009). The Late Age of Print: Everyday Book Culture from Consumerism to Control. New York, NY: Columbia University Press.
[12] Gillespie (2013), p. 2.
[13] Ibid., pp. 2-3.
[14] Barocas, Solon, Sophie Hood & Malte Ziewitz (2013). “Governing Algorithms: A Provocation Piece”. Discussion Paper for the Governing Algorithms conference, NYU, May 16-17, 2013. Available at SSRN: http://ssrn.com/abstract=2245322 or http://dx.doi.org/10.2139/ssrn.2245322
[15] Cardon (2013), p. 11.
[16] Benkler, Yochai (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press. (pp. 33-35).
[17] Geiger, Stuart (2009). “Does Habermas Understand the Internet? The Algorithmic Construction of the Blogo/Public Sphere”. Gnovis: A Journal of Communication, Culture and Technology, 10 (1).
[18] Cardon (2013), p. 11.
[19] Smith, Dan (2013). “Google: Gatekeeper of the Internet’s Grey Area”. The Telegraph, June 10, 2013. Available at http://www.telegraph.co.uk/sponsored/technology/technology-trends/101039…
[20] Masnick, M. (2008). “Google As Benevolent Dictator: The Gatekeeper and the Data Collector”. TechDirt, December 2008. Available at http://www.techdirt.com/articles/20081201/0119292980.shtml
[21] Wu, Tim (2010). The Master Switch: The Rise and Fall of Information Empires. Random House Digital, pp. 279-280.
[22] This section is partly based on an article I wrote in French in March 2012: Musiani, Francesca (2012). “‘Bienvenue sur votre Amazon’: les systèmes de recommandation d’ouvrages”, Labs Hadopi. Available at http://labs.hadopi.fr/actualites/bienvenue-sur-votre-amazon-les-systemes…
[23] Benhamou, Françoise (2012). “3e étape de la stratégie verticale d’Amazon”. Blog L’Eco(nomie) des Livres, October 24, 2012. Available at http://www.livreshebdo.fr/weblog/l-eco%28nomie%29-des-livres-24/776.aspx
[24] Lemaire, Alexandre (2011). “Madame Machine, pouvez-vous me conseiller un bon livre? Les nouveaux outils Web de recommandation de lectures”. Association des Bibliothécaires de France, June 27, 2011. Available at http://bibliolab.fr/cms/content/les-nouveaux-outils-web-de-recommandation
[25] Linden, Greg, Brent Smith & Jeremy York (2003). “Amazon.com Recommendations: Item-to-Item Collaborative Filtering”. IEEE Internet Computing, 7 (1): 76-80. Available at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1167344&userType…
[26] http://en.wikipedia.org/wiki/Affinity_analysis
[27] Hardt, Moritz (2013). “Occupy Algorithms: Will Algorithms Serve The 99%?” Response Paper for the Governing Algorithms Conference, NYU, May 17, 2013. Available at http://governingalgorithms.org/wp-content/uploads/2013/05/2-response-hardt.pdf
[28] Berry, David (2012). “The relevance of understanding code to international political economy”. International Politics, 49: 277–296.
[29] E.g. BBC News (2011). “Disappearing tycoon Souter blames Google”, September 12, 2011. Available at http://www.bbc.co.uk/news/technology-14884717
[30] Barocas, Hood & Ziewitz (2013), see supra note 14.
[31] Ibid.