
Facebook comes to Tor

An interesting article published in Le Monde last November:

Martin Untersinger writes that "Facebook announced last week the launch of a version of its website accessible only through the Tor network. Connected to this network, internet users can now reach Facebook at the address https://facebookcorewwwi.onion/."

Read the rest of the article on Le Monde's website.

Francesca Musiani

CIFRE doctoral fellowship offer, 'Law and Technologies'

CIFRE Doctoral Fellowship Offer

2014-2017

Law and Technologies

The CERSA research laboratory (Centre de recherche et d'étude en sciences administratives et politiques), a joint unit of Université Paris 2 and CNRS, 'Governance, Law & Information Technologies' research axis,

is seeking a student interested in a CIFRE doctoral fellowship in Law (public or private law).

The thesis will address an entirely new subject: the legal questions raised by the introduction of the 'digital mock-up' (INSPIRE Directive) into the construction and public works sector (BIM). Concretely, observation and analysis will draw on platforms for sharing information (texts, images, plans, etc.) among the different actors of an architecture and/or public works project.

The thesis will be supervised, and its working environment provided, jointly by the legal departments of construction companies and by the CNRS teams cooperating with them within the national MINND project.

This offer is aimed at curious students interested in innovative subjects; it does not require technical expertise. Collective work and research missions at the European level are planned.

Recruitment: November 2014

Information: 06 86 67 34 30 or 01 42 34 58 80 (CERSA secretariat)

Interviews based on CV and cover letter: mid-October 2014

Francesca Musiani

Programme, ADAM Final Symposium

« Reclaiming the Internet » with distributed architectures: rights, technologies, practices, innovation

Final Symposium of the ADAM project (October 2-3, 2014, MINES ParisTech, 60 boulevard Saint-Michel, 75006 Paris)

 

Thursday, October 2

 

10-10:30am. Arrival of participants/Registration

 

10:30-10:45am. Introduction/Welcome: Cécile Méadel, Alexandre Mallard & Francesca Musiani (CSI MINES ParisTech)

 

10:45-11:30am. Keynote #1: Dominique Boullier (SciencesPo). Cosmopolitical network architectures

 

11:30am-1:30pm. Session #1: “Case Studies in Decentralization”

Nicolas Bertrand (Utopia/IRIT) & Julien Rabier (FFDN). Introducing a new framework for digital cinema transport: The DCP Bay

Nick Lambert & Benjamin Bollen (Maidsafe.net). The SAFE Network, a new, decentralized Internet

Jean-Christophe Plantin (University of Michigan/Université de Technologie de Compiègne). ‘Unicorns exist, but only in the Google office’: promises and perils of web data for research

Maya Bacache & Julia Cagé (Télécom ParisTech). Pair-à-Pair: les véritables enjeux économiques [Peer-to-Peer: the real economic stakes]

Discussant: Ksenia Ermoshina (CSI MINES ParisTech)

 

1:30-2:30pm. Lunch (on site, provided)

 

2:45-3:30pm: Keynote #2: Niva Elkin-Koren (University of Haifa). Beyond Design: The Role of Law in Distributed Architectures

 

3:30-5:30pm. Session #2: “Decentralization: ‘Code is Law’ Revisited?”

Argyro Karanasiou (Bournemouth University). Law Encoded: Towards a Free Speech Policy Model Based on Decentralised Architecture

Mélanie Dulong de Rosnay (Institute for Communication Sciences, CNRS, & LSE). Peer-to-peer as a design principle for law: distribute the law

Primavera De Filippi (CERSA CNRS & Berkman Center). Ethereum: the quest towards a decentralized social system – when ‘dry code’ meets ‘wet code’

Roberto Caso & Federica Giovanella (Università di Trento). Liability issues in Wireless Community Networks

Discussant: Danièle Bourcier (CERSA CNRS)

 

5:30-6:15pm: Keynote #3: Panayotis Antoniadis (ETH Zurich). Local networks for local interactions: four reasons why and one way forward

 

 

Friday, October 3

 

9:30-11:30am. Session #3: “Futures of Decentralization”

Paris Chrysos (ISC Paris, Mines-Télécom). Can the Internet become distributed again? The limits of network approaches

Christian Sandvig (University of Michigan), Paul N. Edwards (University of Michigan), Jean-Christophe Plantin (University of Michigan, Université de Technologie de Compiègne), Carl Lagoze (University of Michigan). Histories of future networks: Exit, voice and loyalty in alternative infrastructures

Jeffrey Andreoni (Nottingham Trent University). Digital Gerrymandering: how wireless communities will redraw social, political and geographic boundaries

Annie Gentès & François Huguet (Télécom ParisTech). Translocal devices-as-infrastructures-networks, alternative practices, tactical networks: toward resilient media or empowerment tools?

Discussant: Valérie Schafer (ISCC/CNRS)

 

11:30am-12:15pm. Keynote #4: Vincent Toubiana (CNIL). Is a decentralized Internet better for privacy?

 

12:15-1:15pm. Lunch (on site, provided)

 

1:15-3:15pm. Session #4: “The Decentralization of Everything?”

Darryl Farber (Pennsylvania State University). Architecting Evolving Sociotechnical Interdependent Infrastructure Systems

Sarah Gold (Central Saint Martins). Alternet Rules

Graham Meikle (University of Westminster). Distributed Citizenship and Social Media

Harry Halpin (W3C/MIT) & Alexandre Monnin (INRIA). The Decentralization of Knowledge

Discussant: Eric Dagiral (Université Paris Descartes)

 

3:15-4pm. Keynote #5: Geert Lovink (HvA Institute of Network Cultures, Amsterdam, NL). Social Media Alternatives Before and After Snowden

 

4-4:15pm. Conclusions & Wrap-Up: Cécile Méadel, Alexandre Mallard & Francesca Musiani (CSI MINES ParisTech)

 

Attendance at the symposium is free of charge. Please inform the organizers of your attendance (for one or both days) by writing to francesca (dot) musiani (at) gmail (dot) com.

Francesca Musiani

Peer production, money and value

Issue 4 of the Journal of Peer Production, on the theme of money and value, has just been published. It is coordinated by Nathaniel Tkacz, Nicolas Mendoza and myself, and includes two contributions by members of the ADAM project. Alexandre Mallard, Cécile Méadel and I explore the performative role of expert knowledge in the construction of "distributed trust" for the decentralised electronic currency system Bitcoin, while Primavera De Filippi, in collaboration with Miguel Said Vieira of the University of São Paulo, proposes a licensing system better suited to the economy of the commons.

The full issue is freely available here. A few excerpts from the introduction:

“Peer production has often been described as a ‘third mode of production’, irreducible to State or market imperatives. The creation and organisation of peer projects allegedly take place without ‘managerial commands’ or ‘price signals’, without recourse to bureaucratic apparatuses or the logic of competitive markets. Instead, and mimicking the technical architectures upon which many peer projects are based, production is described as non-hierarchical and decentralised. Group dynamics are also commonly described as ‘flat’ and this is captured, of course, in the very notion of the ‘peer’. When tested against the realities of actual projects, however, such early conceptions of peer production are, at best, in need of further elaboration and qualification. At worst, they were always off the mark. Hierarchies persist in peer production, as do competition and market-like arrangements. But perhaps it is the qualities of these new hierarchies and competitive forms that are novel. After all, liberal democracies, dictatorships, corporations, local sports clubs, and families all have their hierarchies but none is reducible to the others.

In the context of earlier understandings of peer production, the question of value and even more of currency has been rather marginal. This issue of the Journal of Peer Production (JoPP) demonstrates that theories and practices of value and currency are moving into the foreground. There has been a veritable explosion of experiments with currency and also a continuing metrics creep in many peer projects and beyond. More fundamentally, though, the question of value and how it circulates through a collective body is central to any mature theory of social organisation. In sociological and economic thought, the historical distinction between ‘values’ and ‘value’ split the non- or at least less-easily-calculable from the seemingly cold and objective world of calculation and universal commensurability. This ‘old settlement’, which never really held, nevertheless helped demarcate the economic from the social. But the intensification and extension of computational processes, manifested most clearly in the rise of big data, has led to a proliferation of bottom-up procedures to formalise (social) values, rendering them easily calculable and lending order to the decentralised world of peers, but without necessarily replicating capitalistic calculations of value. […]

In this issue we seek to advance the exploration and understanding of how the themes of value and currency intersect peer production. This objective presented a double challenge for the contributors and for us as editors. Indeed, the scholarly articles included in this issue have attempted to provide analytical and theoretically grounded investigations of a world that is, on the one hand, often developing more quickly than the academic publication process can account for in a timely way, and on the other hand, mostly shaped by expert-practitioners. At the same time, these contributions seek to engage not only with scholars of related issues within the academic community, but also with practitioners themselves — who, on their end, have demonstrated a strong interest in this dialogue, as the invited comments section shows.”

Francesca Musiani

Call for Papers, Final Symposium of the ADAM project

Call for Papers, Final Symposium of the ADAM project

October 2-3, 2014, MINES ParisTech (Paris, France)


« Reclaiming the Internet » with distributed architectures:

rights, technologies, practices, innovation

The research program ADAM (Distributed Architectures and Multimedia Applications, adam.hypotheses.org) (1) studies the technical, political, social, socio-cultural and legal implications of distributed network architectures. This term designates a type of network with several defining features: a network made of multiple computing units, capable of achieving its objective by sharing resources and tasks, able to tolerate the failure of individual nodes – and thus not subject to single points of failure – and able to scale flexibly. Beyond this simplified operational definition, the choice by developers and engineers of Internet-based services to build these architectures, instead of today's widespread centralised models, has several implications for the daily use of online services and for the rights of Internet users.
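
The operational definition above can be made concrete with a toy sketch. The following Python snippet is purely illustrative (it is not ADAM project code, and every name in it is hypothetical): it shows how replicating data across several equal nodes lets a system tolerate the failure of any individual node, avoiding a single point of failure.

```python
# Toy model of a distributed architecture: data replicated across equal
# nodes survives the failure of any individual node.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.store = {}   # this node's share of the stored data
        self.alive = True

def put(nodes: list[Node], key: str, value: str, copies: int = 3) -> None:
    """Store the value on up to `copies` distinct live nodes."""
    for node in [n for n in nodes if n.alive][:copies]:
        node.store[key] = value

def get(nodes: list[Node], key: str) -> str:
    """Any surviving replica can answer: no single point of failure."""
    for node in nodes:
        if node.alive and key in node.store:
            return node.store[key]
    raise KeyError(key)

nodes = [Node(f"n{i}") for i in range(5)]
put(nodes, "doc", "contents")
nodes[0].alive = False                    # an individual node fails...
assert get(nodes, "doc") == "contents"    # ...but the object is still served
```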

The final symposium of the ADAM project, open to disciplines as varied as science and technology studies, information and communication sciences, economics, law and network engineering, aims to investigate these implications in light of a central question. Given the increasingly evident centralisation of the Internet and the surveillance excesses it appears to allow, what are the place and the role of the (re-)decentralisation of networks' technical architectures – at a time when infringements upon privacy and pervasive surveillance practices are often embedded in these very architectures? Are distribution and decentralisation of network architectures the way, as Philippe Aigrain suggests (2), to "reclaim" Internet services – instruments of 'technical governance' able to reconnect with the original organisation of cyberspace?

Papers presented at the symposium may focus on one or more of the following four axes, although we welcome proposals that do not fully fit within them.

  • “Back to the origins”? Past and present of distributed architectures. The initial Internet model called for a decentralised and symmetrical organisation – in terms of bandwidth usage, but also of contacts, user relations and machine-to-machine communication. In the 1990s, the commercial explosion of the Internet brought about important changes, exposing the shortcomings – for the network’s usability and its very functioning – of a model presupposing the active cooperation of all network members. Today, in a world of Internet services where flows and data converge towards a few giants, experiments with distributed architectures are seen as a “return to the origins”. But is it really about the dominance of one organisational principle at different times in history – or is there a co-existence of different levels of resource centralisation, hierarchy of powers, and cooperation among Internet users over time? Are we indeed witnessing a “war of the worlds” of which the recent tensions around surveillance are the most recent illustration?
  • (Re-)decentralisation, a sustainable alternative for the Internet ‘ecology’? The technical features of distributed architectures (direct connections, resistance to failure) and their ability to support the emergence of organisational, social and legal principles (privacy, security, recognition of rights) offer new paths for exploring and preserving the Internet’s balance. At the same time, the road towards decentralisation is far from linear. The users behind the P2P nodes can assemble into collectives that are very varied in nature, complexity and underlying motivations. This variety may depend upon modes of aggregation, visibility devices, types of communication tools and envisaged business models (as well as the difficulty of identifying sustainable ones). Having programmed their infrastructures around the idea that most of users’ online activities consist of downloading data and information from clusters of servers, network access providers raise economic objections to P2P models. Finally, developers-turned-entrepreneurs themselves often need to revisit the choice of decentralisation, because of unexpected user practices, the impossibility of making distributed technology “easy” for the public, or the seductive simplicity of centralised infrastructures and economic models.
  • Decentralisation and the distribution of skills, rights, control. How does distributed architecture redefine user skills, rights and capacities for control? How can law support user practices and their diversity, instead of countering them? The decentralisation of Internet services raises several issues at the crossroads of law and technology. What are the differences compared with centralised architectures, non-modifiable by users, where data are stored on clusters of servers exclusively controlled by service providers? From the viewpoint of user empowerment, what are the consequences of introducing encryption, file fragmentation and the sharing of disk space into the technical architecture? While “first-generation” P2P networks affected copyright first and foremost, decentralised Internet-based services prompt us to investigate issues such as the redefinition of notions like creator and distributor, the responsibility of technical intermediaries, and the ‘embeddedness’ of law in technical devices.
  • What communicational models are at stake in decentralised infrastructures and architectures? Distributed Internet services have transformed, and continue to transform, the ways in which actors make sense of their communicational capacities and their responsibilities in information sharing. The user empowerment prompted by several P2P services – increasingly mobile, self-configurable and flexible – opens innovative perspectives for communication infrastructures, their functions and their capacities of mediation among actors. In what ways does this evolution transform data and communication channels? What are the representations of the values subtending these architectures and the relations among their participants, vis-à-vis other Internet services, but also within the spheres of conception, discussion and circulation of these objects? What are the new forms of contribution, and what do they enable in terms of pedagogical practices and shared literacies? Finally, in which ways do distributed infrastructures relate to the notion of an ‘informational common good’?

We invite paper proposals in French and/or English, in the form of a 500-800 word abstract sent to francesca.musiani@mines-paristech.fr. Key dates:

  • Deadline for abstracts: May 15, 2014 (please note that the submission deadline has been extended to June 5, 2014)
  • Notifications of acceptance sent by the Program Committee: June 6, 2014
  • Deadline for full papers: September 15, 2014
  • ADAM final symposium: October 2-3, 2014

We envisage a collective publication originating from the conference and are looking into different possibilities (edited book or special issue of a journal).

ADAM Project Team and Program Committee of the Symposium

Maya Bacache, Département SES, Télécom ParisTech

Danièle Bourcier, CERSA, CNRS

Primavera De Filippi, CERSA, CNRS

Isabelle Demeure, INFRES, Télécom ParisTech

Mélanie Dulong de Rosnay, ISCC, CNRS

Annie Gentès, CoDesign Lab, Télécom ParisTech

François Huguet, CoDesign Lab, Télécom ParisTech

Alexandre Mallard, CSI, MINES ParisTech

Cécile Méadel, CSI, MINES ParisTech

Francesca Musiani, CSI, MINES ParisTech

(1)  Funded by the French National Agency for Research (ANR), CONTINT (Contents and Interactions) Programme

(2)  Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Francesca Musiani

Call for papers, ADAM final symposium

Call for papers for the final symposium of the ADAM project

October 2-3, 2014, MINES ParisTech


Distributed architectures and the reappropriation of the Internet:

rights, techniques, uses, innovation

The research program ADAM – Distributed Architectures and Multimedia Applications (adam.hypotheses.org) (1) – studies the technical, political, social, socio-cultural and legal implications of distributed network architectures. This term designates a type of network with a number of characteristics: a network composed of multiple computing units, capable of achieving its objective by sharing resources and tasks, tolerant of the failure of individual nodes and thus without a single point of failure, and capable of scaling flexibly. Beyond this simplified technical definition, the choice, on the part of engineers and designers of Internet services, to develop such architectures rather than the centralised models that are widespread today has numerous implications for the everyday use of online services, as well as for the rights of Internet users.

The final symposium of the ADAM program, open to disciplines as varied as science and technology studies, information and communication sciences, economics, law and network engineering, sets out to explore these issues in light of a central question. With the increasingly overt centralisation of the Internet and the surveillance excesses it allows, what place and what role does the (re-)decentralisation of networks' technical architectures play, at a historical moment when infringements of privacy and surveillance practices are very often embedded in the technical architecture itself? Are distributed and decentralised network architectures, as Philippe Aigrain suggests (2), opportunities for the reappropriation of Internet services – technical and governance tools capable of reconnecting with the original organisation of cyberspace?

Papers may address one or more of the following four axes of reflection, though they need not be confined to them.

  • A “return to the origins”? Past and present of distributed architectures. The initial model of the Internet assumed a decentralised and symmetrical organisation – not only in terms of bandwidth consumption, but also in terms of contact, relations and communication between machines. In the mid-1990s, the commercial explosion of the Internet radically changed its shape, soon revealing the limits – from the standpoint of the network’s functioning and its “usability” for Internet users – of a model that presupposed the active cooperation of all network members. Today, in a world of Internet services where flows and data converge towards a few giants, experiments with distributed architectures are seen as a return to the origins. Would it be more relevant to consider these dynamics as a generic co-existence of different levels of resource centralisation, hierarchies of power and cooperation among Internet users, rather than as the domination of one organisational principle at different moments in history? Are we dealing with a “war of the worlds”, of which the tensions around surveillance are only the most recent illustration?
  • (Re-)decentralisation, a sustainable alternative for the Internet ecology? The technical characteristics of distributed architectures (efficiency, direct connections, fault tolerance) and their capacity to promote the emergence of organisational, social and legal principles (security, privacy, recognition of rights) offer new avenues for exploring and maintaining the balances within the Internet ecology. At the same time, the road towards re-decentralisation is far from linear. Depending on the modes of aggregation, the devices for achieving visibility, the communication tools provided and the nature of the business models involved (witness the difficulty of devising sustainable ones), the users who constitute the nodes of P2P systems can produce collectives that differ greatly in their nature, their extension and their motivation to invest themselves in distributed systems. Several network access providers, having built their systems on the idea that users spend most of their time online downloading data and information from central servers, raise economic objections to P2P models. Finally, developers themselves must sometimes, when turning to entrepreneurship, revisit their choice of decentralisation, faced with unexpected appropriations by users, the impossibility of making distributed technology easy for the general public, or the simplicity – seductive for the “controller” – of centralised business models and infrastructures.
  • Decentralisation and new distributions of competences, permissions, rights. How does distributed architecture redefine users’ competences, capacity for control, and rights? How can law accompany practices and their diversity, instead of countering them? The decentralisation of Internet services raises numerous questions at the crossroads of law and technology. What are the differences compared with centralised architectures, which users cannot modify and in which data are stored on central servers owned by service providers? What are the implications of the physical locations of content, of specific types of licences, of data collection and management procedures? In what way does the introduction into the technical architecture of elements such as encryption, file fragmentation and the pooling of disk space have consequences for the empowerment of users and the growth of their competences? While it was above all copyright that was first upended by “first-generation” P2P networks, decentralised Internet services lead us to focus on questions such as the redefinition of the notions of creator and distributor, the responsibility of technical intermediaries, and the inscription of the enforcement of law within technical devices.
  • What communicational models are at stake in these new infrastructures and architectures? Distributed Internet services have transformed, and still transform today, the way actors conceive of their communicational capacities and their responsibilities in information sharing. The user empowerment opened up by many of these peer-to-peer services – which increasingly tend to become mobile, self-configurable and flexible in their uses and in the functions they offer – opens innovative perspectives on the functionalities of communication infrastructures and their capacities of mediation between actors. How does this evolution transform data and information channels? How are the values underpinning these architectures, and the relations among their participants, represented vis-à-vis other Internet services, but also within the spheres in which these objects are conceived, discussed and circulated? What are the new forms of contribution, and what do they imply for pedagogical practices and shared literacies? Finally, in what ways do these distributed infrastructures interrogate the notion of the informational common good?

We invite paper proposals in French or English, in the form of a 500-800 word abstract, sent to francesca.musiani@mines-paristech.fr. The deadline for proposals is June 5, 2014, and the program committee will send notifications of acceptance by the end of June 2014. Full papers must reach us by September 15, 2014 (the symposium taking place in Paris on October 2-3, 2014). Possibilities for a collective publication (edited book or special issue of a journal) will be considered afterwards.

ADAM project team and program committee

Maya Bacache, Département SES, Télécom ParisTech

Danièle Bourcier, CERSA, CNRS

Primavera De Filippi, CERSA, CNRS

Isabelle Demeure, INFRES, Télécom ParisTech

Mélanie Dulong de Rosnay, ISCC, CNRS

Annie Gentès, CoDesign Lab, Télécom ParisTech

François Huguet, CoDesign Lab, Télécom ParisTech

Alexandre Mallard, CSI, MINES ParisTech

Cécile Méadel, CSI, MINES ParisTech

Francesca Musiani, CSI, MINES ParisTech

(1)  Funded by the ANR (CONTINT Programme – Contents and Interactions)

(2)  Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Francesca Musiani

The story of a 'P2P cloud'

While new actors such as Bitcloud and Maidsafe are arriving with P2P solutions on the very crowded cloud scene, here is my latest article for the Internet Policy Review, which tells the story of a 'P2P cloud' from a few years ago.

 

Decentralised internet governance: the case of a ‘peer-to-peer cloud’

13 February 2014

The architecture of a networked system is its underlying technical structure – its logical and structural layout. In my last article for the Internet Policy Review (Musiani, 2013a), I built upon the work of several authors in science and technology studies, economics, law and computer science (e.g. Star, 1999; van Schewick, 2010; Elkin-Koren, 2006; Agre, 2003) to discuss the idea of network architecture as internet governance. I suggested that changing the design of the networks subtending internet-based services – and of the global internet itself – affects the politics of the network of networks: the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, and fair competition between actors of the internet market.

This article retraces the early stages of development of a ‘peer-to-peer cloud’ storage service, Drizzle, with the aim of providing an example of decentralised network architecture as internet governance ‘in practice’. More specifically, it sheds light on how changes in the architectural design of networked services affect the circulation, storage and privacy of data, as well as the rights and responsibilities that different actors exert over them. This article is not meant to be a compendium of the implications of the decentralisation option in building a cloud platform, which entails a number of technical complications as well as advantages, including how to ensure the reliability and redundancy of data and the soundness of the encryption mechanism. However, the privacy-related design choices described here are some of the many possible ways to illustrate the extent to which changes in network architecture are, indeed, changes to network governance.

Decentralising the cloud

In early 2007, when Drizzle first sees the light, the industry of online data storage – a service allowing users to store, save and share data on one or several terminals connected to the internet – has “never felt better” (Guerrini, 2010). Google, Amazon, Microsoft and Oracle, to name but a few, offer their storage platforms, each with its specificities and one common denominator: the ‘cloud’. According to this model, the service provider is in charge of both the physical infrastructure and the software. Thus, the service provider hosts applications and data at once – in a location, and according to modalities, unknown or at best ambiguous to the user (Mowbray, 2009). So-called ‘server farms’ proliferate to support and manage this increasing remoteness of data from users and users’ terminals.

In this context, Drizzle [1], a small start-up founded by two developers and computer programmers whom we will call Dietrich and Kurt, makes an unusual foundational decision: its cloud storage platform will mainly be composed – alongside more ‘classical’ data centres – of portions of the users’ hard disks, directly linked in a peer-to-peer, decentralised network architecture (Schollmeier, 2001; Taylor & Harrison, 2009). This choice entails a number of distinctive features. On the one hand, it requires a technical process defined as “encrypted fragmentation” [2], which consists in encrypting locally – on the user’s computer, by means of a previously installed Drizzle P2P client – the content that will be stored. The content is then divided into fragments, duplicated to ensure redundancy, and spread out across the network. In return, users must agree to ‘pool’ – put at the disposal of other users and their computers – the computational and material resources necessary for the operations related to the storage of content. As the service’s terms of use point out:

“The user acknowledges that Drizzle may use processor, bandwidth and hard disk (or other storage media) of his computer for the purpose of storing, encrypting, caching and serving data that has been stored in Drizzle by the user or any other users. The user can specify the extent to which local resources are used in the settings of the Drizzle client software. The amount of resources the user is allowed to use in Drizzle depends on the amount of local resources the user is contributing to Drizzle.”

The interdependent and egalitarian model subtending the platform will allow its users to barter their local disk space for an equivalent space in the decentralised cloud, thereby improving the quality of this storage space, which will become permanently available and accessible. In shaping their decentralised storage service, the developers of Drizzle carry out a double experiment: with the frontier between centralisation and decentralisation, and with sharing modalities that blend peer-to-peer, social networking and the cloud.
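
The process described above – client-side encryption, fragmentation, duplication for redundancy, dispersal to peers – can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not Drizzle's actual code: the fragment size, the replication factor and the use of the cryptography library's Fernet scheme are all assumptions.

```python
# Sketch of "encrypted fragmentation": encrypt locally, split the
# ciphertext, replicate each fragment, and assign fragments to peers.
from cryptography.fernet import Fernet  # pip install cryptography

FRAGMENT_SIZE = 256 * 1024  # 256 KiB per fragment (assumed)
REPLICATION = 3             # copies of each fragment (assumed)

def encrypt_and_fragment(data: bytes, key: bytes) -> list[bytes]:
    """Encrypt on the user's machine, then split the ciphertext."""
    ciphertext = Fernet(key).encrypt(data)
    return [ciphertext[i:i + FRAGMENT_SIZE]
            for i in range(0, len(ciphertext), FRAGMENT_SIZE)]

def disperse(fragments: list[bytes], peers: list[str]) -> dict[int, tuple]:
    """Assign each fragment to REPLICATION distinct peers, round-robin."""
    placement = {}
    for idx, frag in enumerate(fragments):
        targets = [peers[(idx + r) % len(peers)] for r in range(REPLICATION)]
        placement[idx] = (frag, targets)
    return placement

key = Fernet.generate_key()  # in Drizzle, tied to the user's password
fragments = encrypt_and_fragment(b"file contents" * 100_000, key)
plan = disperse(fragments, ["peerA", "peerB", "peerC", "peerD"])
```

In such a design the provider only ever handles opaque fragments; reassembly and decryption require the key, which stays on the client.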

Peer-to-peer storage: the cloud meets privacy by design

“In 2007, it was all starting to get social,” Dietrich recalls three years later. Indeed, social media, Facebook and Twitter in particular, were at that moment entering the daily life of millions of internet users in an increasingly pervasive way. Drizzle’s first steps are taken in a research and development community that tries to counter the social media “explosion” by developing P2P systems as an alternative to a variety of internet-based services, including social networks, structured in a centralised manner (Le Fessant, 2009; Musiani, 2010a; Musiani, 2010b).

In 2007, Facebook had been in existence for three years. Millions of users had joined it, thereby contributing to the massive success of web-based services that allow individuals to build a public or semi-public profile within a system, define a list of other users with whom to interact, and view and browse the list of their own and others’ connections made in ‘public mode’ within the system (Boyd & Ellison, 2007). In parallel to their spectacular growth, social networks raise vibrant discussions and controversies, both within the expert community and among the general public. The ways in which social networking service providers leverage personal information and user data remain controversial, since they sometimes allow external applications to access these data, while on other occasions they pursue direct commercial purposes (Boyd, 2008). The rise of the so-called cloud does nothing to mitigate the impression of risk for informed users, as applications and data are increasingly hosted in locations, and in ways, unknown or at best ambiguous. User exposure on social networking sites and on cloud-based services positions privacy, more than ever, at the foreground of discussions.

In this context, several developers – including Drizzle’s – identify in a peer-to-peer network architecture a possible way of approaching the protection of personal data privacy from a different angle: through the relocation and “re-appropriation” of data within the terminals of users, who would be able to host their own profiles and the information they contain (see also Moglen, 2010; Aigrain, 2010, 2011).

In the development of Drizzle, a conception of privacy and confidentiality of personal data that is conceived of and enforced via technical means – called privacy by design (Cavoukian, 2010; Schaar, 2010) – is at work. This conception of privacy is defined by means of the constraints and the opportunities linked to the treatment and the location of data, according to the different moments and the variety of operations taking place within the system. In particular, the confidentiality of data (personal data as well as the content stored in the P2P cloud) is defined by a peculiar role and enhanced features attributed to the password that identifies the user vis-à-vis the network, and by the implementation of the resource allocation system on which Drizzle is based.

Password and user responsibility

In Dietrich’s intention, the role of the user-selected and user-generated password in the Drizzle system should have “stri[cken] the user as soon as he had access to the system for the very first time.” Indeed, the virtual form served to users upon subscription may come as a surprise: it informs them that

“We do not know your password as it never leaves your computer. Please, do not forget your password and use, if needed, your password hint.”

The status of the password is thus negotiated, beyond its usual meaning of unique identifier vis-à-vis the system, to define, detail and legitimise the process of local encryption and decryption of data within the Drizzle system. This feature comes to embody the specificity of Drizzle’s promise of security and privacy, and of users’ trust: the password becomes the symbol and the graphical representation of the ‘local’ dimension of the encryption process, as it never leaves the computer of the user who created it. The operations, for the most part automatically managed, that are linked to the protection of personal data are thus hosted on the terminals of users. This entails a modification of the user’s role within the service’s architecture: a node among equal nodes, the user’s machine becomes a server itself, instead of a mere starting point and end point for operations otherwise conducted on another machine or group of machines.
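
What "the password never leaves your computer" means in practice can be sketched as follows: the encryption key is derived locally from the password, and at most a separately derived verifier is sent to the provider, from which neither the password nor the key can be recovered. This is a hypothetical sketch – the key-derivation function, its parameters and the token scheme are assumptions, not Drizzle's documented design.

```python
# Client-side key handling: the password is stretched into an encryption
# key locally; only a differently-derived login token could ever be sent.
import base64
import hashlib
import os

def derive_local_key(password: str, salt: bytes) -> bytes:
    """Key used to encrypt content; computed and kept on the client."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def login_token(password: str, salt: bytes) -> bytes:
    """A verifier the provider could store instead of the password;
    derived in a different context, it is useless for decryption."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt + b"auth", 200_000)

salt = os.urandom(16)                            # stored with the account
key = derive_local_key("correct horse", salt)    # never transmitted
token = login_token("correct horse", salt)       # safe to send to the server
print(base64.b64encode(key) != base64.b64encode(token))  # True: distinct values
```

One consequence follows directly from this design and is discussed below: a provider that never learns the password or the key also cannot recover either of them for a user who forgets the password.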

Through the attribution of this status to the password, the developers of Drizzle also propose an alternative to the usual balance between the rights users exert over their own data and the rights the service provider acquires over these same data – a balance that is usually heavily tilted towards the provider. However, this reconfiguration of the balance of rights comes with a trade-off. As the password stays with the user and is not sent to the servers controlled by the firm, the latter cannot retrieve the password if needed. Thus, users do not only see their privacy reinforced; at the same time, and for the same reasons, their responsibility for their actions is augmented – while the service provider renounces some of its control over the content that circulates through the service it manages. The meaning of this ‘renunciation’, Dietrich explains, is double: on the one hand, the Drizzle team wishes to make evident – almost to translate into a specific object the user can easily relate to – the ‘obscure’ and unfamiliar process of client-side encryption, which is an ongoing source of controversies and perplexities. On the other hand, it is also a matter of Drizzle’s business model: the more the firm knows about its users, the more it is required to subject them to regular surveillance and control – and this requires an investment of material resources and time that, in its first phases of existence, the firm does not have:

“If we can know what is in your account, starting with your password, we have heightened obligations to police the content and to make sure nobody can eavesdrop on the traffic.”

Data privacy and resource allocation

Another aspect that contributes to defining rights and responsibilities is the specification of the conditions under which the computational resources provided by the different computers participating in the system are allocated and managed.

As briefly described above, the choice to decentralise the platform makes it necessary, due to the very particular status of the resources used by the system, to detail several aspects in the terms of use: the role of computers belonging to users, the types of resources that Drizzle is able to use, and their purpose. It also becomes necessary to detail the extent to which users are able to decide – and communicate to their P2P client, thus to the system – the maximum quantity of local resources that the rest of the network/storage system can use. Finally, it is necessary to define the articulation between the availability of resources and the different operations to which these resources will be put within the system.

The articulation of these two aspects has important implications for the confidentiality of data circulating in the system (both personal information and content stored by users). Several users, giving feedback to the developers in the early stages of the system, warn that the resource allocation process could amount to a form of ‘surveillance’ or ‘monitoring’ of these resources – one that could potentially be highly automated, invasive and privacy-threatening.

After a discussion between these concerned users and the developers, via the Drizzle forum, two modifications were applied to the terms of use: while the general terms now state that “resources are allocated and monitored in accordance with the Privacy Policy,” the privacy policy itself details the extent of automation and pervasiveness of the system that allocates and monitors resources:

“In order to ensure a fair allocation of resources within Drizzle, various data about the computers participating in the Drizzle network is collected. This data includes their IP addresses, disposability and the amount of resources they are contributing (e.g. bandwidth, memory). […] Drizzle keeps track of how much storage space you have used and earned […] Drizzle collects statistical information for the purposes of monitoring, debugging and improving the system. This includes automatically generated problem, performance, network analysis and general usage reports, as well as logs of the connections and queries made to Drizzle’s servers (including the involved IP addresses), as well as analytical data about the usage of the Drizzle website. However, none of this data contains information from your private or shared files.”

Thus, the correct functioning of the allocation system indeed implies the gathering of several pieces of information concerning the material, computational and memory resources pooled by each participating computer. The pooling of the storage equipment (i.e., users’ local resources, made available by each of them) is necessary for the system to work; however, it is not meant to imply an intrusion into the stored content itself, which remains protected by the local safeguard of the password and the encryption of content. The collection of information, the developers of Drizzle affirm, has the purpose of automatically computing the storage space made available by each user – and, as we have analysed elsewhere (Musiani, 2013b), of establishing the extent to which each user can reclaim her place in the ‘P2P cloud’, an equivalent storage space in the network of participating users.
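
The kind of accounting just described – per-node metadata (IP address, availability, contributed resources) collected to compute earned storage without reading file contents – might look like the following sketch. The record fields and the uptime-discounted quota formula are illustrative assumptions, not Drizzle's actual bookkeeping.

```python
# Resource accounting over node metadata only: file contents never appear.
from dataclasses import dataclass

@dataclass
class NodeReport:
    ip_address: str      # collected for connectivity, per the privacy policy
    uptime_ratio: float  # 'disposability': fraction of time the node is online
    shared_bytes: int    # disk space the user pools into the P2P cloud
    used_bytes: int      # storage the user has already consumed

def earned_bytes(report: NodeReport) -> int:
    """Storage a node may claim: what it shares, discounted by its uptime."""
    return int(report.shared_bytes * report.uptime_ratio)

def remaining_quota(report: NodeReport) -> int:
    return max(earned_bytes(report) - report.used_bytes, 0)

node = NodeReport("203.0.113.7", uptime_ratio=0.8,
                  shared_bytes=50 * 10**9, used_bytes=10 * 10**9)
print(remaining_quota(node))  # 30 GB still claimable in this sketch
```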

Conclusions

The development of Drizzle’s ‘peer-to-peer cloud’ makes it possible to observe how changes in the architectural design of networked services affect data circulation, storage and privacy – and, in doing so, reconfigure the articulation of ‘locality’ and ‘centrality’ in the network (Akrich, 1989: 39), suggesting a model of decentralised governance “by architectural design” for the service.

Ultimately, decentralising the cloud leads to a reformulation and ‘re-balancing’ of the relationship between the user and the service provider. The local, client-side encryption of data first, and its fragmentation afterwards – both operations conducted within the P2P client installed by the user, and taking place entirely on his terminal – are proposed by Drizzle as evidence that the firm, in its own words, “does not even have the technical means” to betray the trust of users.

In particular, this conception of privacy by design takes shape around the password, which remains locally stored in the user’s P2P client and unknown to the service provider. In doing so, it also becomes a form of disengagement of the service provider with respect to security issues, its ‘self-release’ from responsibility: a detail whose importance may seem small at first, but which eventually leads to changes in the forms of technical solidarity (Dodier, 1995) established between users and service provider.

For the purposes of this article, I have focused in particular on aspects such as the strengthening of privacy by design and the increased responsibility attributed to the user, arguably among the “positive” aspects of a peer-to-peer cloud. However, it should be pointed out that an important part of the decentralisation choice made by the Drizzle team has involved assessing its possible downsides: reliability and redundancy of data, slow download performance, the soundness of the encryption mechanism, and – no less important – the perception of these issues by users. A heated discussion among developers, and between developers and some pioneer users, also occurred on the topic of the ‘legality’ of the system, especially in jurisdictions such as that of the United States. All of these are complex issues and most of them could not be accounted for here – this has been done in a much more detailed manner elsewhere (Musiani, 2013b: 123-173), by analysing, with tools derived from the field of science and technology studies (STS), a number of socio-technical controversies related to the development of the platform. However, the privacy-related dynamics presented here are a few of the several possible ways to flesh out the extent to which changes in network architecture are, indeed, changes in network governance.

The example of Drizzle has illustrated in practice the implications of ‘architectures as governance’ introduced in the previous article: the repartition of competences and responsibilities between service providers, content producers, users and network operators; the articulation between the individual and the collective; the shaping of user rights and ‘community’ norms; and the definition of ‘contributor’ in internet-based services. In light of Edward Snowden’s leaks about certain surveillance practices of the US National Security Agency, the potential of architectural choices – choices that would make the internet less centralised and more distributed – as a means of de facto privacy advocacy and promotion of decentralised governance has never been more evident. The goal, as The New Yorker recently reported, “isn’t to end surveillance, but to make it harder to do en masse” (Kopstein, 2013).

References

Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.

Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Aigrain, P. (2011). “Another Narrative. Addressing Research Challenges and Other Open Issues session.” PARADISO Conference, Brussels, 7–9 Sept. 2011.

Akrich, M. (1989). “De la position relative des localités. Systèmes électriques et réseaux socio-politiques.” Cahiers du Centre d’Études pour l’Emploi, 32 : 117-166.

Boyd, D. (2008). “Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence.” Convergence, 14 (1).

Boyd, D. & Ellison, N. (2007). “Social Network Sites: Definition, History, and Scholarship.” Journal of Computer-Mediated Communication, 13 (1).

Callon, M., Lascoumes, P. & Barthe, Y. (2001). Agir dans un monde incertain. Essai sur la démocratie technique, Paris: Seuil.

Cavoukian, A. (ed.) (2010). Special Issue: Privacy by Design: The Next Generation in the Evolution of Privacy. Identity in the Information Society, 3(2).

Dodier, N. (1995). Les Hommes et les Machines. La conscience collective dans les sociétés technicisées. Paris: Métailié.

Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.

Guerrini, Y. (2010). “Wuala : le P2P comme solution de stockage.” http://www.presence-pc.com/actualite/Wuala-stockage-cloud-P2P-39035/#xtor=RSS-11

Kopstein, J. (2013). “The mission to de-centralize the Internet.” The New Yorker, 13 December 2013, http://www.newyorker.com/online/blogs/elements/2013/12/the-mission-to-decentralize-the-internet.html

Le Fessant, F. (2009). “Les réseaux sociaux au secours des réseaux pair-à-pair.” Défense nationale et sécurité collective, 3 : 29-35.

Moglen, E. (2010). “Freedom in the Cloud: Software Freedom, Privacy and Security for Web 2.0 and Cloud Computing.” Keynote, ISOC Meeting, New York Branch, 5 February 2010.

Mowbray, M. (2009). “The Fog over the Grimpen Mire: Cloud Computing and the Law.” SCRIPTed, 6(1): 132-146.

Musiani, F. (2013a). “Network architecture as internet governance.” Internet Policy Review, 24 October 2013, http://policyreview.info/articles/analysis/network-architecture-internet-governance

Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris : Presses des Mines.

Musiani, F. (2012). “Caring About the Plumbing: On the Importance of Architectures in Social Studies of (Peer-to-Peer) Technology.” Journal of Peer Production, 1.

Musiani, F. (2010a). “Ménager le droit à la vie privée, entre anonymat et connaissance de l’identité: les débuts des réseaux sociaux en pair-à-pair.” Terminal, 105: 107-116.

Musiani, F. (2010b). “When Social Links Are Network Links: the Dawn of Peer-to-Peer Social Networks and Its Implications for Privacy.” Observatorio, 4(3), 185-207.

Schaar, P. (2010). “Privacy by Design.” Identity in the Information Society, 3(2): 267-274.

Schollmeier, R. (2001). “A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications.” Proceedings of the First International Conference on Peer-to-Peer Computing, 27–29.

Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.

Taylor, I. & Harrison, A. (2009). From P2P to Web Services and Grids: Evolving Distributed Communities. Second and Expanded Edition. London: Springer-Verlag.

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Vinck, D. (Ed., 2003). Everyday Engineering. An Ethnography of Design and Innovation. Cambridge, MA: The MIT Press.

Footnotes

[1] The name is fictitious (‘light rain’) and recalls the fragmentation and the distribution of data in the system’s storage mechanism. The names of the developers are pseudonyms, as well. I have no direct interest in Drizzle – I use it as a case study of a possible ‘decentralisation of the cloud’.

[2] Unless otherwise noted, citations are derived from in-depth interviews with the developers of Drizzle, conducted within a period of online and ‘live’ ethnography of Drizzle’s development, design and innovation process (see Vinck, 2003) between 2010 and 2011.

Francesca Musiani

A guide to the "discreet technologies" of Internet governance

Readers of the ADAM blog may be interested in my review (in English) of Laura DeNardis's recent book, The Global War for Internet Governance, also available on Amazon's US website.

A Guide to the “Technologically Concealed” in Internet Governance, January 21, 2014

Book Review: Laura DeNardis (2014). The Global War for Internet Governance. New Haven, CT and London : Yale University Press.

The final draft of Laura DeNardis’s most recent book, officially released on January 14th, 2014, had most likely been finalized before Edward Snowden’s recent revelations about the pervasive surveillance implemented by the U.S. National Security Agency entered the media spotlight, which explains the absence of direct references to the controversy throughout the 300-page volume. Yet, because of the Snowden revelations and a number of other issues addressed thoroughly in this extremely important book – from WikiLeaks to the SOPA and PIPA bill projects – the exploration of Internet governance (IG) issues through a “global war” lens has never been more relevant than it is today. Information and communication technologies, the Internet first and foremost, are increasingly mobilized to serve broader economic, political and military aims, ranging from the theft of strategic data to the hijacking of industrial systems. The rise of techniques, devices and infrastructures destined for digital espionage, data collection and aggregation, tracking and surveillance is highlighted not only by the recent Snowden revelations, but also by the construction and organization of a dedicated, increasingly widespread and lucrative market.

As an interdisciplinary scholar of Internet governance grounded in Science and Technology Studies (STS) myself, I had been very much looking forward to the release of this book, which will undoubtedly prove to be a central reference for Internet governance as an emerging field of study in the coming years. As she had already and so ably done in Protocol Politics (2009), Laura DeNardis builds on her interdisciplinary training as an information engineer and STS scholar to untangle – and richly account for – the “technologically concealed and institutionally complex ecosystem of governance” that permeates today’s Internet. In doing so, she helps unveil what media and policymaker accounts of Internet governance all too often leave off the public radar.

While this book is also, indirectly, a means to revisit and update the debates on “multistakeholderism”, prominent in some IG venues such as the Internet Governance Forum, the author’s main interest rests unambiguously with the “technologically concealed” and its socio-technical agency: the extent to which non-human actors (to put it in STS vocabulary) such as information intermediaries, critical Internet resources, Internet exchange points and security devices play a crucial “governance” role alongside political, national and supra-national institutions and civil society organizations. Throughout the book, Laura DeNardis explores how Internet governance takes shape in the myriad of infrastructures, devices, data fluxes and technical architectures that – discreet, often invisible, yet no less crucial – subtend and build the increasingly public and articulate “network of networks”.

The author’s conceptual framework is shaped by five core research questions: how arrangements of technical architecture are, inherently, arrangements of power and politics, which can be revealed by bringing infrastructures to the foreground; how “traditional power” structures are increasingly mobilizing Internet governance technologies as proxies for content control; how Internet governance is increasingly privatized, enacted by corporations and non-governmental entities, in areas as diverse as privacy, control of online financial flows, censorship, and copyright enforcement; how decisions implemented within technical spaces on the Internet reflect conflicts over competing sets of values, rights and policy norms, as well as ongoing negotiations of the values subtending Internet architecture; and finally, how the variety of “local Internets” and the stability of the global Internet intersect and mutually influence each other, calling for a “carefully planned global governance framework” (p.18) – a luxury that the rapid pace of innovation, the impressive scaling, and the diversification of uses have almost never allowed Internet architecture in its global era.

This five-pronged framework opens the door to Laura DeNardis’s exploration and narrative of Internet governance as an ensemble of controversies and battles over “control points”, a narrative which constitutes the remainder of the book. These control points range from the deepest layers of Internet infrastructure to the “last mile” of user access to the network; from the blocking of financial flows to the deliberate “kill-switches” of Internet-based services; from the “graduated response” termination of domestic Internet access to the attempted use of the Domain Name System for copyright enforcement purposes; from the Internet’s backbone infrastructure to the establishment of interconnection agreements; and finally, the de facto public policy role assumed by private information intermediaries in the variety of instances where they gather, collect, aggregate, select, present data to users and to other actors of the Internet value chain — thereby enacting governance over privacy, freedom of expression, cultural diversity and reputation.

The author’s training as an engineer provides the background and the tools for an exploration of Internet governance that I have described elsewhere as “not afraid of its subject of study” (Musiani, 2012): able to resist the temptation of an excessive “institutionalization” of IG, to avoid recoiling from the dense, intricate, complex, technically-grounded substrate of Internet governance power struggles, and to embrace the challenge of accounting for it in a detailed yet engaging way. While the methodological toolbox and narrative devices of STS are, unambiguously, precious instruments for the author, enabling her to achieve these objectives in a successful manner, this book is not “blatantly” STS. The vocabulary of actor-network, delegation, black boxes, co-production is there as a means, not an end in itself; the references to Geoffrey Bowker and Susan Leigh Star’s work on standards (1996), to Bruno Latour’s musings on technical mediation (1994), to Michel Callon’s sociology of translation (1986), to Tarleton Gillespie’s “politics of platforms” (2010) are tools, not enumerations of the obligatory literature review; the description of socio-technical controversies is ever-present, but weaved discreetly into the narrative.

As Jeanette Hofmann once wrote, Internet governance is a “regulative idea in flux” (2007). Indeed, the search for concepts, tools and categories to make sense of 21st century Internet governance, both as a set of practices and technologies and an academic field of study, is very much open-ended, unresolved and problematic. The conclusion of The Global War for Internet Governance ties together beautifully the variety of “stress factors” that Internet control points will likely keep on being subjected to in the immediate future: increasing international pressure to introduce additional regulation at interconnection points; greater governmental control; technology-embedded threats to privacy; reduction of anonymity and its consequences for freedom of expression; loss of platform interoperability; and finally, “creative” uses and misuses of Internet infrastructure and their impact on the Internet’s security and stability. In this sense, Laura DeNardis’s work is indeed a blueprint for an infrastructure- and architecture-based “Bill of Rights” for the Internet — and extremely interesting, required reading for anyone wishing to understand more thoroughly the indispensable “backstage” of today’s highly-mediatized Internet politics.

References

Bowker, Geoffrey C. and Susan Leigh Star (1996). “How Things (Actor-Net)work: Classification, Magic and the Ubiquity of Standards”, Philosophia, November 18, 1996.

Callon, Michel (1986). “Elements of a Sociology of Translation”, in John Law (ed.), Power, Action and Belief: A New Sociology of Knowledge?, London: Routledge.

DeNardis, Laura (2009). Protocol Politics: The Globalization of Internet Governance. Cambridge, MA: The MIT Press.

Gillespie, Tarleton (2010). “The Politics of ‘Platforms’”, New Media & Society, 12 (3).

Hofmann, Jeanette (2007). “Internet Governance: A Regulative Idea in Flux”, in Ravi Kumar Jain Bandamutha (ed.), Internet Governance: An Introduction, Hyderabad: The Icfai University Press, pp. 74-108.

Latour, Bruno (1994). “On Technical Mediation”, Common Knowledge, 3 (2): 29-64.

Musiani, Francesca (2012). “Caring About the Plumbing: On the Importance of Architectures in Social Studies of (Peer-to-Peer) Technology”, Journal of Peer Production, 1.


“Pursuit” and the new Internet

How can we envision a new global infrastructure for the Internet, one better able to meet users’ needs? This sizeable challenge is taken up by the European project Pursuit. Its objective? A network that gives “priority to information rather than location”. Read more at Business Weekly.


The physical form of virtual architectures

Balaji Srinivasan discusses this on Wired, apropos of “cloud communities”:

http://www.wired.com/opinion/2013/11/software-is-reorganizing-the-world-and-cloud-formations-could-lead-to-physical-nations/


Network architectures and (as) Internet governance

This article, in English, appeared in October 2013 in the Internet Policy Review. Thanks to Frédéric Dubois, Uta Meier-Hahn, Andrej Savin and Rikke Frank Joergensen for their reviews and comments.


Network architecture as internet governance

The architecture of a networked system is its underlying technical structure, designed according to a “matrix of concepts” (Agre, 2003). It constitutes the logical and structural layout of a system, including transmission equipment, communication protocols, infrastructure, and connectivity between its components or nodes. This article introduces the idea of network architecture as internet governance [1], and more specifically, it outlines the dialectic between centralised and distributed architectures, institutions and practices, and how they mutually affect each other.

Technical architectures, as argued by several authors discussed in this article, may be understood as alternative ways of influencing economic systems, sets of rules, communities of practice – indeed, as the very fabric of user behaviour and interaction. The status of every internet user as consumer, sharer, producer and possibly manager of digital content is informed by, and shapes in return, the technical structure and organisation of the services she has access to. It is in this sense that network architecture is internet governance: by changing the design of the networks subtending internet-based services, and the global internet itself, the politics of the network of networks are affected – the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, the fair competition between actors of the internet market.

Architecture, “politics by other means”

“Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change,” once wrote science and technology studies (STS) scholar Susan Leigh Star (Star, 1999, p. 339). Indeed, the history of internet innovation suggests that the shaping of technical architectures populating the network of networks is, in the words of philosopher Bruno Latour, “politics by other means” (Latour, 1988, p. 229). The ways in which architecture is politics, protocols are law, and code shapes rights (e.g., Lessig, 1999; DeNardis, 2009) are explored today by a number of different authors in relation to networked and online media; in particular, internet-related research has contributed to fostering the debate on the intersection and overlap of governance by architecture with other forms of governance. This section, while not pretending to be exhaustive, discusses some key approaches to the question.

Interested in the relationship between architectures and the organisation of society, Terje Rasmussen (2003) has argued that there is a structural match between the development of the technical model of the internet (such as packet switching and distributed routing) and the transformation of the societies in which it operates. In this account, the technical infrastructure of the Internet suggests that ours is a distributed society, based on the ability to handle risk, rather than on central control. On the other hand, information studies scholar and internet pioneer Philip Agre suggests that “Decentralized institutions do not imply decentralized architectures, or vice versa. […] Architectures and institutions inevitably coevolve, and to the extent they can be designed, they should be designed together” (Agre, 2003, p. 42), but they are not “naturally” related.

IT law scholar Barbara van Schewick seeks to examine how changes, notably design choices, in internet architecture affect the economic environment for innovation, and evaluates the impact of these changes from the perspective of public policy (2010, p. 2). According to her, this is a first step towards filling a gap in how scholarship understands innovators’ decisions and the economic environment for innovation. After many years of research on innovation processes, we understand how these are affected by changes in laws, norms, and prices; yet, we lack a similar understanding of how architecture and innovation impact each other, perhaps because of the enduring appeal of architectures as purely technical systems (ibid., p. 2-3). Traditionally, she concludes, policy makers have used the law to bring about desired economic effects. Architecture de facto constitutes an alternative way of influencing economic systems, and as such, it is becoming another tool that actors can use to further their interests (ibid., p. 389).

The relationship between architecture and law-making for networked media has been an increasingly central interdisciplinary preoccupation since the late 1990s/early 2000s. Early uses of the metaphor “code is law” can be found in William Mitchell’s City of Bits (1995) and in Joel Reidenberg’s article on lex informatica, the formation of information policy rules through technology (1998). However, legal scholars Yochai Benkler and Lawrence Lessig have arguably been the “scene-setters” in this field, with their work on sharing as a paradigm of economic production in its own right (2004) and technical architecture as politics (1999), respectively. While the former argued for the rise of a “networked information economy” as a system of “production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means” (Benkler, 2006), the latter introduced technical architecture as one of the four main (and interconnected) regulators of society, the other three being law, market and norms. The application of this principle to the text of computer programmes led to what remains, perhaps, the most striking incarnation of the famous “code is law” label (Lessig, 1999).

Among the scholars that have since been inspired by this line of inquiry, Niva Elkin-Koren is especially relevant. In her work (e.g., 2006, 2012), architecture is understood as a dynamic parameter in the reciprocal influences of law and technology design, in the field of information and communication systems. The interrelationship between law and technology often focuses on one single aspect, the challenges that emerging technologies pose to the existing legal regime, thereby creating a need for further legal reform; however, the author argues, juridical measures involving technology both as a target of regulation and as a means of enforcement should take into account that the law does not merely respond to new technologies, but also shapes them and may affect their design (Elkin-Koren, 2006).

The work of Tim Wu adds layers to the conceptualisation of code’s relationship with law, moving from Lessig’s concept that computer code can substitute for law or other forms of regulation, to code as an anti-regulatory mechanism that certain groups will use to their advantage to minimise the costs of law – the possibility of “using code design as an alternative mechanism of interest group behavior” (Wu, 2003).

Architecture and the future(s) of the internet

The current trajectories of innovation for the internet make it more evident by the day: the evolutions (and in-volutions) of the network of networks are likely to depend in the medium-to-long term on the topology and the organisational/technical model of internet-based applications, as well as on the infrastructure underlying them (Aigrain, 2010).

This is illustrated by what has been this author’s main research focus over the past few years: the development of internet-based services – search engines, storage platforms, video streaming applications – based on decentralised network architectures (Musiani, 2013b).

The concept of decentralisation is somehow shaped and inscribed into the very beginnings of the internet – notably in the organisation and circulation of data packets – but its current topology integrates this structuring principle only in very limited ways (Minar & Hedlund, 2001). The limits of the concentrated and centralised urbanism of the internet, which has been predominant since the beginning of its commercial era and its appropriation by the masses, are sometimes highlighted by the very phenomena that have contributed to its widespread success, as best illustrated by social media (Schafer, Le Crosnier & Musiani, 2011). Incidents caused by “excessive concentration” include the global consequences of the Pakistani YouTube re-routing in 2008 and the repeated failures of Twitter infrastructure (e.g., in 2012). These incidents have put into the spotlight some of the possible limits of the concentration model: excessive control, technical and/or legal, by a single commercial entity; the opaqueness of the modalities of this control vis-à-vis the users; the vulnerability of centralised architectures to single-point failures.

While internet users have become, at least potentially, not only consumers but also distributors, sharers and producers of digital content, the network of networks is structured in such a way that large quantities of data are centralised and compressed within large data centers and server farms. At the same time, such data is most suited to a rapid re-diffusion and re-sharing in multiple locations of a network that has now reached an unprecedented level of globalisation. The current organisation of internet-based services and the structure of the network that enables their delivery – with its mandatory passage points, places of storage and trade, required intersections – raise many questions in terms of the optimised utilisation of resources, the fluidity, rapidity and effectiveness of electronic exchanges, the security of those exchanges, and the stability of the network.

Beyond technology, these questions are deeply social and political, and affect the “ramifications of possibles” (Gai, 2007) the internet is currently facing for its near future. Resorting to decentralised architectures and distributed organisational forms constitutes a different way to address some issues of management of the network, with a view to effectiveness, responsiveness to vulnerabilities, digital “sustainable development” (better resource management), and the maximisation of the internet’s value for society.

Architectures shaping user rights: decentralisation and privacy by design

Systems based on distributed, decentralised, peer-to-peer (P2P) architectures seek their place today in an IT landscape that is mostly one of concentration and removal from users’ machines. From the viewpoint of informational data, personal data and exchanged content, this implies that sharing, regrouping and storing those data in the most popular and widespread internet services of today means promoting a model in which traffic is re-directed towards an ensemble of machines placed under the exclusive and direct control of the service provider. Thus, exchanges between users are made by “copying” data that one wishes to share on one or more external terminals, or by giving these external machines the permission to index this information. The ways in which data circulate, are stored and written in these machines are often uncertain; moreover, the rights that the service provider acquires on such data are often excessive with respect to those maintained by the end user – in ways that are often opaque for users themselves [2].

When the operations of data treatment and handling are conducted, partially or totally, on users’ terminals directly linked together, this choice of network architecture contributes to building specific definitions of privacy protection. It modifies the ways in which the control over informational data, and the responsibility for their protection, are distributed among the users, the service providers and the developers who have created the service.

Three cases of internet services based on a decentralised network architecture – a search engine, a storage platform and a video streaming application, studied between 2009 and 2011 – have shown how a definition of privacy “by design,” more specifically by architectural design, takes shape in internet services (Musiani, 2013b). In this alternative, “techno-legal” way of defining privacy, a central role is attributed to the constraints and the opportunities of privacy protection that are inscribed into the technical model chosen by developers (Schaar, 2010).

Faroo, a P2P search engine developed first in Germany, then in the United Kingdom, displays a “six-level” distribution model meant to prevent the traceability of queries by a central entity; in this model, personal data are supposed to remain within the user’s own terminal and the P2P client installed on it, and never to leave it unless encrypted on that very terminal first. This feature also allows the developers to work towards reducing the tension – a priori very difficult to eliminate – between the confidentiality of personal information and the personalisation of search queries, the latter being the “added value” that social dynamics bring to the search engine, and which is based on the very collection of this personal information.
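To make the principle concrete, here is a minimal, purely illustrative Python sketch (all names and data are hypothetical, since Faroo’s actual six-level model remains an industrial secret): the query is hashed locally, so that a distributed index could be interrogated without the cleartext ever leaving the user’s terminal.

# Illustrative sketch only: hash the query on the user's terminal, so that
# peers holding a distributed index would only ever see the hash, never the
# cleartext query. Names and data are hypothetical, not Faroo's design.
import hashlib

def query_key(query: str) -> str:
    # Derive the lookup key locally; only this key would be sent to peers.
    return hashlib.sha256(query.lower().encode()).hexdigest()

# Hypothetical distributed index: peers store result lists under hashed keys.
distributed_index = {
    query_key("peer-to-peer"): ["https://example.org/p2p-primer"],
}

def search(query: str) -> list:
    return distributed_index.get(query_key(query), [])

print(search("Peer-to-Peer"))  # the cleartext query never left this terminal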

The case of Tribler, a P2P video streaming tool first developed at the Technical University of Delft (The Netherlands), is another occasion to follow this tension, as the logic underlying the system is that the history of downloads made by a user is shared by default with other users, so as to feed the software’s “recommendation” algorithm. The solution envisaged by the developers has, once again, to do with an idea of “privacy by architectural design”, as it builds on the decentralised and distributed model to mitigate, in the eyes of users, the impression of exposure and self-revelation that the system’s social features may provoke: not only can the feature be disabled, but it only sends the download history to other users – it does not keep the information on any server controlled by the service.
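This design choice can be illustrated with a minimal, hypothetical Python sketch (the names Peer, share_history and gossip_history_to are mine, not Tribler’s actual code or API): histories travel directly from peer to peer, and opting out simply stops the gossip.

# Hypothetical sketch of the design choice described above: recommendation
# data is shared peer-to-peer only, can be disabled, and never reaches a
# server controlled by the service.
from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    share_history: bool = True          # the user can opt out at any time
    download_history: list = field(default_factory=list)
    received: dict = field(default_factory=dict)

    def download(self, item):
        self.download_history.append(item)

    def gossip_history_to(self, other):
        # Histories travel peer-to-peer only; no central server is involved.
        if self.share_history:
            other.received[self.name] = list(self.download_history)

alice, bob = Peer("alice"), Peer("bob")
alice.download("documentary.ogv")
alice.gossip_history_to(bob)   # bob can now feed his recommendation algorithm
alice.share_history = False    # opting out stops any further sharing
alice.download("concert.ogv")
alice.gossip_history_to(bob)   # no-op: the new download is never sent
print(bob.received)            # {'alice': ['documentary.ogv']}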

Finally, Wuala [3], a (formerly) distributed storage platform developed in Switzerland, displayed similar attempts to protect user privacy via architecture. The heart of this service was the user’s terminal, where, thanks to a dedicated P2P client, the operations of encryption and fragmentation of stored data could take place. These two operations, conducted before any other (e.g., sharing, downloading or circulating data in the network), were meant, in the vision of Wuala’s developers, as evidence given to the users that the service provider, regardless of its intentions, did not even possess the technical means to break user trust in the system.
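The general encrypt-then-fragment pattern can be sketched as follows; this is an illustrative approximation (using the Fernet primitive from the Python cryptography library, and assuming a symmetric key that stays on the terminal), not Wuala’s proprietary implementation.

# Illustrative sketch of the generic encrypt-then-fragment pattern.
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def prepare_for_upload(data: bytes, fragment_size: int = 32):
    # Encrypt first, on the user's terminal; the key never leaves it.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(data)
    # Then fragment the ciphertext; only these opaque chunks go to peers.
    fragments = [ciphertext[i:i + fragment_size]
                 for i in range(0, len(ciphertext), fragment_size)]
    return key, fragments

def restore(key: bytes, fragments: list) -> bytes:
    # Reassembly and decryption are only possible where the key lives.
    return Fernet(key).decrypt(b"".join(fragments))

key, fragments = prepare_for_upload(b"my private files")
assert restore(key, fragments) == b"my private files"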

While developers, across all three case studies, consider that a more articulate protection of privacy is one of the core comparative advantages of their systems (and they “sell” it as such), users wonder, in turn, about the implications of a decentralised architecture for the protection of their data. What does making a part of one’s own computing resources available to the whole P2P network imply for the “invisible” data collected there? In the cases of Faroo and Wuala – where the P2P model merges, in a peculiar way, with a proprietary software logic – this question is the occasion to make explicit the difficult articulation between the decentralising philosophy subtending the systems and a closed source code. Pioneer users – for the most part, users-innovators or users-developers themselves – see the closed code as a lack of transparency, even a lack of respect, that prevents them from delving into this aspect with the tools they have available. It is good to have privacy by architecture, these users point out, but we need direct knowledge of this technique on a case-by-case basis, so as, eventually, to allow for direct modifications of the architecture.

Decentralised models challenge “by architecture” the extent, the balance and the very definition of the rights obtained by service providers on users’ personal data, vis-à-vis the rights that users maintain on such data. With a trade-off: on the one hand, the user sees her privacy reinforced by the possibility of increased control over her data and its handling by the P2P client. However, simultaneously and for the same reasons, her responsibility for the actions she undertakes within and by means of the application increases proportionately, as the provider voluntarily surrenders some of its control over the data and content present on the service. The collective dimension of this responsibility is also emphasised, inasmuch as an infraction of the collective behaviour has not only individual but collective consequences – be it the storage of inappropriate content, the introduction of unreliable information or spam in a distributed search index, or a “selfish” management of the bandwidth shared by a P2P streaming system.

Conclusions: how architecture matters

“Arrangements of technical architecture have always inherently been arrangements of power,” writes STS scholar Laura DeNardis (2012): the technical architecture of networked systems does not only affect internet governance, but is internet governance. This governance by architecture, or “governance by design” (De Filippi, Dulong de Rosnay & Musiani, 2013), has important implications at a number of levels, of which the previous section has given but one example.

Changes in architectural design affect the repartition of competences and responsibilities between service providers, content producers, users and network operators. They affect forms of engagement and intéressement (Callon, 2006) in networked systems, of users first and foremost, but also of other actors concerned by the implementation and the operation of internet services. They shape the sustainability of the underlying economic models and the technical and legal approaches to digital content and personal data. They make visible, in various configurations, the forms of interaction between the local and the global, the patterns of articulation between the individual and the collective.

Changes in network architectures contribute to the shaping of user rights and of the ways to produce and enforce law, and are reconfigured in return. A number of legal issues that go well beyond copyright (despite having often been reduced to this aspect, notably in the case of peer-to-peer systems) are raised by architectural configurations of internet services. To preserve the internet’s “social value,” it is important to achieve reliable forms of regulation – technical, political, or both – without impeding present and future innovation.

Changes in architecture do, finally, contribute to shifting the boundary between public and private uses of the internet as a global facility: they are a crucial factor in defining intellectual property rights, the right to privacy of users/clients, or their rights of access to content. They help define what counts as a contributor in internet-based services, both in terms of the computing resources required to operate the system and in terms of content.

In the end, technical architecture appears as one of the strongest, if not the strongest, structuring elements of internet governance: what is shaped into architecture and infrastructure can seldom be undone by institutional negotiation and dialogue alone, and institutions find it increasingly complicated to keep up with “creative” governance by architecture and by infrastructure [4]. In this sense, future evolutions of internet governance as a field would do well to take into account Michel van Eeten and Milton Mueller’s suggestion to expand the field to include areas such as the economics of cybercrime and cyber security, network neutrality, content filtering and regulation, copyright enforcement, and interconnection arrangements among ISPs (van Eeten & Mueller, 2013).

In the digital world, it is possible to design in detail the architecture of the world users interact with – and as a consequence, it is possible to design the architecture of our global communication infrastructure in order to promote specific types of interactions over others (De Filippi et al., 2013). With important consequences for the ways in which the future internet will be governed, and for the extent to which its users will be not only customers, but citizens.

Footnotes

[1] Internet governance (IG) today is a lively, emerging field, and its definition is relentlessly contested by different groups across political and ideological lines. A “working definition” of IG has been provided in the past, after the United Nations-initiated World Summit on the Information Society (WSIS), by the Working Group on Internet Governance – a definition that has reached wide consensus because of its inclusiveness, but is perhaps too broad to be useful for drawing more precisely the boundaries of the field (Malcolm, 2008): “Internet governance is the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet” (WGIG, 2005). This broad definition implies the involvement of a plurality of actors, and the possibility for them to deploy a plurality of governance mechanisms. IG has been described as a mix of technical coordination, standards, and policies (e.g., Malcolm, 2008 and Mueller, 2010). See also (DeNardis, 2013) and (Musiani, 2013a).

[2] See this discussion of the terms of use of several social sites, among which Facebook and Instagram: http://www.nyccounsel.com/business-blogs-websites/who-owns-photos-and-videos-posted-on-facebook-or-twitter/

[3] The decentralised mechanism subtending the Wuala system, a trade of local storage space for space in a “P2P storage cloud” spread across users’ machines, was discontinued in September 2011.

[4] An example is the Domain Name System and its co-optations. See (DeNardis, 2012) and (Musiani, 2013).

References

Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.

Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Benkler, Y. (2004). “Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production.” The Yale Law Journal, 114 (2), 273-358.

Callon, M. (2006). “Sociologie de l’acteur-réseau.” In Akrich, M., Callon, M. & Latour, B., Sociologie de la traduction. Textes fondateurs. Paris: Presses des Mines, 267-276.

De Filippi, P., M. Dulong de Rosnay & F. Musiani (2013). “Peer production online communities, distributed architectures and governance by design.” Communication presented at the Fourth Transforming Audiences Conference, September 3, 2013, University of Westminster, London.

DeNardis, L. (2013). “The Emerging Field of Internet Governance”, in W. Dutton (ed.) Oxford Handbook of Internet Studies. Oxford: Oxford University Press.

DeNardis, L. (2012). “The Turn to Infrastructure for Internet Governance”, Concurring Opinions, 2012, http://www.concurringopinions.com/archives/2012/04/the-turn-to-infrastructure-for-internet-governance.html

DeNardis, L. (2009). Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: The MIT Press.

Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.

Elkin-Koren, N. (2012). “Governing Access to User-Generated Content: The Changing Nature of Private Ordering in Digital Networks.” In Brousseau, E., Marzouki, M., Méadel, C. (eds.), Governance, Regulations and Powers on the Internet, Cambridge: Cambridge University Press.

Gai, A.-T. (2007). “Web 3.0: une autre branche pour l’arbre des possibles.” Transnets, http://pisani.blog.lemonde.fr/2007/02/17/web-30-une-autre-branche-pour-larbre-des-possibles/

Latour, B. (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press.

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Wembley, WA: Terminus Press.

Minar, N. & Hedlund, M. (2001). “A network of peers – Peer-to-peer models through the history of the Internet.” In A. Oram (Ed.), Peer-to-peer: Harnessing the Power of Disruptive Technologies, 9-20. Sebastopol, CA: O’Reilly.

Mitchell, W. J. (1995). City of Bits: Space, Place and the Infobahn. Cambridge, MA: The MIT Press.

Mueller, M. (2010). Networks and States: The Global Politics of Internet Governance. Cambridge, MA: The MIT Press.

Musiani, F. (2013a). “A Decentralized Domain Name System? User-Controlled Infrastructure as Alternative Internet Governance”. Presented at the 8th Media In Transition (MiT8) conference, May 3-5, 2013, Massachusetts Institute of Technology, Cambridge, MA. Available as draft at http://web.mit.edu/comm-forum/mit8/papers/Musiani_DecentralizedDNS_MiT8Paper.pdf

Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris: Presses des Mines.

Rasmussen, T. (2003). “On distributed society: The history of the Internet as a guide to a sociological understanding of communication and society.” In G. Liestøl, A. Morrison & T. Rasmussen (eds.), Digital Media Revisited: Theoretical and Conceptual Innovation in Digital Domains. Cambridge, MA: The MIT Press.

Reidenberg, J. R. (1998). “Lex Informatica: The Formulation of Internet Policy Rules Through Technology.” Texas Law Review, 76 (3).

Schafer, V., H. Le Crosnier & F. Musiani (2011). La neutralité de l’Internet, un enjeu de communication. Paris: CNRS Editions/Les Essentiels d’Hermès.

Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.

van Eeten, M. & M. Mueller (2013). “Where Is the Governance in Internet Governance?” New Media & Society, 15 (5): 720-736.

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Working Group on Internet Governance (2005). Report of the Working Group on Internet Governance, Château de Bossey, June 2005, http://www.wgig.org/docs/WGIGREPORT.pdf

Wu, T. (2003). “When Code Isn’t Law.” Virginia Law Review, 89.


Towards a society of “perclouds”?

A personal “cloud”, connected in a P2P architecture? Marco, hacker and writer, explains his vision here:

http://stop.zona-m.net/2013/10/the-real-problem-that-the-percloud-wants-to-solve-and-why-its-still-necessary/


McAfee considering P2P to circumvent NSA surveillance?

The founder of the famous antivirus company McAfee is reportedly considering peer-to-peer as a way to circumvent NSA surveillance. Devices called D-central, mobile and portable (as well as relatively inexpensive), would make it possible to create private, encrypted networks.

Read more details here.


The governance of algorithms

This article, in English, also appeared in the Internet Policy Review. Thanks to Frédéric Dubois, Uta Meier-Hahn, Madeline Carr and Johan Soderberg for their comments and reviews.

Governance by algorithms [1]

09 Aug 2013 by Francesca Musiani


Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, and their performance scrutinised in numerous contexts. Yet, a lot of what constitutes “algorithms” beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations”[2] is often taken for granted. At the same time, they are “invoked as powerful entities that control, govern, sort, regulate, and shape everything from financial trades to news media”[3]. Recently (May 16-17, 2013), an interdisciplinary event organised at New York University addressed this issue through an interesting lens: that of governance – governance by algorithms in addition to governance of algorithms.

Taking stock of the event, which this author attended, the article seeks to contribute to the discussion of “what algorithms do” and the ways in which they are artefacts of governance, providing two illustrative examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. Indeed, the question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

The omnipresence of data, the consequences of their organisation

The role of invisibility in the classification processes that order human interaction, the procedures through which categories are made and kept invisible, the ways in which people can change this invisibility when necessary, and the extent to which systems of classification are crucial to the building of information infrastructures have been core preoccupations of science, technology and society scholars for several years.[4] Yet, the issue of information classification and organisation has perhaps never been as relevant as in our current times of “information overload”[5] and internet-mediated access to the vast majority of the information surrounding us.[6] Indeed, digital data seem to proliferate in the complex world of today, building on the variety of platforms and supports that allow for dematerialisation and rapid circulation and distribution. They serve different purposes, from trading to surveillance, from evaluation to recommendation; they are listed, regrouped and organised by means of many supports and devices, from search engines to e-commerce websites. While companies leverage traces left by consumers on the web so as to better target, customise (and take advantage of) their next purchases and interactions, some users worry about the portraits that such traces allow others to paint of them, and about the impossibility of modifying or erasing them, left to the perusal of generations to come[7].

Observing that we are currently entering the era of big data and algorithms, several authors argue that this “is a major breakthrough in the development of digital services (as it) gives decisive importance not only to the owners of data, but also and especially to those who can make them intelligible”[8]. The algorithms subtending the information and communication technologies we use daily, the internet first and foremost, are (also) artefacts of governance, arrangements of power and “politics by other means”[9].

The power of algorithms

By naming a conference held at New York University last May “Governing Algorithms”, its organisers were making a deliberate choice of ambiguity – hinting at both the governance of algorithms, the extent to which political regulation can affect the functioning of the instructions and procedures subtending technology, and the governing power of algorithms themselves.

The ways in which the pervasiveness of algorithms in human society has political implications appear as a core issue of our times; algorithms are a key feature of both today’s information ecosystem[10] and underlying cultural norms[11], as they contribute to shaping the information we access and its organisation. In a recent paper, communication scholar Tarleton Gillespie highlights six dimensions of political valence for algorithms that have public relevance, i.e., those algorithms used to “select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions”[12]. These six dimensions are:

  • patterns of inclusion, the choices behind the constitution of an index, what is included and excluded in it, and how data is “prepared” for the algorithm;

  • cycles of anticipation, the consequences of attempts, by those creating the algorithms, to have information about their users and make predictions on their future behaviours;

  • the evaluation of relevance, the criteria by which algorithms determine what is not only relevant, but appropriate and legitimate;

  • the promise of objectivity, the way the technical nature of the algorithm is presented as a guarantee of impartiality, particularly in the case of controversy;

  • the entanglement with practice, the processes by which users reshape their practices to suit the algorithms they depend on, and turn algorithms into terrains for political contest;

  • finally, the production of calculated publics, the process of algorithmic presentation of publics back to themselves, and how this shapes a public’s sense of itself.[13]

These six dimensions bring to the fore two main consequences of the “computation” of our information society. By delegating to algorithms a number of tasks that would be impossible to perform manually, the process of submitting data to analysis is automated; and in turn, the results of these analyses automate decision-making. This double automation, in turn, poses the question of agency and control[14]. By asking questions such as: who are the arbiters of algorithms? Is algorithm design an assertion of authority over more than the algorithm itself? What is the autonomy of algorithms, if any? – it is the accountability and the responsibility of algorithms as socio-technical artefacts that are examined, along with those of their creators and users, and ultimately, the balance of power facilitated or caused by algorithms.

Algorithmic governance: Part I. Web search

The ways in which the web gives more visibility to some information and content than to others are at the very heart of the recurring debate on the defining features of the digital space as a “public space.” According to Jürgen Habermas, the “father” of the public sphere concept, two conditions are necessary to structure a public space: freedom of expression, and discussion as a force of integration. The architecture of the “network of networks” seems to articulate these two conditions. However, while the first is frequently recognised as one of the widespread virtues of the internet, the second seems more uncertain[15]. In his book The Wealth of Networks, legal scholar Yochai Benkler argues for a global “order” intrinsic to the web, whose core feature is the fact that the selection of information is no longer the monopoly of gatekeepers, journalists, librarians and editors, but is delegated to internet users, now publishers in their own right. By citing and quoting one another in conversational niches, these individuals and groups single out quality information for algorithms, which, in turn, order and classify it and make it available in search engines.[16] Thus, the ordering of web-hosted information appears as a co-production and co-construction of internet users and computational tools.

The integration of conversations and discussions taking place at the micro level is thus delegated to algorithms. The aggregated arguments that result from this integration are perceived as “implicit universal consensus”; they have both the strengths and the weaknesses of any information that cannot be traced back to any specific individual and, at the same time, results from a wide assemblage of opinions.[17] Search engines, and the multiple metrics underlying the internet, hierarchise the visibility of information by placing it at the very beginning of search result lists, or relegating it to the end. By de facto deciding “what must be seen,” they are liable to encourage or discourage controversy and discussion – while constructing the public agenda of political and social priorities in the process, as well as selecting the interlocutors that matter.[18]

In particular, thanks to the current quasi-monopoly that Google holds on web search practices, its PageRank algorithm has been widely examined as the new gatekeeper[19] and “benevolent dictator”[20] of digital public spaces and spheres. The algorithm implements, according to a “recipe” that partly remains an industrial secret, different sets of measurement criteria that assess authority (according to the number of citations), audience (according to the number of visits or clicks), proximity and affinity (according to recommendations) or speed (according to real-time aggregation and relay of “hot” topics). PageRank, as the “master switch” of the internet,[21] centralises and organises the circulation of information in the network of networks and, for every search query, arbitrates on what is important and relevant.
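For readers unfamiliar with the principle, the published core of PageRank (the citation-based authority measure described by Brin and Page in 1998) can be sketched in a few lines of Python; the additional signals and weightings of Google’s production algorithm are precisely the part of the “recipe” that remains secret.

# A minimal sketch of the core PageRank idea: a page's authority derives
# from the authority of the pages that cite it, dampened by a factor that
# models a reader randomly jumping to another page.
def pagerank(links, damping=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy web of three pages citing one another.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c", cited by both other pages, ranks highest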

Algorithmic governance: Part II. Recommendations in e-commerce [22]

For some years now, online seller Amazon has been “a remarkable prescriber”, whose prescriptions are based on the recommendations of its readers/buyers. The vendor’s website makes it possible for each of its subscribed users to know, in a single click, about other purchases made in the past by users who have acquired the same title[23]. Personalised recommendations are nothing new in the world of book publishing and selling, digital or not. As a librarian ironically remarks, they have simply, historically, been “the exclusive purview of booksellers, librarians… and friends. Now your best friend for advice on reading is called ‘recommendation Al Gorithm’… and it loves you very much!”[24]

Indeed, it is on the systematisation and automation of a very widespread and very social phenomenon – the exchange of advice and guidance among users sharing preferences and affinities – that Amazon and other online sellers base their recommendation systems. Drawing on methods based on both content (considering two books “similar” if they share a large number of words) and collaborative filtering (the intersection of lists containing particular books and lists based on previous records of books purchased or borrowed by readers), Amazon has developed an algorithm called “item-to-item collaborative filtering”. Its details remain an industrial secret, but the algorithm demonstrates its effectiveness every day in “personalising” recommendations according to the interests of each of its consumers. As its name suggests, rather than match a user with similar users, this algorithm relates each item ordered and purchased by users with similar items, and eventually combines them in a recommendation list.[25]
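Following the published description cited below (Linden, Smith & York, 2003), the gist of item-to-item collaborative filtering can be sketched as follows in Python; the purchase data are of course hypothetical, and Amazon’s production system remains proprietary.

# Illustrative sketch: relate items to items via the customers who bought
# both, then recommend the most similar items to what a user already owns.
from itertools import combinations
from collections import defaultdict
from math import sqrt

# Hypothetical purchase histories: one set of items per customer.
baskets = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "book_c"},
    {"book_b", "book_c"},
]

# Count, for every pair of items, how many customers bought both.
co_counts = defaultdict(int)
item_counts = defaultdict(int)
for basket in baskets:
    for item in basket:
        item_counts[item] += 1
    for x, y in combinations(sorted(basket), 2):
        co_counts[(x, y)] += 1

def similarity(x, y):
    # Cosine similarity between two items over customers' purchase vectors.
    pair = (min(x, y), max(x, y))
    return co_counts[pair] / sqrt(item_counts[x] * item_counts[y])

# "Recommended because you bought book_a": rank other items by similarity.
recs = sorted(
    (item for item in item_counts if item != "book_a"),
    key=lambda item: similarity("book_a", item),
    reverse=True,
)
print(recs)  # ['book_b', 'book_c']: every buyer of book_a also bought book_b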

Behind this algorithm – and behind readers/buyers’ impression that Amazon knows their tastes very well, perhaps too well – lie years of research and experiments in a recent subfield of computer science whose practical applications are increasingly widespread, albeit discreet: data mining, in particular affinity analysis and market basket analysis.[26] For readers looking for new things to read, suggestions similar to their previously purchased articles are constructed by relying on a mix of several sources of information about them, feeding a large database where they are combined with other shopping histories. This information can range from the most obvious demographics about oneself and close relatives, to more complex assessments based on the sites one consults before arriving at Amazon, or one’s “habit of clicks.” The entanglements within this large database about the purchasing behaviour of users, activated in accordance with Amazon’s patented algorithm, are the basis of the suggestions familiar to the user, such as “Recommended because you bought…” or “Recommended because you added X to your wish list…”, and influence book purchases on Amazon every day.

Algorithms and rules, rule by algorithm

We live in an increasingly algorithmic world. This article has examined, in particular, two cases related to web-based information and communication technologies where the importance of algorithms is high and their presence pervasive. However, the invisible computational structures that guide our search results and our online purchases extend to a number of other contexts, in which algorithms are deployed and regulatory work has been insistently called for in the face of recent crises, from facial recognition software to financial markets.[27]

The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, and more generally, of the governance of the complex, automated systems that permeate today’s world.

The academic landscape in the interdisciplinary fields of communication studies, internet studies and science and technology studies reflects a thriving and growing interest in this question. As an additional path towards answering the key question, “who does the algorithm serve?”, scholars also investigate the historical process from which the algorithm has emerged as a key topic of our times and attempt to situate it in the larger context of political economy.[28]

As not only academic research but also current news show ever more frequently,[29] two faces of the algorithms/rules relationship are currently under scrutiny, and are likely to be even more so in the near future. On the one hand, there is the issue of institutions’ regulation of algorithms. Should the locus of legal reasoning related to these systems shift to the coding of algorithms? Should regulation, or further regulation, of algorithms be pushed or advocated for in specific contexts? What would this regulation look like, would it even be possible, and what effects would it cause?[30]

On the other hand, the extent to which we live in a world ruled by algorithms has to be assessed. We need to research not only the extent to which, given the ubiquity of algorithms, they regulate us in a sense, but also “what it would mean to resist them”.[31]

References

[1] This article is partially a recollection and account of the Governing Algorithms conference held at New York University on May 16-17, 2013.

[2] Gillespie, Tarleton (2013). “The Relevance of Algorithms”. Forthcoming in Media Technologies: Essays on Communication, Materiality, and Society, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press. Available at http://governingalgorithms.org/wp-content/uploads/2013/05/1-paper-gilles…

[3] http://governingalgorithms.org/

[4] Bowker, Geoffrey C. & Susan Leigh Star (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: The MIT Press.

[5] Flew, Terry (2008). New Media: An Introduction (3rd Ed.). Oxford: Oxford University Press.

[6] Cardon, Dominique (2013). “Présentation”. Dossier Politique des algorithmes, Réseaux, 177 (1): 9-21.

[7] On the Internet’s “persistent memory” and the so-called “right to be forgotten”, championed by the EU in the recent past, see e.g. Beckles, C.-A. (2013). “Will the Right to Be Forgotten Lead to a Society That Was Forgotten?”, Privacy Perspectives, https://www.privacyassociation.org/privacy_perspectives/post/will_the_ri… or the critical Harris, L. (2013). “How to fix the EU’s ‘Right to be Forgotten’”, The Huffington Post, http://www.guardian.co.uk/technology/series/internet-privacy-the-right-t…

[8] Cardon (2013), p. 10.

[9] Latour, Bruno (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press, p. 229.

[10] Anderson, C. W. (2011). “Deliberative, agonistic, and algorithmic audiences: Journalism’s vision of its public in an age of audience”. Journal of Communication, 5: 529-547.

[11] Striphas, Ted (2009). The Late Age of Print: Everyday Book Culture from Consumerism to Control. New York, NY: Columbia University Press.

[12] Gillespie (2013), p. 2.

[13] Ibid., pp. 2-3.

[14] Barocas, Solon, Sophie Hood & Malte Ziewitz (2013). “Governing Algorithms: A Provocation Piece”. Discussion Paper for the Governing Algorithms conference, NYU, May 16-17, 2013. Available at SSRN: http://ssrn.com/abstract=2245322 or http://dx.doi.org/10.2139/ssrn.2245322

[15] Cardon (2013), p. 11.

[16] Benkler, Yochai (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press. (pp. 33-35).

[17] Geiger, Stuart (2009). “Does Habermas Understand the Internet? The Algorithmic Construction of the Blogo/Public Sphere”. Gnovis: A Journal of Communication, Culture and Technology, 1 (10).

[18] Cardon (2013), p. 11.

[19] Smith, Dan (2013). “Google: Gatekeeper of the Internet’s Grey Area”. The Telegraph, June 10, 2013. Available at http://www.telegraph.co.uk/sponsored/technology/technology-trends/101039…

[20] Masnick, M. (2008). “Google As Benevolent Dictator: The Gatekeeper and the Data Collector”. TechDirt, December 2008. Available at http://www.techdirt.com/articles/20081201/0119292980.shtml

[21] Wu, Tim (2010). The Master Switch: The Rise and Fall of Information Empires. Random House Digital, pp. 279-280.

[22] This section is partly based on an article I wrote in French in March 2012: Musiani, Francesca (2012). “‘Bienvenue sur votre Amazon’: les systèmes de recommandation d’ouvrages”, Labs Hadopi. Available at http://labs.hadopi.fr/actualites/bienvenue-sur-votre-amazon-les-systemes…

[23] Benhamou, Françoise (2012). “3e étape de la stratégie verticale d’Amazon”. Blog L’Eco(nomie) des Livres, October 24, 2012. Available at http://www.livreshebdo.fr/weblog/l-eco%28nomie%29-des-livres-24/776.aspx

[24] Lemaire, Alexandre (2011). “Madame Machine, pouvez-vous me conseiller un bon livre? Les nouveaux outils Web de recommandation de lectures”. Association des Bibliothécaires de France, June 27, 2011. Available at http://bibliolab.fr/cms/content/les-nouveaux-outils-web-de-recommandation

[25] Linden, Greg, Brent Smith & Jeremy York (2003). “Amazon.com Recommendations: Item-to-Item Collaborative Filtering”. IEEE Internet Computing, 7 (1): 76-80. Available at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1167344&userType…

[26] http://en.wikipedia.org/wiki/Affinity_analysis

[27] Hardt, Moritz (2013). “Occupy Algorithms: Will Algorithms Serve The 99%?” Response Paper for the Governing Algorithms Conference, NYU, May 17, 2013. Available at http://governingalgorithms.org/wp-content/uploads/2013/05/2-response-hardt.pdf

[28] Berry, David (2012). “The relevance of understanding code to international political economy”. International Politics, 49: 277–296.

[29] E.g. BBC News (2011). “Disappearing tycoon Souter blames Google”, September 12, 2011. Available at http://www.bbc.co.uk/news/technology-14884717

[30] Barocas, Hood & Ziewitz (2013), see supra note 14.

[31] Ibid.


A critical look at decentralised architectures

A 2012 article by Arvind Narayanan, Solon Barocas, Vincent Toubiana, Helen Nissenbaum and Dan Boneh, freely available online on the ArXiv platform, takes a “critical look” at decentralised network architectures in application contexts that involve the processing of personal data.

The central problem the paper confronts is the distance between the promise of decentralised architectures, in the face of the progressive centralisation of service providers, and the lack of large-scale adoption of these architectures, with few exceptions. Alongside the advantages, the paper discusses the drawbacks of decentralisation, which, according to the authors, often remain hidden behind the promise of greater freedom and protection.

“…for all these efforts, decentralized personal data architectures have seen little adoption. This position paper attempts to account for these failures, challenging the accepted wisdom in the web community on the feasibility and desirability of these approaches.”

“for the most part decentralized social networking appears not to have anticipated the success of mainstream commercial, centralized social networks, but rather developed as a response to it.”

“we present some underappreciated drawbacks of decentralized architectures. Not all of these apply to all types of systems, nor is any of them individually a decisive factor. But collectively they may help explain why decentralization faces a steep road ahead, and why even if adopted, decentralization will not necessarily provide all the benefits that its proponents believe will automatically flow from it.”

“We hope to kick off a more tempered discussion of the future of personal data architectures in both scholarly and hobbyist/entrepreneurial circles, one that is informed by the lessons of history. There is much work to be done along these lines — application of economic theory can shed light on questions such as the relative strength of network effects in centralized vs. decentralized systems. Empirical methodology such as user and developer interviews would also be tremendously valuable.”

We are delighted to learn that we are contributing to an interesting endeavour. ;-)
