
CIFRE PhD scholarship offer: ‘Law and technologies’

CIFRE PhD Scholarship Offer

2014-2017

Law and technologies


The research laboratory (Centre de recherche et d’étude en sciences administratives et politiques), a joint research unit of Université Paris 2 and the CNRS, ‘Governance, Law & Information Technologies’ research axis, is looking for a student interested in a CIFRE PhD scholarship in Law (public law or private law).

The thesis will address an entirely new subject: the legal questions raised by the introduction of the ‘digital mock-up’ (INSPIRE Directive) in the construction sector (BIM). Concretely, observation and analysis will be based on platforms for sharing information (texts, images, plans, etc.) between the different actors of an architecture and/or public works project.

The supervision and working environment of this thesis will be provided jointly by the legal departments of construction companies and by the CNRS, which cooperate within the framework of the national MINND project.

This proposal is aimed at curious students interested in innovative subjects; it does not require technical knowledge. Collective work and research missions at the European level are planned.

Recruitment: November 2014

Information: 06 86 67 34 30 or 01 42 34 58 80 (CERSA secretariat)

Interviews based on CV and cover letter: mid-October 2014

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


Peer production, money and value

Issue 4 of the Journal of Peer Production, on the theme of money and value, has just been published. It is coordinated by Nathaniel Tkacz, Nicolas Mendoza and myself, and includes two contributions by members of the ADAM project. Alexandre Mallard, Cécile Méadel and I explore the performative role of expert knowledge in the construction of “distributed trust” for the decentralised electronic currency system Bitcoin, while Primavera De Filippi, in collaboration with Miguel Said Vieira of the University of São Paulo, proposes a licensing system better suited to the economy of the commons.

The full issue is freely available here. A few excerpts from the introduction:

“Peer production has often been described as a ‘third mode of production’, irreducible to State or market imperatives. The creation and organisation of peer projects allegedly take place without ‘managerial commands’ or ‘price signals’, without recourse to bureaucratic apparatuses or the logic of competitive markets. Instead, and mimicking the technical architectures upon which many peer projects are based, production is described as non-hierarchical and decentralised. Group dynamics are also commonly described as ‘flat’ and this is captured, of course, in the very notion of the ‘peer’. When tested against the realities of actual projects, however, such early conceptions of peer production are, at best, in need of further elaboration and qualification. At worst, they were always off the mark. Hierarchies persist in peer production, as does competition and market-like arrangements. But perhaps it is the qualities of these new hierarchies and competitive forms that is novel. After all, liberal democracies, dictatorships, corporations, local sports clubs, and families all have their hierarchies but none is reducible to the others.

In the context of earlier understandings of peer production, the question of value and even more of currency has been rather marginal. This issue of the Journal of Peer Production (JoPP) demonstrates that theories and practices of value and currency are moving into the foreground. There has been a veritable explosion of experiments with currency and also a continuing metrics creep in many peer projects and beyond. More fundamentally, though, the question of value and how it circulates through a collective body is central to any mature theory of social organisation. In sociological and economic thought, the historical distinction between ‘values’ and ‘value’ split the non- or at least less-easily-calculable with the seemingly cold and objective world of calculation and universal commensurability. This ‘old settlement’, which never really held, nevertheless helped demarcate the economic from the social. But the intensification and extension of computational processes, manifested most clearly in the rise of big data, has led to a proliferation of bottom-up procedures to formalise (social) values, rendering them easily calculable and lending order to the decentralised world of peers, but without necessarily replicating capitalistic calculations of value. […]

In this issue we seek to advance the exploration and understanding of how the themes of value and currency intersect peer production. This objective presented a double challenge for the contributors and for us as editors. Indeed, the scholarly articles included in this issue have attempted to provide analytical and theoretically grounded investigations of a world that is, on the one hand, often developing more quickly than the academic publication process can account for in a timely way, and on the other hand, mostly shaped by expert-practitioners. At the same time, these contributions seek to engage not only with scholars of related issues within the academic community, but also with practitioners themselves — who, on their end, have demonstrated a strong interest in this dialogue, as the invited comments section shows.”

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


Tomorrow’s decentralised applications built on the Bitcoin protocol

An article (in English) published last week in Wired magazine:

Image: PerfectHue/Flickr

For many, bitcoin — the distributed, worldwide, decentralized crypto-currency — is all about money … or, as recent events have shown, about who invented it. Yet the actual innovation brought about by bitcoin is not the currency itself but the platform, which is commonly referred to as the “blockchain” — a distributed cryptographic ledger shared amongst all nodes participating in the network, over which every successfully performed transaction is recorded.
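Purely as an illustration of the ledger idea, and not of Bitcoin’s actual data structures (which also involve proof-of-work, Merkle trees and a peer-to-peer consensus protocol), here is a minimal Python sketch of a hash-chained transaction log; all names are hypothetical.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> dict:
    """Append a block that commits to the previous block's hash.

    Tampering with an earlier transaction changes that block's hash and
    breaks every later 'prev_hash' link, which is what lets every node
    audit the shared ledger.
    """
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    chain.append(block)
    return block

# Usage: any participant can replay the chain and verify the links.
chain: list = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 1}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 1}])
assert chain[1]["prev_hash"] == block_hash(chain[0])
```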

And the blockchain is not limited to monetary applications. Borrowing from the same ideas (though not using the actual peer-to-peer network bitcoin runs on), a variety of new applications have adapted the bitcoin protocol to fulfill different purposes: Namecoin for distributed domain name management; Bitmessage and Twister for asynchronous communication; and, more recently, Ethereum (released only a month ago). Like many other peer-to-peer (P2P) applications, these platforms all rely on decentralized architectures to build and maintain network applications that are operated by the community for the community. (I’ve written before here in WIRED Opinion about one example, mesh networks, which can provide an internet-native model for building community and governance).

Thus, while they enable a whole new set of possibilities, blockchain-based applications also present legal, technical, and social challenges similar to those raised by other P2P applications that came before them, such as BitTorrent, Tor, or Freenet. But some of these challenges haven’t been seen before in the context of traditional P2P networks.

The Bitcoin Protocol Is More ‘Cloud’ Than ‘P2P’

Although all blockchain-based applications are based on a decentralized network architecture, most of these applications distinguish themselves from standard P2P applications in at least two ways:

Users’ data (including personal data) are not stored locally on users’ devices. They subsist “in the cloud”, in the sense that they are hosted in a distributed database — the blockchain in this case — that is shared amongst all users in the network. This means that data is ubiquitous: It can be accessed at any time and from anywhere, regardless of the user’s device. But the data is also more transparent: All actions or transactions performed by users are recorded on the blockchain and thus publicly available to everyone (although the identity of users can be kept secret and the content of such transactions can of course be encrypted).

Instead of being run locally, blockchain-based applications operate globally. They are deployed on the blockchain itself and are run — in a distributed manner — by relying on the resources provided by all users connected to the network. Although each client runs locally on the user’s device, these applications are constantly available, even when individual devices are turned off (as long as there are enough resources dedicated to them).

In this sense, blockchain-based applications are — in spite of their inherently decentralized nature — more similar to cloud-based services than to traditional P2P applications.

However, these applications do significantly differ from traditional cloud-computing applications in that they are autonomous and independent from any central server or authority in charge of regulating or managing the network. Applications are run through an aggregate of individual, peer-to-peer clients that contribute their own resources to the network. In addition to being autonomous, the network is also more resilient and anonymous: no single point of failure, no single point of control.

We need to make sure we don’t exchange the tyranny of large online operators for the “tyranny of code” instead.

As such, the bitcoin platform (or blockchain) allows for the deployment of decentralized applications that combine the benefits of cloud computing — in terms of ubiquity and elasticity — with the benefits of P2P technologies in terms of privacy and anonymity. Even though the blockchain is inherently transparent (as every transaction is recorded on a public ledger), users can have multiple identities that don’t necessarily relate to their real persona.

Blockchain-based applications can therefore address user’s privacy through anonymity.

So What Are the Challenges?

In general, most challenges encountered by decentralized network applications are related to the limited availability of resources and the inherent difficulty of managing and coordinating them.

Long-term sustainability can only be achieved by providing an incentive for users to contribute to the network — whether for altruistic or selfish reasons — so that there is always a sufficient amount of resources available at any given time. In the case of decentralized applications featuring a specifically designed credit system (such as BitTorrent) or assuming the function of a cryptocurrency (such as bitcoin, Namecoin, and Ethereum), this objective is much easier to achieve, to the extent that these platforms provide an additional economic and/or utilitarian incentive for users to contribute to the overall operations of the network.

But blockchain-based applications raise important legal challenges, too. One challenge, similar to those raised by traditional P2P networks, is that the anonymity inherent in these networks supports or even encourages criminal behaviors and other illicit or reprehensible activities.

In previous decentralized networks, these issues were dealt with by establishing shared or distributed liability amongst all users connected to the network. Even though it’s often difficult to determine users’ identities and assess the degree of responsibility each should be held accountable for, there are always specific individuals to blame. (Ultimately, the difficulty lies in assigning more or less responsibility to one or more users in the network.)

So what happens when the figure of the “user” itself disappears; when the resulting P2P applications live outside a central authority? Who is liable and accountable? While we can borrow lessons learned from the world of previous P2P applications to respond to some of these challenges, it cannot be denied that blockchain-based applications raise new and important legal issues — and of a completely different kind than those found in traditional P2P architectures.

The Case of Ethereum and Applications Such as Smart Contracts and Distributed Autonomous Corporations

The case of Ethereum is particularly interesting in that its proponents envision the deployment of self-enforcing smart contracts – such as joint savings accounts, financial exchange markets, or even trust funds — as well as autonomous organizations that subsist independently of any moral or legal entity.

Ethereum is a contract validating and enforcing system based on a more sophisticated platform than other derivative cryptocurrencies (it features an internal Turing-complete scripting language that can be used to encode advanced transaction types directly into the blockchain).
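Real Ethereum contracts are written in the platform’s own languages and executed on the blockchain itself; purely as a hypothetical illustration of what a “self-enforcing” transaction type means, here is a Python sketch of a joint savings account whose withdrawal rule is enforced by code rather than by an intermediary. The class and method names are invented for the example, and gas, signatures and on-chain state are ignored.

```python
from dataclasses import dataclass, field

@dataclass
class JointSavings:
    """Toy 'smart contract': funds only move if every owner approves.

    The rule is enforced by the code itself rather than by a bank or a
    court, which is the sense of 'self-enforcing' used in the article.
    """
    owners: tuple
    balance: int = 0
    approvals: set = field(default_factory=set)

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def approve(self, owner: str) -> None:
        if owner not in self.owners:
            raise PermissionError("not an owner")
        self.approvals.add(owner)

    def withdraw(self, amount: int, to: str) -> None:
        if self.approvals != set(self.owners):
            raise PermissionError("all owners must approve first")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.approvals.clear()
        print(f"sent {amount} to {to}")

# Usage: both owners must approve before any withdrawal succeeds.
account = JointSavings(owners=("alice", "bob"))
account.deposit(100)
account.approve("alice")
account.approve("bob")
account.withdraw(40, to="carol")
```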

As opposed to the other blockchain-based distributed applications described above — from messaging to contracts — Ethereum can be regarded as a kind of distributed operating system: a platform allowing for new applications to be developed upon it, so as to eventually create self-validating contracts and autonomous systems that operate directly on the blockchain.

That’s the revolutionary feature of Ethereum. It’s also its potential problem.

Corporations and economic transactions are fundamentally driven by contracts. By providing the foundation to validate these contracts, Ethereum allows for the deployment of so-called distributed autonomous companies (DACs) or organizations (DAOs). These systems operate on the blockchain with an autonomy of their own. They earn money by charging users for the services they provide (in the example applications cited above, those services are DNS resolution and social networking) so that they can pay others for the resources they need (such as the processing power and bandwidth necessary to run the network).

As the name suggests, DAOs are autonomous entities that subsist independently from any legal or moral entity. After they have been created and deployed onto the internet, they no longer need (nor heed) their creators. Yes, they need to interact with their users, but they are not dependent on any one of them. Smart contracts are automatically enforced by the applications running over the blockchain.

Since operations are governed through this system of technical self-regulation, Ethereum introduces a whole new set of legal challenges regarding liability and law enforcement that haven’t been seen before in the context of traditional P2P networks. Indeed, if DAOs are independently operated — neither owned nor controlled by any given entity — who is actually in charge, responsible for, or accountable for their operations? And if their resources cannot be seized (because DAOs have full sovereignty over them), how can they be required to pay damages for their torts?

In the context of cloud computing at least, corporate authority is limited to the extent that online operators like Amazon, Google, or Facebook must abide by the basic tenets of the law. In the case of Ethereum, the authority of the code cannot be questioned, nor can it be repealed by the law. In that sense, these challenges are actually more similar to the issues emerging with the advent of autonomous agents – such as evolutionary software viruses or (though perhaps limited to the realm of science fiction for now) intelligent robots with an autonomy of their own — than they are to traditional P2P applications.

Ethereum and other blockchain-based applications might well liberate us from the tyranny of large online operators. We just need to make sure that we don’t exchange that for the “tyranny of code”: rules dictated and automatically enforced by the underlying code of an online platform that only exists in the “ether”…

The story of a ‘P2P cloud’

While new actors such as Bitcloud and Maidsafe are arriving with P2P solutions on the very crowded cloud scene, here is my latest article for the Internet Policy Review, which tells the story of a ‘P2P cloud’ from a few years ago.


Decentralised internet governance: the case of a ‘peer-to-peer cloud’

13 February 2014

The architecture of a networked system is its underlying technical structure – its logical and structural layout. In my last article for the Internet Policy Review (Musiani, 2013a), I have built upon the work of several authors in science and technology studies, economics, law and computer science (e.g. Star, 1999; van Schewick, 2010; Elkin-Koren, 2006; Agre, 2003) to discuss the idea of network architecture as internet governance. I have suggested that, by changing the design of the networks subtending internet-based services and the global internet itself, the politics of the network of networks are affected – the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, the fair competition between actors of the internet market.

This article retraces the early stages of development of a ‘peer-to-peer cloud’ storage service, Drizzle, with the aim of providing an example of decentralised network architecture as internet governance ‘in practice’. More specifically, the paper sheds light on how changes in the architectural design of networked services affect the circulation, storage and privacy of data, as well as the rights and responsibilities exerted by different actors on them. This article is not meant to be a compendium of the implications of the decentralisation option in building a cloud platform, which entails a number of technical complications as well as advantages, including how to ensure the reliability and redundancy of data, and the soundness of the encryption mechanism. However, the privacy-related design choices described here are some of the many possible ways to illustrate the extent to which changes in network architecture are, indeed, changes to network governance.

Decentralising the cloud

In early 2007, when Drizzle first sees the light, the industry of online data storage – a service allowing users to store, save and share data on one or several terminals connected to the internet – has “never felt better” (Guerrini, 2010). Google, Amazon, Microsoft and Oracle, to name but a few, propose their storage platforms, each with its specificities and one common denominator: the ‘cloud’. According to this model, the service provider is in charge of both the physical infrastructure and the software. Thus, the service provider hosts applications and data at once – in a location, and according to modalities, unknown or at best ambiguous to the user (Mowbray, 2009). The so-called ‘server farms’ proliferate, to support and manage this increasing remoteness of data from users and users’ terminals.

In this context, Drizzle [1], a small start-up founded by two developers and computer programmers whom we will call Dietrich and Kurt, makes an unusual foundational decision: its cloud storage platform will mainly be composed – alongside more ‘classical’ data centres – of portions of the users’ hard disks, directly linked in a peer-to-peer, decentralised network architecture (Schollmeier, 2001; Taylor & Harrison, 2009). This choice entails a number of peculiar features. On the one hand, it requires the implementation of a technical process defined as “encrypted fragmentation” [2], which consists in encrypting content locally – on the user’s computer, by means of a previously installed Drizzle P2P client – before it is stored. The content is then divided into fragments, duplicated to ensure redundancy, and spread out across the network. On the other hand, in return, users need to accept to ‘pool’ – that is, to put at the disposal of other users and their computers – the computational and material resources necessary for the operations related to the storage of content. As the service’s terms of use point out:

“The user acknowledges that Drizzle may use processor, bandwidth and hard disk (or other storage media) of his computer for the purpose of storing, encrypting, caching and serving data that has been stored in Drizzle by the user or any other users. The user can specify the extent to which local resources are used in the settings of the Drizzle client software. The amount of resources the user is allowed to use in Drizzle depends on the amount of local resources the user is contributing to Drizzle.”
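Purely as a sketch of the “encrypted fragmentation” process described above, and not of Drizzle’s actual implementation, the following Python snippet encrypts content client-side, cuts the ciphertext into fragments and assigns each fragment to several peers. The fragment size, the replication factor and the use of the third-party cryptography library are assumptions made for the example.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

FRAGMENT_SIZE = 4096   # assumed fragment size, in bytes
REPLICAS = 3           # assumed redundancy factor

def fragment_and_encrypt(content: bytes, key: bytes) -> list:
    """Encrypt locally, then split the ciphertext into fragments."""
    ciphertext = Fernet(key).encrypt(content)   # encryption happens client-side
    return [ciphertext[i:i + FRAGMENT_SIZE]
            for i in range(0, len(ciphertext), FRAGMENT_SIZE)]

def spread(fragments: list, peers: list) -> dict:
    """Assign each fragment to REPLICAS distinct peers (simple round-robin)."""
    return {i: [peers[(i + r) % len(peers)] for r in range(REPLICAS)]
            for i in range(len(fragments))}

key = Fernet.generate_key()                     # stays on the user's device
fragments = fragment_and_encrypt(b"my documents", key)
print(spread(fragments, peers=["peer-a", "peer-b", "peer-c", "peer-d"]))
```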

The interdependent and egalitarian model subtending the platform will allow its users to barter their local disk space for an equivalent space in the decentralised cloud, thereby improving the quality of this storage space, which will become permanently available and accessible. By shaping their decentralised storage service, the developers of Drizzle carry out a double experimentation: with the frontier between centralisation and decentralisation, and with sharing modalities that blend peer-to-peer, social networking and the cloud.

Peer-to-peer storage: the cloud meets privacy by design

“In 2007, it was all starting to get social,” Dietrich recalls three years later. Indeed, social media, Facebook and Twitter in particular, were at that moment entering the daily life of millions of internet users in an increasingly pervasive way. Drizzle’s first steps are taken in a community of research and development that tries to counter the social media “explosion” by developing P2P systems as an alternative to a variety of internet-based services, including social networks,  structured in a centralised manner (Le Fessant, 2009; Musiani, 2010a; Musiani, 2010b).

In 2007, Facebook had been in existence for three years. Millions of users had taken part in it, thereby contributing to the massive success of these Web-based services that allow individuals to build a public or semi-public profile within a system, define a list of other users with whom to interact, and see/browse the list of their own and others’ connections made in ‘public mode’ within the system (Boyd & Ellison, 2007). In parallel to their spectacular growth, social networks raise vibrant discussions and controversies, both within the expert community and among the general public. The ways in which social networking service providers leverage personal information and user data remain controversial, since these practices sometimes mean allowing external applications to access such data, while on other occasions they serve direct commercial purposes (Boyd, 2008). The rise of the so-called cloud does nothing to mitigate the impression of risk for informed users, as applications and data are increasingly hosted in locations and ways unknown or at best ambiguous. User exposure on social networking sites and on cloud-based services positions privacy, more than ever, at the foreground of discussions.

In this context, several developers – including Drizzle’s – identify in a peer-to-peer type of network architecture a possible way of approaching the protection of personal data privacy with a different angle: through the relocation and “re-appropriation” of data within the terminals of users, who would be able to host their own profiles and the information they contain (see also Moglen, 2010; Aigrain, 2010, 2011).

In the development of Drizzle, a conception of privacy and confidentiality of personal data that is conceived of and enforced via technical means – an approach called privacy by design (Cavoukian, 2010; Schaar, 2010) – is at work. This conceptualisation of privacy is defined by means of the constraints and the opportunities linked to the treatment and the location of data, according to the different moments and the variety of operations taking place within the system. In particular, the confidentiality of data (personal data as well as the content stored in the P2P cloud) is defined by a peculiar role and enhanced features attributed to the password that identifies the user vis-à-vis the network, and by the implementation of the resource allocation system on which Drizzle is based.

Password and user responsibility

In Dietrich’s intentions, the role of the user-selected and user-generated password for the Drizzle system should have “stri[cken] the user as soon as he had access to the system for the very first time.” Indeed, the virtual form that is served to users upon subscription may come as a surprise: it informs them that

“We do not know your password as it never leaves your computer. Please, do not forget your password and use, if needed, your password hint.”

The status of the password is thus negotiated, beyond its usual meaning of unique identifier vis-à-vis the system, to define, detail and legitimise the process of local encryption and decryption of data within the Drizzle system. This feature comes to embody the specificity of Drizzle’s promise of security and privacy, and of users’ trust, as it becomes the symbol and the graphical representation of the ‘local’ dimension of the encryption process – the password never leaves the computer of the user who created it. The operations, for the most part automatically managed, that are linked to the protection of personal data are thus hosted on the terminals of users. Indeed, this entails a modification of the user’s role within the service’s architecture: a node among equal nodes, the user’s terminal becomes a server itself, instead of merely a starting point and an end point for operations that are otherwise conducted on another machine or group of machines.
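A minimal sketch of what “the password never leaves your computer” implies technically, assuming a standard password-based key derivation (PBKDF2 from Python’s standard library) rather than Drizzle’s actual, undocumented mechanism:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive an encryption key from the password, entirely client-side."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)

# The salt can be stored server-side; neither the password nor the derived
# key can be reconstructed from it. If the user forgets the password, the
# provider has nothing to recover -- the trade-off discussed above: stronger
# privacy, but more responsibility shifted onto the user.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
print(len(key), "byte key derived locally; only ciphertext ever leaves the device")
```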

Through the attribution of this status to the password, the developers of Drizzle are also proposing an alternative to the balance between the rights exerted by users on their own data and the rights acquired by the service provider on these same data – a balance that is usually heavily tilted towards the provider’s side. However, this reconfiguration in the balance of rights comes with a trade-off. As the password stays with the user and is not sent to the servers controlled by the firm, the latter cannot retrieve the password if needed. Thus, users do not only see their privacy reinforced; at the same time and for the same reasons, their responsibility for their actions is augmented – while the service provider renounces some of its control over the content that circulates thanks to the service it manages. The meaning of this ‘renunciation’, Dietrich explains, is twofold: on the one hand, the Drizzle team wishes to make evident – almost translate into a specific object the user can easily relate to – the ‘obscure’ and unfamiliar process of client-side encryption, which is an ongoing source of controversies and perplexities. On the other hand, it is also a matter of Drizzle’s business model: the more the firm knows about its users, the more it is obliged to subject those users to regular surveillance and control – and this requires an investment of material resources and time that, in its first phases of existence, the firm does not have:

“If we can know what is in your account, starting with your password, we have heightened obligations to police the content and to make sure nobody can eavesdrop on the traffic.”

Data privacy and resource allocation

Another aspect that contributes to define rights and responsibilities is the detailing of the conditions for allocation and management of the computational resources provided by the different computers participating in the system.

As briefly described above, the choice to decentralise the platform makes it necessary, due to the very particular status of the resources used by the system, to detail several aspects in the terms of use: the role of computers belonging to users, the types of resources that Drizzle is able to use, and their purpose. It also becomes necessary to detail the extent to which users are able to decide – and communicate to their P2P client, thus to the system – the maximum quantity of local resources that the rest of the network/storage system can use. However, it is also necessary to define the articulation between the availability of resources and the different operations to which these resources will be destined within the system.

The articulation of these two aspects has important implications for the confidentiality of data circulating in the system (both personal information and content stored by users). Several users, giving feedback to the developers in the early stages of the system, warn that the resource allocation process could be framed as a possible ‘surveillance’ or ‘monitoring’ of these resources, in a way that can potentially be highly automatised, invasive, privacy-threatening.

After a discussion between these concerned users and the developers, via the Drizzle forum, two modifications were applied to the terms of use: while the general terms now state that “resources are allocated and monitored in accordance with the Privacy Policy,” the privacy policy itself details the extent of automation and pervasiveness of the system that allocates and monitors resources:

“In order to ensure a fair allocation of resources within Drizzle, various data about the computers participating in the Drizzle network is collected. This data includes their IP addresses, disposability and the amount of resources they are contributing (e.g. bandwidth, memory). […] Drizzle keeps track of how much storage space you have used and earned […] Drizzle collects statistical information for the purposes of monitoring, debugging and improving the system. This includes automatically generated problem, performance, network analysis and general usage reports, as well as logs of the connections and queries made to Drizzle’s servers (including the involved IP addresses), as well as analytical data about the usage of the Drizzle website. However, none of this data contains information from your private or shared files.”

Thus, the correct functioning of the allocation system indeed implies the gathering of several pieces of information concerning the material, computational and memory resources pooled by each participating computer. The pooling of the storage equipment (i.e., users’ local resources, made available by each of them) is necessary for the system to work; however, it is not meant to imply an intrusion in the stored content itself, which remains protected by the local safeguard of the password and the encryption of content. The collection of information, the developers of Drizzle affirm, has the purpose of automatically computing the storage space made available by each user – and, as we have analysed elsewhere (Musiani, 2013b), of establishing the extent to which each user can reclaim her place in the ‘P2P cloud’, an equivalent storage space in the network of participating users.
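The “storage earned versus storage used” bookkeeping mentioned in the policy can be pictured with a short hypothetical sketch; the 1:1 exchange ratio and the field names are assumptions made for the example, not Drizzle’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class PeerAccount:
    """Tracks what a peer contributes to, and consumes from, the P2P cloud."""
    contributed_gb: float = 0.0   # local disk space pooled with the network
    used_gb: float = 0.0          # space this peer occupies on other peers

    def can_store(self, size_gb: float, ratio: float = 1.0) -> bool:
        # Assumption: a peer may use roughly as much remote space as it
        # contributes locally (a 1:1 ratio), mirroring the terms of use
        # quoted earlier ("depends on the amount of local resources the
        # user is contributing").
        return self.used_gb + size_gb <= self.contributed_gb * ratio

account = PeerAccount(contributed_gb=50.0, used_gb=30.0)
print(account.can_store(10.0))   # True: within the earned allowance
print(account.can_store(25.0))   # False: would exceed what was contributed
```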

Conclusions

The development of Drizzle’s ‘peer-to-peer cloud’ makes it possible to observe how changes in the architectural design of networked services affect data circulation, storage and privacy – and, in doing so, reconfigure the articulation of the ‘locality’ and the ‘centrality’ in the network (Akrich, 1989: 39), suggesting a model of decentralised governance “by architectural design” for the service.

Ultimately, decentralising the cloud leads to a reformulation and ‘re-balancing’ of the relationship between the user and the service provider. The local, client-side encryption of data first, and its fragmentation afterwards – both operations conducted within the P2P client installed by the user, and entirely taking place on his terminal – are proposed by Drizzle as evidence that the firm, in its own words, “does not even have the technical means” to betray the trust of users.

In particular, this conception of privacy by design takes shape around the password, which remains locally stored in the user’s P2P client and unknown to the service provider. In doing so, it also becomes a form of disengagement of the service provider with respect to security issues, its ‘auto-release’ from responsibility: a detail whose importance may seem small at first, but which eventually leads to changes in the forms of technical solidarity (Dodier, 1995) established between users and service provider.

For the purpose of this article, I have focused in particular on aspects such as the strengthening of privacy by design and the increase in responsibility attributed to the user, arguably among the “positive” aspects of a peer-to-peer cloud. However, it should be pointed out that an important part of the decentralisation choice made by the Drizzle team has involved assessing its possible downsides: reliability and redundancy of data, slow download performance, soundness of the encryption mechanism, and – no less important – the perception of these issues by users. A heated discussion among developers, and between developers and some pioneer users, also occurred on the topic of the ‘legality’ of the system, especially in jurisdictions such as that of the United States. All of these are complex issues and most of them could not be accounted for here – this has been done in a much more detailed manner elsewhere (Musiani, 2013b: 123-173), by analysing, with tools derived from the field of science and technology studies (STS), a number of socio-technical controversies related to the development of the platform. However, the privacy-related dynamics provided here are a few of the several possible ways to flesh out the extent to which changes in network architecture are, indeed, changes in network governance.

The example of Drizzle has illustrated in practice the implications of ‘architectures as governance’ we had introduced in the previous article: the repartition of competences and responsibilities between service providers, content producers, users and network operators; the articulation between the individual and the collective; the shaping of user rights and ‘community’ norms; the definition of ‘contributor’ in internet-based services. In light of Edward Snowden’s leaks about certain surveillance practices by the US National Security Agency, the potential of architectural choices – choices that would make the internet less centralised and more distributed – as a means of de facto privacy advocacy and promotion of decentralised governance has never been more evident. The goal, as The New Yorker recently reported, “isn’t to end surveillance, but to make it harder to do en masse” (Kopstein, 2013).

References

Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.

Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Aigrain, P. (2011). “Another Narrative. Addressing Research Challenges and Other Open Issues session.” PARADISO Conference, Brussels, 7–9 Sept. 2011.

Akrich, M. (1989). “De la position relative des localités. Systèmes électriques et réseaux socio-politiques.” Cahiers du Centre d’Études pour l’Emploi, 32: 117-166.

Boyd, D. (2008). “Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence.” Convergence, 14 (1).

Boyd, D. & Ellison, N. (2007). “Social Network Sites: Definition, History, and Scholarship.” Journal of Computer-Mediated Communication, 13 (1).

Callon, M., Lascoumes, P. & Barthe, Y. (2001). Agir dans un monde incertain. Essai sur la démocratie technique, Paris: Seuil.

Cavoukian, A. (ed., 2010). Special Issue: Privacy by Design: The Next Generation in the Evolution of Privacy. Identity in the Information Society, 3(2).

Dodier, N. (1995). Les Hommes et les Machines. La conscience collective dans les sociétés technicisées. Paris: Métailié.

Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.

Guerrini, Y. (2010). “Wuala : le P2P comme solution de stockage.” http://www.presence-pc.com/actualite/Wuala-stockage-cloud-P2P-39035/#xtor=RSS-11

Kopstein, J. (2013). “The mission to de-centralize the Internet.” The New Yorker, 13 December 2013, http://www.newyorker.com/online/blogs/elements/2013/12/the-mission-to-decentralize-the-internet.html

Le Fessant, F. (2009). “Les réseaux sociaux au secours des réseaux pair-à-pair.” Défense nationale et sécurité collective, 3: 29-35.

Moglen, E. (2010). “Freedom in the Cloud: Software Freedom, Privacy and Security for Web 2.0 and Cloud Computing.” Keynote, ISOC Meeting, New York Branch, 5 February 2010.

Mowbray, M. (2009). “The Fog over the Grimpen Mire: Cloud Computing and the Law.” SCRIPTed, 6(1): 132-146.

Musiani, F. (2013a). “Network architecture as internet governance.” Internet Policy Review, 24 October 2013, http://policyreview.info/articles/analysis/network-architecture-internet-governance

Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris: Presses des Mines.

Musiani, F. (2012). “Caring About the Plumbing: On the Importance of Architectures in Social Studies of (Peer-to-Peer) Technology.” Journal of Peer Production, 1.

Musiani, F. (2010a). “Ménager le droit à la vie privée, entre anonymat et connaissance de l’identité: les débuts des réseaux sociaux en pair-à-pair.” Terminal, 105: 107-116.

Musiani, F. (2010b). “When Social Links Are Network Links: the Dawn of Peer-to-Peer Social Networks and Its Implications for Privacy.” Observatorio, 4(3), 185-207.

Schaar, P. (2010). “Privacy by Design.” Identity in the Information Society, 3(2): 267-274.

Schollmeier, R. (2001). “A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications.” Proceedings of the First International Conference on Peer-to-Peer Computing, 27–29.

Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.

Taylor, I. & Harrison, A. (2009). From P2P to Web Services and Grids: Evolving Distributed Communities. Second and Expanded Edition. London: Springer-Verlag.

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Vinck, D. (Ed., 2003). Everyday Engineering. An Ethnography of Design and Innovation. Cambridge, MA: The MIT Press.

Footnotes

1. The name is fictitious (‘light rain’) and recalls the fragmentation and the distribution of data in the system’s storage mechanism. The names of the developers are pseudonyms, as well. I have no direct interest in Drizzle – I use it as a case study of a possible ‘decentralisation of the cloud’.

2. Unless otherwise noted, citations are derived from in-depth interviews with the developers of Drizzle, conducted within a period of online and ‘live’ ethnography of Drizzle’s development, design and innovation process (see Vinck, 2003) between 2010 and 2011.

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


A guide to the “discreet technologies” of internet governance

Readers of the ADAM blog may be interested in my review (in English) of Laura DeNardis’s recent book, The Global War for Internet Governance, also available on the American Amazon website.

A Guide to the “Technologically Concealed” in Internet Governance, January 21, 2014

Book Review: Laura DeNardis (2014). The Global War for Internet Governance. New Haven, CT and London: Yale University Press.

The final draft of Laura DeNardis’s most recent book, officially released on January 14th, 2014, had most likely been finalized before Edward Snowden’s recent revelations about the pervasive surveillance implemented by the U.S. National Security Agency entered the media spotlight, which explains the absence of direct references to the controversy throughout the 300-page volume. Yet, because of the Snowden revelations and a number of other issues addressed thoroughly in this extremely important book – from WikiLeaks to the SOPA and PIPA bills – the exploration of Internet governance (IG) issues through a “global war” lens has never been more relevant than it is today. Information and communication technologies, the Internet first and foremost, are increasingly mobilized to serve broader economic, political and military aims, ranging from the theft of strategic data to the hijacking of industrial systems. The rise of techniques, devices and infrastructures destined for digital espionage, data collection and aggregation, tracking and surveillance is highlighted not only by the recent Snowden revelations, but also by the construction and the organization of a dedicated, increasingly widespread and lucrative market.

As an interdisciplinary scholar of Internet governance grounded in Science and Technology Studies (STS) myself, I had been very much looking forward to the release of this book, which will undoubtedly prove to be a central reference for Internet governance as an emerging field of study in the coming years. As she had already and so ably done in Protocol Politics (2009), Laura DeNardis builds on her interdisciplinary training as an information engineer and STS scholar to untangle – and richly account for – the “technologically concealed and institutionally complex ecosystem of governance” that permeates today’s Internet. In doing so, she contributes to unveiling what media and policymaker accounts of Internet governance all too often keep off the public radar.

While this book is also, indirectly, a means to revisit and update the debates on “multistakeholderism”, prominent in some IG venues such as the Internet Governance Forum, the author’s main interest rests unambiguously with the “technologically concealed” and its socio-technical agency: the extent to which non-human actors (to put it in STS vocabulary) such as information intermediaries, critical Internet resources, Internet exchange points and security devices play a crucial “governance” role alongside political, national and supra-national institutions and civil society organizations. Throughout the book, Laura DeNardis explores how Internet governance takes shape in the myriad of infrastructures, devices, data fluxes and technical architectures that – discreet, often invisible, yet no less crucial – subtend and build the increasingly public and articulate “network of networks”.

The author’s conceptual framework is shaped by five core research questions: how arrangements of technical architecture are, inherently, arrangements of power and politics, which can be revealed by bringing infrastructures to the foreground; how “traditional power” structures are increasingly mobilizing Internet governance technologies as proxies for content control; how Internet governance is increasingly privatized, enacted by corporations and non-governmental entities, in areas as diverse as privacy, control of online financial flows, censorship, and copyright enforcement; how decisions implemented within technical spaces on the Internet reflect conflicts over competing sets of values, rights, policy norms, as well as ongoing negotiations of the values subtending Internet architecture; and finally, how the variety of “local Internets” and the stability of the global Internet intersect and mutually influence each other, calling for a “carefully planned global governance framework” (p.18), a luxury that the rapid pace of innovation, the impressive scaling, and the diversification of uses have almost never allowed to Internet architecture in its global era.

This five-pronged framework opens the door to Laura DeNardis’s exploration and narrative of Internet governance as an ensemble of controversies and battles over “control points”, a narrative which constitutes the remainder of the book. These control points range from the deepest layers of Internet infrastructure to the “last mile” of user access to the network; from the blocking of financial flows to the deliberate “kill-switches” of Internet-based services; from the “graduated response” termination of domestic Internet access to the attempted use of the Domain Name System for copyright enforcement purposes; from the Internet’s backbone infrastructure to the establishment of interconnection agreements; and finally, the de facto public policy role assumed by private information intermediaries in the variety of instances where they gather, collect, aggregate, select, present data to users and to other actors of the Internet value chain — thereby enacting governance over privacy, freedom of expression, cultural diversity and reputation.

The author’s training as an engineer provides the background and the tools for an exploration of Internet governance that I have described elsewhere as “not afraid of its subject of study” (Musiani, 2012): able to resist the temptation of an excessive “institutionalization” of IG, to avoid recoiling from the dense, intricate, complex, technically-grounded substrate of Internet governance power struggles, and to embrace the challenge of accounting for it in a detailed yet engaging way. While the methodological toolbox and narrative devices of STS are, unambiguously, precious instruments for the author, enabling her to achieve these objectives in a successful manner, this book is not “blatantly” STS. The vocabulary of actor-network, delegation, black boxes, co-production is there as a means, not an end in itself; the references to Geoffrey Bowker and Susan Leigh Star’s work on standards (1996), to Bruno Latour’s musings on technical mediation (1994), to Michel Callon’s sociology of translation (1986), to Tarleton Gillespie’s “politics of platforms” (2010) are tools, not enumerations of the obligatory literature review; the description of socio-technical controversies is ever-present, but woven discreetly into the narrative.

As Jeanette Hofmann once wrote, Internet governance is a “regulative idea in flux” (2007). Indeed, the search for concepts, tools and categories to make sense of 21st century Internet governance, both as a set of practices and technologies and an academic field of study, is very much open-ended, unresolved and problematic. The conclusion of The Global War for Internet Governance ties together beautifully the variety of “stress factors” that Internet control points will likely keep on being subjected to in the immediate future: increasing international pressure to introduce additional regulation at interconnection points; greater governmental control; technology-embedded threats to privacy; reduction of anonymity and its consequences for freedom of expression; loss of platform interoperability; and finally, “creative” uses and misuses of Internet infrastructure and their impact on the Internet’s security and stability. In this sense, Laura DeNardis’s work is indeed a blueprint for an infrastructure- and architecture-based “Bill of Rights” for the Internet — and extremely interesting, required reading in order to understand more thoroughly the indispensable “backstage” of today’s highly-mediatized Internet politics.

References

Bowker, Geoffrey C. and Susan Leigh Star (1996). “How Things (Actor-Net)work: Classification, Magic and the Ubiquity of Standards”, Philosophia, November 18, 1996.

Callon, Michel (1986). “Elements of a Sociology of Translation”, in John Law (ed.), Power, Action and Belief: A New Sociology of Knowledge?, London: Routledge.

DeNardis, Laura (2009). Protocol Politics: The Globalization of Internet Governance. Cambridge, MA: The MIT Press.

Gillespie, Tarleton (2010). “The Politics of ‘Platforms’”, New Media & Society, 12 (3).

Hofmann, Jeanette (2007). “Internet Governance: A Regulative Idea in Flux”, in Ravi Kumar, Jain Bandamutha (eds.), Internet Governance: An Introduction, Hyderabad: The Icfai University Press, pp. 74-108.

Latour, Bruno (1994). “On Technical Mediation”, Common Knowledge, 3 (2): 29-64.

Musiani, Francesca (2012). “Caring About the Plumbing: On the Importance of Architectures in Social Studies of (Peer-to-Peer) Technology”, Journal of Peer Production, 1.


Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


It is time to take “mesh” networks seriously

An article (in English) published last week in Wired magazine:

Nets of Freedom creating mesh networks. Image: Strelka Institute / Flickr

The internet is weak, yet we keep ignoring this fact. So we see the same thing over and over again, whether it’s because of natural disasters like hurricanes Sandy and Katrina, wars like those in Syria and Bosnia, deliberate attempts by governments to shut down the internet (most recently in Egypt and Iran), or NSA surveillance.

After Typhoon Haiyan hit the Philippines last month, several towns were cut off from humanitarian relief because delivering that aid depends on having a reliable communication network. In a country where over 90 percent of the population has access to mobile phones, the implementation of an emergency “mesh” network could have saved lives.

Compared to the “normal” internet — which is based on a few centralized access points or internet service providers (ISPs) — mesh networks have many benefits, from architectural to political. Yet they haven’t really taken off, even though they have been around for some time. I believe it’s time to reconsider their potential, and make mesh networking a reality. Not just because of its obvious benefits, but also because it provides an internet-native model for building community and governance.

But first, the basics: An ad hoc network infrastructure that can be set up by anyone, mesh networks wirelessly connect computers and devices directly to each other without passing through any central authority or centralized organization (like a phone company or an ISP). They can automatically reconfigure themselves according to the availability and proximity of bandwidth, storage, and so on; this is what makes them resistant to disaster and other interference. Dynamic connections between nodes enable packets to use multiple routes to travel through the network, which makes these networks more robust.
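As a toy illustration of that multi-route property, the following Python sketch finds a path across a small ad hoc topology and reroutes around a failed node. The topology and function names are invented, and real mesh routing protocols (OLSR, B.A.T.M.A.N., Babel) are considerably more sophisticated.

```python
from collections import deque

def find_route(links: dict, src: str, dst: str, down=frozenset()) -> list:
    """Breadth-first search for any path from src to dst, skipping failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen and neighbour not in down:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return []   # no route: the destination is unreachable

# A toy neighbourhood mesh: each node only knows the nodes within radio range.
links = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
print(find_route(links, "a", "e"))              # e.g. ['a', 'b', 'd', 'e']
print(find_route(links, "a", "e", down={"b"}))  # node 'b' fails: reroutes via 'c'
```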

Unlike more centralized network architectures, a mesh network can only be shut down by shutting down every single node in the network.

That’s the vital feature, and what makes it stronger in some ways than the regular internet.

But mesh networks aren’t just for political upheavals or natural disasters. Many have been installed as part of humanitarian programs, aimed at helping poor neighborhoods and underserved areas. For people who can’t afford to pay for an internet connection, or don’t have access to a proper communications infrastructure, mesh networks provide the basic infrastructure for connectivity.

Not only do mesh networks represent a cheap and efficient means for people to connect and communicate with a broader community, but they also provide us with a choice about what kind of internet we want to have.

For those concerned about the erosion of online privacy and anonymity, mesh networking represents a way to preserve the confidentiality of online communications. Given the lack of a central regulating authority, it’s extremely difficult for anyone to assess the real identity of users connected to these networks. And because mesh networks are generally invisible to the internet, the only way to monitor mesh traffic is to be locally and directly connected to them.

But the Real, Often Forgotten, Promise of Mesh Networks Is…

Yet beyond the benefits of costs and elasticity, little attention has been given to the real power of mesh networking: the social impact it could have on the way communities form and operate.

What’s really revolutionary about mesh networking isn’t the novel use of technology. It’s the fact that it provides a means for people to self-organize into communities and share resources amongst themselves: Mesh networks are operated by the community, for the community. This matters all the more now that the internet has become essential to our everyday lives.

Instead of relying on the network infrastructure provided by third party ISPs, mesh networks rely on the infrastructure provided by a network of peers that self-organize according to a bottom-up system of governance. Such infrastructure is not owned by any single entity. To the extent that everyone contributes with their own resources to the general operation of the network, it is the community as a whole that effectively controls the infrastructure of communication. And given that the network does not require any centralized authority to operate, there is no longer any unilateral dependency between users and their ISPs.

Mesh networking therefore provides an alternative perspective to traditional governance models based on top-down regulation and centralized control.

Indeed, with mesh networking, people are building a community-grown network infrastructure: a distributed mesh of local but interconnected networks, operated by a variety of grassroots communities. Their goal is to provide a more resilient system of communication while also promoting a more democratic access to the internet.

Are We There Yet?

In recent years, different mesh network initiatives have emerged to address the above and other objectives that could be accomplished with mesh networking.

For instance, in the face of the damage caused to Haiti’s communication infrastructure by the 2010 earthquake, the Serval project was launched in Australia with the objective of creating a disaster-proof wireless network that relies exclusively on the connectivity of mobile devices.

Meanwhile, after the Egyptian government attempted to shut down the internet across the whole nation, the Open Mesh Project emerged with the goal of providing open and free communications to every citizen in the world regardless of national boundaries.

Finally, there is the Commotion Wireless project of the Open Technology Institute (an initiative of the New America Foundation). Originally aimed at providing a secure and reliable platform to prevent authoritarian regimes from controlling or blocking dissident or activist communications, it has so far only been deployed in confined areas where the communication infrastructure was either damaged or missing.

So why hasn’t mesh networking already taken off?

Well, there are technical reasons, of course. The complexity of setting up, managing, and maintaining a mesh network is one obstacle to widespread deployment. Getting a mesh network to work properly can be harder than it seems, especially when it comes to latency. Although the technology is there, routing protocols are currently unable to scale beyond a few hundred nodes, and network coverage is constrained by the limited range of wireless user devices.

Another barrier is perception (and marketing). Mesh networks are generally seen as an emergency tool rather than a regular means for communication. While many mesh networks have been deployed during a period of crisis (during the Boston marathon bombing for example) or after standard communication infrastructures have been damaged or destroyed (such as the Redhook initiative in Brooklyn), very few have been deployed beforehand. They’re used more as an ad hoc measure than a precautionary one that could provide an alternative and more resilient network infrastructure.

Finally, there are political and power struggles, of course. Even though mesh networking could theoretically support the government in providing internet connectivity to poor neighborhoods or underserved areas, mesh networks cannot be easily monitored, nor properly regulated by third parties. As such, mesh networks are sometimes regarded by the state as a potential danger — one that could disrupt public order by providing a platform for criminal activities.

The same is true of the private sector. For large ICT companies (including mobile operators and ISPs) mesh networking constitutes a new competitor in the market for internet communication, which — if it were more widely deployed — could potentially jeopardize their traditional business model based on pay-per-use and monthly subscriptions. Whether nefariously or simply because of structural circumstances, these actors are all committed to maintaining the status quo of the current internet ecosystem.

* * *

The problem is that we are focusing too much on the technical and legal challenges of mesh networking as opposed to the social benefits it might bring in terms of user autonomy and community-building. Or have we not yet realized that we have finally reached a competitive point in communications where we can deploy more than one internet? Instead of trying to create one perfect network that will satisfy us all, we can, instead, choose between several networks to find the one that best suits us.

As has been done with Freifunk in Germany and Guifi.net in Spain, more mesh networks need to be deployed on a more permanent basis, not only in emergencies. This will help establish the basic infrastructure necessary to ensure the autonomy and long-term sustainability of a community-based network structure: one that, in any kind of situation, can connect people and even save lives.

But beyond the internet, the governance model of many community wireless networks could potentially translate into other parts of our lives. By promoting a DIY approach to network communications, mesh networking represents an opportunity to realize that it can sometimes be more beneficial for us, as a community, to rely on our own resources and those of our peers than on those of centralized authorities. It brings the principles of the internet into our physical lives.

Network architectures and (as) internet governance

This article, in English, was published in October 2013 in Internet Policy Review. Thanks to Frédéric Dubois, Uta Meier-Hahn, Andrej Savin and Rikke Frank Joergensen for their review and comments.

 

Network architecture as internet governance

The architecture of a networked system is its underlying technical structure, designed according to a “matrix of concepts” (Agre, 2003). It constitutes the logical and structural layout of a system, including transmission equipment, communication protocols, infrastructure, and connectivity between its components or nodes. This article introduces the idea of network architecture as internet governance [1], and more specifically, it outlines the dialectic between centralised and distributed architectures, institutions and practices, and how they mutually affect each other.

Technical architectures, as argued by several authors discussed in this article, may be understood as alternative ways of influencing economic systems, sets of rules, communities of practice – indeed, as the very fabric of user behaviour and interaction. The status of every internet user as consumer, sharer, producer and possibly manager of digital content is informed by, and shapes in return, the technical structure and organisation of the services she has access to. It is in this sense that network architecture is internet governance: by changing the design of the networks subtending internet-based services, and the global internet itself, the politics of the network of networks are affected – the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, the fair competition between actors of the internet market.

Architecture, “politics by other means”

“Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change,” once wrote science and technology studies (STS) scholar Susan Leigh Star (Star, 1999, p. 339). Indeed, the history of internet innovation suggests that the shaping of the technical architectures populating the network of networks is, in the words of philosopher Bruno Latour, “politics by other means” (Latour, 1988, p. 229). The ways in which architecture is politics, protocols are law, and code shapes rights (e.g., Lessig, 1999; DeNardis, 2009) are explored today by a number of different authors in relation to networked and online media; in particular, internet-related research has contributed to fostering the debate on the intersection and overlap of governance by architecture with other forms of governance. This section, while not pretending to be exhaustive, discusses some key approaches to the question.

Interested in the relationship between architectures and the organisation of society, Terje Rasmussen (2003) has argued that there is a structural match between the development of the technical model of the internet (such as packet switching and distributed routing) and the transformation of the societies in which it operates. In this account, the technical infrastructure of the Internet suggests that ours is a distributed society, based on the ability to handle risk, rather than on central control. On the other hand, information studies scholar and internet pioneer Philip Agre suggests that “Decentralized institutions do not imply decentralized architectures, or vice versa. […] Architectures and institutions inevitably coevolve, and to the extent they can be designed, they should be designed together” (Agre, 2003, p. 42), but they are not “naturally” related.

IT law scholar Barbara van Schewick seeks to examine how changes, notably design choices, in internet architecture affect the economic environment for innovation, and evaluates the impact of these changes from the perspective of public policy (2010, p. 2). According to her, this is a first step towards filling a gap in how scholarship understands innovators’ decisions and the economic environment for innovation. After many years of research on innovation processes, we understand how these are affected by changes in laws, norms, and prices; yet, we lack a similar understanding of how architecture and innovation impact each other, perhaps because of the persistent perception of architectures as purely technical systems (ibid., pp. 2-3). Traditionally, she concludes, policy makers have used the law to bring about desired economic effects. Architecture de facto constitutes an alternative way of influencing economic systems, and as such, it is becoming another tool that actors can use to further their interests (ibid., p. 389).

The relationship between architecture and law-making for networked media has been an increasingly central interdisciplinary preoccupation since the late 1990s and early 2000s. Early uses of the metaphor “code is law” can be found in William Mitchell’s City of Bits (1995) and in Joel Reidenberg’s article on lex informatica, the formation of information policy rules through technology (1998). However, legal scholars Yochai Benkler and Lawrence Lessig have arguably been the “scene-setters” in this field, with their work on sharing as a paradigm of economic production in its own right (2004) and on technical architecture as politics (1999), respectively. While the former argued for the rise of a “networked information economy” as a system of “production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means” (Benkler, 2006), the latter introduced technical architecture as one of the four main (and interconnected) regulators of society, the other three being law, the market and norms. The application of this principle to the text of computer programmes led to what remains, perhaps, the most striking incarnation of the famous “code is law” label (Lessig, 1999).

Among the scholars who have since been inspired by this line of inquiry, Niva Elkin-Koren is especially relevant. In her work (e.g., 2006, 2012), architecture is understood as a dynamic parameter in the reciprocal influences of law and technology design, in the field of information and communication systems. Work on the interrelationship between law and technology often focuses on a single aspect, the challenges that emerging technologies pose to the existing legal regime, thereby creating a need for further legal reform; however, the author argues, juridical measures involving technology both as a target of regulation and as a means of enforcement should take into account that the law does not merely respond to new technologies, but also shapes them and may affect their design (Elkin-Koren, 2006).

The work of Tim Wu adds layers to the conceptualisation of code’s relationship with law, moving from Lessig’s concept that computer code can substitute for law or other forms of regulation, to code as an anti-regulatory tool that certain groups will use to their advantage to minimise the costs of law – the possibility of “using code design as an alternative mechanism of interest group behavior” (Wu, 2003).

Architecture and the future(s) of the internet

The current trajectories of innovation for the internet are making it increasingly evident by the day: the evolutions (and in-volutions) of the network of networks are likely to depend in the medium-to-long term on the topology and the organisational/technical model of internet-based applications, as well as on the infrastructure underlying them (Aigrain, 2011).

This is illustrated by what has been this author’s main research focus over the past few years: the development of internet-based services – search engines, storage platforms, video streaming applications – based on decentralised network architectures (Musiani, 2013b).

The concept of decentralisation was, in some ways, inscribed into the very beginnings of the internet – notably in the organisation and circulation of data packets – but its current topology integrates this structuring principle only in very limited ways (Minar & Hedlund, 2001). The limits of the concentrated and centralised urbanism of the internet, which has been predominant since the beginning of its commercial era and its appropriation by the masses, are sometimes highlighted by the very phenomena that have contributed to its widespread success, as best illustrated by social media (Schafer, Le Crosnier & Musiani, 2011). Examples of incidents caused by “excessive concentration” include the global consequences of the Pakistani YouTube re-routing in 2008 or the repeated failures of the Twitter infrastructure (e.g., in 2012). These incidents have put into the spotlight some of the possible limits of the concentration model: excessive control, technical and/or legal, by a single commercial entity; the opaqueness of the modalities of this control vis-à-vis the users; the vulnerability of centralised architectures to single points of failure.

While internet users have become, at least potentially, not only consumers but also distributors, sharers and producers of digital content, the network of networks is structured in such a way that large quantities of data are centralised and compressed within large data centers and server farms. At the same time, such data is best suited to rapid re-diffusion and re-sharing in multiple locations of a network that has now reached an unprecedented level of globalisation. The current organisation of internet-based services and the structure of the network that enables their delivery – with its mandatory passage points, places of storage and trade, and required intersections – raise many questions in terms of the optimised utilisation of resources, the fluidity, rapidity and effectiveness of electronic exchanges, the security of exchanges, and the stability of the network.

Beyond technology, these questions are deeply social and political, and affect the “ramifications of possibles” (Gai, 2007) that the internet currently faces in its near future. Resorting to decentralised architectures and distributed organisational forms constitutes a different way to address some issues of network management, from the perspective of effectiveness, response to vulnerabilities, digital “sustainable development” (better resource management), and the maximisation of the internet’s value for society.

Architectures shaping user rights: decentralisation and privacy by design

Systems based on distributed, decentralised, peer-to-peer (P2P) architectures seek their place today in an IT landscape that is mostly one of concentration and removal from users’ machines. From the viewpoint of informational data, personal data and exchanged content, this implies that sharing, regrouping and storing those data in the most popular and widespread internet services of today means promoting a model in which traffic is redirected towards an ensemble of machines placed under the exclusive and direct control of the service provider. Thus, exchanges between users are made by “copying” the data that one wishes to share onto one or more external terminals, or by giving these external machines permission to index this information. The ways in which data circulates, is stored and written in these machines are often uncertain; moreover, the rights that the service provider acquires over such data are often excessive with respect to those maintained by the end user – in ways that are often opaque for users themselves [2].

When the operations of data treatment and handling are conducted, partially or totally, on users’ terminals directly linked together, this choice of network architecture contributes to building specific definitions of privacy protection. It modifies the ways in which control over informational data, and the responsibility for its protection, are distributed among the users, the service providers and the developers who have created the service.

Three cases of internet services based on a decentralised network architecture – a search engine, a storage platform and a video streaming application, studied between 2009 and 2011 – have shown how a definition of privacy “by design,” more specifically by architectural design, takes shape in internet services (Musiani, 2013b). With this alternative, “techno-legal” way of defining privacy, a central role is attributed to the constraints and opportunities of privacy protection that are inscribed into the technical model chosen by developers (Schaar, 2010).

Faroo, a P2P search engine developed first in Germany, then in the United Kingdom, displays a “six-level” distribution model meant to prevent the traceability of queries by a central entity; personal data are supposed to remain within the user’s own terminal and the P2P client installed on it, and to leave that terminal only once encrypted on it. This feature also allows the developers to work towards reducing the tension – which is a priori very difficult to eliminate – between the confidentiality of personal information and the personalisation of search queries, the latter being the “added value” that social dynamics bring to the search engine, and which is based on the very collection of this personal information.

The case of Tribler, a P2P video streaming tool first developed at the Delft University of Technology (The Netherlands), is another occasion to follow this tension, as the logic underlying the system is that the history of downloads made by a user is shared by default with other users so as to feed the software’s “recommendation” algorithm. The solution envisaged by the developers has, once again, to do with an idea of “privacy by architectural design”, as it builds on the decentralised and distributed model to mitigate, in the eyes of users, the impression of exposure and self-revelation that the system’s social features may provoke: not only can the feature be disabled, but it only sends the download history to other users – it does not keep the information on any server controlled by the service.

Finally, Wuala [3], a (formerly) distributed storage platform developed in Switzerland, displayed similar attempts to protect user privacy via architecture. The heart of this service was the user’s terminal, where, thanks to a dedicated P2P client, the operations of encryption and fragmentation of stored data could take place. These two operations, conducted before any other (e.g., sharing, downloading or circulating data in the network), were meant, in the vision of Wuala’s developers, as evidence given to the users that the service provider, regardless of its intentions, did not even possess the technical means to break user trust in the system.
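
The general principle at work here – encrypt and fragment on the user’s machine before anything reaches the network – can be illustrated with a short, generic sketch. This is not Wuala’s actual (proprietary) code: it assumes the third-party Python cryptography package, and the chunk size and function names are arbitrary choices for illustration.

```python
# Generic sketch of client-side "encrypt, then fragment" storage, in the
# spirit of the principle described above: the provider only ever receives
# encrypted fragments, and the key never leaves the user's terminal.
# Not Wuala's actual code. Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

CHUNK_SIZE = 64 * 1024  # arbitrary fragment size for illustration

def prepare_for_upload(data: bytes):
    """Encrypt data locally, then split the ciphertext into fragments."""
    key = Fernet.generate_key()           # stays on the user's machine
    ciphertext = Fernet(key).encrypt(data)
    fragments = [ciphertext[i:i + CHUNK_SIZE]
                 for i in range(0, len(ciphertext), CHUNK_SIZE)]
    return key, fragments

def reassemble(key: bytes, fragments):
    """Recombine fragments fetched from peers and decrypt locally."""
    return Fernet(key).decrypt(b"".join(fragments))

if __name__ == "__main__":
    document = b"some private document" * 10000
    key, frags = prepare_for_upload(document)
    print(len(frags), "encrypted fragments ready to be dispersed to peers")
    assert reassemble(key, frags) == document
```

Because encryption happens before fragmentation and dispersal, whoever stores the fragments holds only opaque ciphertext, which is the point the developers wanted to make to their users.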

While developers, across all three case studies, consider a more articulate protection of privacy to be one of the core comparative advantages of their systems (and “sell” it as such), users wonder, in turn, about the implications of a decentralised architecture for the protection of their data. What does making a part of one’s own computing resources available to the whole P2P network imply for the “invisible” data collected there? In the cases of Faroo and Wuala, where the P2P model merges, in a peculiar way, with a proprietary software logic, this question is the occasion to make explicit the difficult articulation between the decentralising philosophy subtending the systems and a closed source code. Pioneer users – for the most part, user-innovators or user-developers themselves – see the closed code as a lack of transparency, even a lack of respect, that prevents them from delving into this aspect with the tools they have available. It is good to have privacy by architecture, these users point out, but they need direct knowledge of the technique on a case-by-case basis, so as to eventually allow for direct modifications of the architecture.

Decentralised models challenge “by architecture” the extent, the balance and the very definition of the rights obtained by service providers over users’ personal data, vis-à-vis the rights that users maintain over such data. There is a trade-off: on the one hand, the user sees her privacy reinforced by the possibility of increased control over her data and its handling by the P2P client. Simultaneously, and for the same reasons, her responsibility for the actions she undertakes within and by means of the application increases proportionately, as the provider voluntarily surrenders some of its control over the data and content present on the service. The collective dimension of this responsibility is also emphasised, inasmuch as infractions of the collective behaviour have not only individual but collective consequences – be it the storage of inappropriate content, the introduction of unreliable information or spam in a distributed search index, or a “selfish” management of the bandwidth shared by a P2P streaming system.

Conclusions: how architecture matters

“Arrangements of technical architecture have always inherently been arrangements of power,” writes STS scholar Laura DeNardis (2012): the technical architecture of networked systems does not only affect internet governance, but is internet governance. This governance by architecture, or “governance by design” (De Filippi, Dulong de Rosnay & Musiani, 2013), has important implications at a number of levels, of which the previous section has given but one example.

Changes in architectural design affect the distribution of competences and responsibilities between service providers, content producers, users and network operators. They affect forms of engagement and intéressement (Callon, 2006) in networked systems, of users first and foremost, but also of other actors concerned by the implementation and operation of internet services. They shape the sustainability of the underlying economic models and the technical and legal approaches to digital content and personal data. They make visible, in various configurations, the forms of interaction between the local and the global, and the patterns of articulation between the individual and the collective.

Changes in network architectures contribute to the shaping of user rights and of the ways to produce and enforce law, and are reconfigured in return. A number of legal issues that go well beyond copyright (despite having often been reduced to this aspect, notably in the case of peer-to-peer systems) are raised by architectural configurations of internet services. To preserve the internet’s “social value,” it is important to achieve reliable forms of regulation – technical, political, or both – without impeding present and future innovation.

Changes in architecture, finally, contribute to shifting the boundary between public and private uses of the internet as a global facility: they are a crucial factor in defining intellectual property rights, the right to privacy of users/clients, and their rights of access to content. They contribute to defining what a contributor is in internet-based services, in terms of the computing resources required for operating the system, and of content.

In the end, technical architecture appears as one of the strongest, if not the strongest, structuring elements of internet governance: what is shaped into architecture and infrastructure can seldom be undone by institutional negotiation and dialogue alone, and institutions find it increasingly complicated to keep up with “creative” governance by architecture and by infrastructure [4]. In this sense, future evolutions of internet governance as a field would do well to take into account Michel van Eeten and Milton Mueller’s suggestion to expand it to include areas such as the economics of cybercrime and cyber security, network neutrality, content filtering and regulation, copyright enforcement, and interconnection arrangements among ISPs (van Eeten & Mueller, 2013).

In the digital world, it is possible to design in detail the architecture of the world users interact with – and as a consequence, it is possible to design the architecture of our global communication infrastructure in order to promote specific types of interactions over others (De Filippi et al., 2013). With important consequences for the ways in which the future internet will be governed, and for the extent to which its users will be not only customers, but citizens.

Footnotes

1. Internet governance (IG) today is a lively, emerging field, and its definition is relentlessly contested by different groups across political and ideological lines. A “working definition” of IG was provided in the past, after the United Nations-initiated World Summit on the Information Society (WSIS), by the Working Group on Internet Governance – a definition that has reached wide consensus because of its inclusiveness, but is perhaps too broad to be useful for drawing the boundaries of the field more precisely (Malcolm, 2008): “Internet governance is the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet” (WGIG, 2005). This broad definition implies the involvement of a plurality of actors, and the possibility for them to deploy a plurality of governance mechanisms. IG has been described as a mix of technical coordination, standards, and policies (e.g., Malcolm, 2008 and Mueller, 2010). See also (DeNardis, 2013) and (Musiani, 2013a).

2. See this discussion of the terms of use of several social sites, among which Facebook and Instagram: http://www.nyccounsel.com/business-blogs-websites/who-owns-photos-and-videos-posted-on-facebook-or-twitter/

3. The decentralised mechanism underlying the Wuala system, a trade-off between local storage space and space in a “P2P storage cloud” distributed among users, was discontinued in September 2011.

4. An example is the Domain Name System and its co-optations. See (DeNardis, 2012) and (Musiani, 2013).

References

Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.

Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Benkler, Y. (2004). “Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production.” The Yale Law Journal, 114 (2), 273-358.

Callon, M. (2006). “Sociologie de l’acteur-réseau.” In Akrich, M., Callon, M. & Latour, B. Sociologie de la traduction. Textes fondateurs. Paris : Presses des Mines, 267-276.

De Filippi, P., M. Dulong de Rosnay & F. Musiani (2013). “Peer production online communities, distributed architectures and governance by design.” Communication presented at the Fourth Transforming Audiences Conference, September 3, 2013, University of Westminster, London.

DeNardis, L. (2013). “The Emerging Field of Internet Governance”, in W. Dutton (ed.) Oxford Handbook of Internet Studies. Oxford: Oxford University Press.

DeNardis, L. (2012). “The Turn to Infrastructure for Internet Governance”, Concurring Opinions, 2012, http://www.concurringopinions.com/archives/2012/04/the-turn-to-infrastructure-for-internet-governance.html

DeNardis, L. (2009). Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: The MIT Press.

Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.

Elkin-Koren, N. (2012). “Governing Access to User-Generated Content: The Changing Nature of Private Ordering in Digital Networks.” In Brousseau, E., Marzouki, M., Méadel, C. (eds.), Governance, Regulations and Powers on the Internet, Cambridge: Cambridge University Press.

Gai, A.-T. (2007). “Web 3.0: une autre branche pour l’arbre des possibles.” Transnets, http://pisani.blog.lemonde.fr/2007/02/17/web-30-une-autre-branche-pour-larbre-des-possibles/

Latour, B. (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press.

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Wembley, WA : Terminus Press.

Minar, N. & Hedlund, M. (2001). “A network of peers – Peer-to-peer models through the history of the Internet.” In A. Oram (Ed.), Peer-to-peer: Harnessing the Power of Disruptive Technologies, 9-20. Sebastopol, CA: O’Reilly.

Mitchell, W. J. (1995). City of Bits. Space, Place and the Infobahn. Cambridge, MA: The MIT Press.

Mueller, M. (2010). Networks and States: The Global Politics of Internet Governance. Cambridge, MA: The MIT Press.

Musiani, F. (2013a). “A Decentralized Domain Name System? User-Controlled Infrastructure as Alternative Internet Governance”. Presented at the 8th Media In Transition (MiT8) conference, May 3-5, 2013, Massachusetts Institute of Technology, Cambridge, MA. Available as draft at http://web.mit.edu/comm-forum/mit8/papers/Musiani_DecentralizedDNS_MiT8Paper.pdf

Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris, Presses des Mines.

Rasmussen, T. (2003). “On Distributed Society: The History of the Internet as a Guide to a Sociological Understanding of Communication and Society.” In G. Liestøl, A. Morrison & T. Rasmussen (eds.), Digital Media Revisited: Theoretical and Conceptual Innovation in Digital Domains. Cambridge, MA: The MIT Press.

Reidenberg, J. R. (1998). “Lex Informatica: The Formulation of Internet Policy Rules Through Technology.” Texas Law Review, 76 (3).

Schafer, V., H. Le Crosnier & F. Musiani (2011). La neutralité de l’Internet, un enjeu de communication. Paris: CNRS Editions/Les Essentiels d’Hermès.

Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.

van Eeten, M. & M. Mueller (2013). “Where Is the Governance in Internet Governance?” New Media & Society, 15 (5): 720-736.

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Working Group on Internet Governance (2005). Report of the Working Group on Internet Governance, Château de Bossey, June 2005, http://www.wgig.org/docs/WGIGREPORT.pdf

Wu, T. (2003). “When Code Isn’t Law.” Virginia Law Review, 89.


The governance of algorithms

This article, in English, also appeared in Internet Policy Review. Thanks to Frédéric Dubois, Uta Meier-Hahn, Madeline Carr and Johan Soderberg for their comments and reviews.

Governance by algorithms

09 Aug 2013 by Francesca Musiani

Note on title [1].

Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, and their performance scrutinised in numerous contexts. Yet, a lot of what constitutes “algorithms” beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations”[2] is often taken for granted. At the same time, they are “invoked as powerful entities that control, govern, sort, regulate, and shape everything from financial trades to news media”[3]. Recently (May 16-17, 2013), an interdisciplinary event organised at New York University addressed this issue through an interesting lens: that of governance – governance by algorithms in addition to governance of algorithms.

Taking stock of the event, which this author attended, the article seeks to contribute to the discussion of “what algorithms do” and in which ways they are artefacts of governance, providing two illustrative examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. Indeed, the question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms and algorithms’ regulation of our society.

The omnipresence of data, the consequences of their organisation

The role of invisibility in the classification processes that order human interaction, the procedures through which categories are made and kept invisible, the ways in which people can change this invisibility when necessary, and the extent to which systems of classification are crucial to the building of information infrastructures have been core preoccupations of science, technology and society scholars for several years.[4] Yet, the issue of information classification and organisation has perhaps never been as relevant as in our current times of “information overload”[5] and internet-mediated access to the vast majority of the information surrounding us.[6] Indeed, digital data seem to proliferate in the complex world of today, building on the variety of platforms and media that allow for dematerialisation and rapid circulation and distribution. They serve different purposes, from trading to surveillance, from evaluation to recommendation; they are listed, regrouped and organised by means of many devices, from search engines to e-commerce websites. While companies leverage the traces left by consumers on the web so as to better target, customise (and take advantage of) their next purchases and interactions, some users worry about the portraits that such traces allow others to paint of them, and about the impossibility of modifying or erasing them, left to the perusal of generations to come[7].

Several authors argue that we are currently entering the era of big data and algorithms, and that this “is a major breakthrough in the development of digital services (as it) gives decisive importance not only to the owners of data, but also and especially to those who can make them intelligible”[8]. The algorithms subtending the information and communication technologies we use daily, the internet first and foremost, are (also) artefacts of governance, arrangements of power and “politics by other means”[9].

The power of algorithms

By naming a conference held at New York University last May “Governing Algorithms”, its organisers were making a deliberate choice of ambiguity – hinting both at the governance of algorithms, i.e., the extent to which political regulation can affect the functioning of the instructions and procedures subtending technology, and at the governing power of algorithms themselves.

The ways in which the pervasiveness of algorithms in human society has political implications appear as a core issue of our times; algorithms are a key feature of both today’s information ecosystem[10] and underlying cultural norms[11], as they contribute to shaping the information we access and its organisation. In a recent paper, communication scholar Tarleton Gillespie highlights six dimensions of political valence for algorithms that have public relevance, i.e., those algorithms that are used to “select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions”[12]. These six dimensions are:

  • patterns of inclusion, the choices behind the constitution of an index, what is included and excluded in it, and how data is “prepared” for the algorithm;

  • cycles of anticipation, the consequences of attempts, by those creating the algorithms, to have information about their users and make predictions on their future behaviours;

  • the evaluation of relevance, the criteria by which algorithms determine what is not only relevant, but appropriate and legitimate;

  • the promise of objectivity, the way the technical nature of the algorithm is presented as a guarantee of impartiality, particularly in the case of controversy;

  • the entanglement with practice, the processes by which users reshape their practices to suit the algorithms they depend on, and turn algorithms into terrains for political contest;

  • finally, the production of calculated publics, the process of algorithmic presentation of publics back to themselves, and how this shapes a public’s sense of itself.[13]

These six dimensions bring to the fore two main consequences of the “computation” of our information society. By delegating to algorithms a number of tasks that would be impossible to perform manually, the process of submitting data to analysis is automated; and in turn, the results of these analyses automate decision-making. This double automation, in turn, poses the question of agency and control[14]. By asking questions such as: who are the arbiters of algorithms? Is algorithm design an assertion of authority over more than the algorithm itself? What is the autonomy of algorithms, if any? – what is examined is the accountability and the responsibility of algorithms as socio-technical artefacts, that of their creators and users, and ultimately, the balance of power facilitated or caused by algorithms.

Algorithmic governance: Part I. Web search

The ways in which the web gives more visibility to some information and content than to others are at the very heart of the recurring debate on the defining features of the digital space as a “public space.” According to Jürgen Habermas, the “father” of the public sphere concept, two conditions are necessary to structure a public space: freedom of expression, and discussion as a force of integration. The architecture of the “network of networks” seems to articulate these two conditions. However, if the first is frequently recognised as one of the widespread virtues of the internet, the second seems more uncertain[15]. In his book The Wealth of Networks, legal scholar Yochai Benkler argues for a global “order” intrinsic to the web, whose core feature is the fact that the selection of information is no longer the monopoly of gatekeepers, journalists, librarians and editors, but is delegated to internet users, now publishers in their own right. By citing and quoting one another in conversational niches, these individuals and groups single out quality information for algorithms, which, in turn, order and classify it and make it available in search engines.[16] Thus, the ordering of web-hosted information appears as a co-production and co-construction by internet users and computational tools.

The integration of the conversations and discussions taking place at the micro level is thus delegated to algorithms. The aggregated arguments that result from this integration are perceived as an “implicit universal consensus”; they have both the strengths and the weaknesses of any information that cannot be traced back to any specific individual and that, at the same time, results from a wide assemblage of opinions.[17] Search engines, and the multiple metrics underlying the internet, hierarchise the visibility of information by proposing it at the very beginning of search result lists, or relegating it to the end. By de facto deciding “what must be seen,” they can encourage or discourage controversy and discussion – while constructing the public agenda of political and social priorities in the process, as well as selecting the interlocutors who matter.[18]

In particular, owing to the current quasi-monopoly that Google holds on web search practices, its PageRank algorithm has been widely examined as the new gatekeeper[19] and “benevolent dictator”[20] of digital public spaces and spheres. The algorithm implements, according to a “recipe” that partly remains an industrial secret, different sets of measurement criteria that assess authority (according to the number of citations), audience (according to the number of visits or clicks), proximity and affinity (according to recommendations) or speed (according to real-time aggregation and relay of “hot” topics). PageRank, as the “master switch” of the internet,[21] centralises and organises the circulation of information in the network of networks, and for every search query arbitrates on what is important and relevant.
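
As a reminder of the publicly documented core of this “recipe” – the citation-based authority measure originally described by Brin and Page – here is a minimal power-iteration sketch on an invented toy link graph. It illustrates only that published idea; Google’s production ranking combines many additional, undisclosed signals, so this is not the algorithm that actually orders results today.

```python
# Minimal PageRank sketch (power iteration) on a toy link graph.
# Illustrates only the citation-based core idea, not Google's production system.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: redistribute its rank evenly.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

toy_web = {                      # hypothetical pages and their outgoing links
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

The intuition is visible even on this toy graph: the page that accumulates the most “citations”, directly or via highly cited pages, ends up at the top of the list.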

Algorithmic governance: Part II. Recommendations in e-commerce[22]

For some years now, online seller Amazon has been “a remarkable prescriber”, whose prescriptions are based on the recommendations of its readers/buyers. The vendor’s website makes it possible for each of its subscribed users to know, in a single click, about other purchases made in the past by users who have acquired the same title[23]. Personalised recommendations are nothing new in the world of book publishing and selling, be it digital or not. It is simply that, as a librarian ironically remarks, they have historically been “the exclusive purview of booksellers, librarians… and friends. Now your best friend for advice on reading is called ‘recommendation Al Gorithm’… and it loves you very much!”[24]

Indeed, it is on the systematisation and automation of a very widespread and very social phenomenon – the exchange of advice and guidance among users sharing preferences and affinities – that Amazon and other online sellers base their recommendation systems. Drawing on methods based both on content (considering two books “similar” if they share a large number of words) and on collaborative filtering (the intersection of lists containing particular books and lists based on previous records of books purchased or borrowed by readers), Amazon has developed an algorithm called “item-to-item collaborative filtering”. Its details remain an industrial secret, but the algorithm demonstrates its effectiveness every day in “personalising” recommendations according to the interests of each of its consumers. As its name suggests, rather than matching a user with similar users, this algorithm relates each item ordered and purchased by users to similar items, and eventually combines them in a recommendation list.[25]
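
A minimal sketch of the item-to-item idea, in the spirit of the Linden, Smith & York paper cited in note [25], may help: similarities are computed between items (via the sets of customers who bought them), not between users, and a customer is then shown the items most similar to those they already bought. The purchase histories below are invented, and Amazon’s production system is, of course, far more elaborate and proprietary.

```python
# Minimal sketch of item-to-item collaborative filtering: similarity is
# computed between items, not users. Invented data; not Amazon's system.
from collections import defaultdict
from math import sqrt

purchases = {                      # hypothetical customer -> items bought
    "alice": {"book_a", "book_b"},
    "bob":   {"book_a", "book_c"},
    "carol": {"book_b", "book_c", "book_d"},
    "dave":  {"book_a", "book_b", "book_d"},
}

# Invert to item -> set of buyers.
buyers = defaultdict(set)
for customer, items in purchases.items():
    for item in items:
        buyers[item].add(customer)

def similarity(item1, item2):
    """Cosine similarity between the buyer sets of two items."""
    common = len(buyers[item1] & buyers[item2])
    return common / (sqrt(len(buyers[item1])) * sqrt(len(buyers[item2])))

def recommend(customer, top_n=2):
    """Score every item the customer has not bought by its similarity to
    the items they did buy, and return the best candidates."""
    owned = purchases[customer]
    scores = defaultdict(float)
    for item in owned:
        for candidate in buyers:
            if candidate not in owned:
                scores[candidate] += similarity(item, candidate)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bob"))   # suggests titles bought alongside bob's books
```

Precomputing the item-to-item similarity table offline is what makes this approach scale to catalogues of millions of items, since at recommendation time only the few items a customer has bought need to be looked up.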

Behind this algorithm – and behind readers/buyers’ impression that Amazon knows their tastes very well, perhaps too well – lie years of research and experiments in a recent subfield of computer science whose practical applications are increasingly widespread, albeit discreet: data mining, in particular affinity analysis and market basket analysis.[26] For readers looking for new things to read, suggestions similar to their previously purchased articles are constructed by relying on a mix of several sources of information about them, feeding a large database where they are combined with other shopping histories. This information can range from the most obvious demographics about oneself and close relatives, to more complex assessments based on the sites one consults before arriving at Amazon, or on one’s clicking habits. The entanglements within this large database about the purchasing behaviour of users, activated in accordance with Amazon’s patented algorithm, are the basis of the suggestions familiar to the user, such as “Recommended because you bought…” or “Recommended because you added X to your wish list…”, and influence book purchases on Amazon every day.

Algorithms and rules, rule by algorithm

We live in an increasingly algorithmic world. This article has examined, in particular, two cases related to web-based information and communication technologies where the importance of algorithms is high and their presence pervasive. However, the invisible computational structures that guide our search results and our online purchases extend to a number of other contexts, in which algorithms are deployed and regulatory work has been insistently called for in the face of recent crises, from facial recognition software to financial markets.[27]

The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, and more generally, of the governance of the complex, automated systems that permeate today’s world.

The academic landscape in the interdisciplinary fields of communication studies, internet studies and science and technology studies reflects a thriving and increasing interest in this question. As an additional path towards answering the key question, “who does the algorithm serve?”, scholars also investigate the historical process from which the algorithm has emerged as a key topic of our times and attempt to situate it in the larger context of political economy.[28]

As not only academic research but also current news show ever more frequently,[29] two faces of the algorithms/rules relationship are currently under scrutiny, and are likely to be even more so in the near future. On the one hand, there is the issue of institutions’ ruling of algorithms. Should the locus of legal reasoning related to these systems shift to the coding of algorithms? Should regulation, or further regulation, of algorithms be pushed or advocated for in specific contexts? What would this regulation look like, would it even be possible, and what effects would it cause?[30]

On the other hand, the extent to which we live in a world ruled by algorithms has to be assessed. We need to research not only the extent to which, given the ubiquity of algorithms, they regulate us in a sense, but also “what it would mean to resist them”.[31]

References

[1] This article is partially a recollection and account of the Governing Algorithms conference held at New York University on May 16-17, 2013.

[2] Gillespie, Tarleton (2013). “The Relevance of Algorithms”. Forthcoming in Media Technologies: Essays on Communication, Materiality, and Society, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press. Available at http://governingalgorithms.org/wp-content/uploads/2013/05/1-paper-gilles…

[3] http://governingalgorithms.org/

[4] Bowker, Geoffrey C. & Susan Leigh Star (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: The MIT Press.

[5] Flew, Terry (2008) New Media: An Introduction (3rd Ed.). Oxford : Oxford University Press.

[6] Cardon, Dominique (2013). “Présentation”. Dossier Politique des algorithmes, Réseaux, 177 (1): 9-21.

[7] On the Internet’s “persistent memory” and the so-called “right to be forgotten”, championed by the EU in the recent past, see e.g. Beckles, C.-A. (2013). “Will the Right to Be Forgotten Lead to a Society That Was Forgotten?”, Privacy Perspectives, https://www.privacyassociation.org/privacy_perspectives/post/will_the_ri… or the critical Harris, L. (2013). “How to fix the EU’s ‘Right to be Forgotten’”, The Huffington Post, http://www.guardian.co.uk/technology/series/internet-privacy-the-right-t…

[8] Cardon (2013), p. 10.

[9] Latour, Bruno (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press , p. 229.

[10] Anderson, C. W. (2011). “Deliberative, agonistic, and algorithmic audiences: Journalism’s vision of its public in an age of audience”. Journal of Communication, 5: 529-547.

[11] Striphas, Ted (2009). The Late Age of Print: Everyday Book Culture from Consumerism to Control. New York, NY: Columbia University Press.

[12] Gillespie (2013), p. 2.

[13] Ibid., pp. 2-3.

[14] Barocas, Solon, Sophie Hood & Malte Ziewitz (2013). “Governing Algorithms: A Provocation Piece”. Discussion Paper for the Governing Algorithms conference, NYU, May 16-17, 2013. Available at SSRN: http://ssrn.com/abstract=2245322 or http://dx.doi.org/10.2139/ssrn.2245322

[15] Cardon (2013), p. 11.

[16] Benkler, Yochai (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press. (pp. 33-35).

[17] Geiger, Stuart (2009). “Does Habermas Understand the Internet? The Algorithmic Construction of the Blogo/Public Sphere”. Gnovis: A Journal of Communication, Culture and Technology, 1 (10).

[18] Cardon (2013), p. 11.

[19] Smith, Dan (2013). “Google: Gatekeeper of the Internet’s Grey Area”. The Telegraph, June 10, 2013. Available at http://www.telegraph.co.uk/sponsored/technology/technology-trends/101039…

[20] Masnick, M. (2008). “Google As Benevolent Dictator: The Gatekeeper and the Data Collector”. TechDirt, December 2008. Available at http://www.techdirt.com/articles/20081201/0119292980.shtml

[21] Wu, Tim (2010). The Master Switch: The Rise and Fall of Information Empires. Random House Digital, pp. 279-280.

[22] This section is partly based on an article I wrote in French in March 2012: Musiani, Francesca (2012). “‘Bienvenue sur votre Amazon’: les systèmes de recommandation d’ouvrages”, Labs Hadopi. Available at http://labs.hadopi.fr/actualites/bienvenue-sur-votre-amazon-les-systemes…

[23] Benhamou, Françoise (2012). “3e étape de la stratégie verticale d’Amazon”. Blog L’Eco(nomie) des Livres, October 24, 2012. Available at http://www.livreshebdo.fr/weblog/l-eco%28nomie%29-des-livres-24/776.aspx

[24] Lemaire, Alexandre (2011). “Madame Machine, pouvez-vous me conseiller un bon livre? Les nouveaux outils Web de recommandation de lectures”. Association des Bibliothécaires de France, June 27, 2011. Available at http://bibliolab.fr/cms/content/les-nouveaux-outils-web-de-recommandation

[25] Linden, Greg, Brent Smith & Jeremy York (2003). “Amazon.com Recommendations: Item-to-Item Collaborative Filtering”. IEEE Internet Computing, 7 (1): 76-80. Available at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1167344&userType…

[26] http://en.wikipedia.org/wiki/Affinity_analysis

[27] Hardt, Moritz (2013). “Occupy Algorithms: Will Algorithms Serve The 99%?” Response Paper for the Governing Algorithms Conference, NYU, May 17, 2013. Available at http://governingalgorithms.org/wp-content/uploads/2013/05/2-response-hardt.pdf

[28] Berry, David (2012). “The relevance of understanding code to international political economy”. International Politics, 49: 277–296.

[29] E.g. BBC News (2011). “Disappearing tycoon Souter blames Google”, September 12, 2011. Available at http://www.bbc.co.uk/news/technology-14884717

[30] Barocas, Hood & Ziewitz (2013), see supra note 14.

[31] Ibid.


A critical look at decentralised architectures

A 2012 article by Arvind Narayanan, Solon Barocas, Vincent Toubiana, Helen Nissenbaum and Dan Boneh, freely available online on the arXiv platform, takes a “critical look” at decentralised network architectures in application contexts that involve the processing of personal data.

The central problem the paper confronts is the gap between the promise of decentralised architectures, in the face of the progressive centralisation of service providers, and the lack, with a few exceptions, of large-scale adoption of these architectures. Alongside the advantages, the paper discusses the drawbacks of decentralisation, which, according to the authors, often remain hidden behind the promise of greater freedom and protection.

“…for all these efforts, decentralized personal data architectures have seen little adoption. This position paper attempts to account for these failures, challenging the accepted wisdom in the web community on the feasibility and desirability of these approaches.”

“for the most part decentralized social networking appears not to have anticipated the success of mainstream commercial, centralized social networks, but rather developed as a response to it.”

“we present some underappreciated drawbacks of decentralized architectures. Not all of these apply to all types of systems, nor is any of them individually a decisive factor. But collectively they may help explain why decentralization faces a steep road ahead, and why even if adopted, decentralization will not necessarily provide all the benefits that its proponents believe will automatically flow from it.”

“We hope to kick off a more tempered discussion of the future of personal data architectures in both scholarly and hobbyist/entrepreneurial circles, one that is informed by the lessons of history. There is much work to be done along these lines — application of economic theory can shed light on questions such as the relative strength of network effects in centralized vs. decentralized systems. Empirical methodology such as user and developer interviews would also be tremendously valuable.”

We are delighted to learn that we are contributing to an interesting endeavour. ;-)

 


Reinventing the Internet’s “Phone Book”? Institutions, Industries, Infrastructures

On April 19, 2013, Francesca Musiani, in her capacity as Yahoo! Fellow at the Institute for the Study of Diplomacy of the School of Foreign Service, Georgetown University (Washington, DC), organised a conference entitled “Reinventing the Internet’s ‘Phone Book’? Institutions, Industry and Infrastructure”. Since the creation of the internet, the use of domain names, addresses, protocols and other infrastructures underlying the “network of networks” as instruments of power and governance has played a crucial role in maintaining its stability through all of its evolutions. In today’s internet, these tools are increasingly leveraged by political entities for purposes different from those for which they were initially designed. This conference, of which a detailed account in English is presented below, addressed several themes dear to ADAM in its exploration of the political, social and technical implications of the “turn to infrastructure” in internet governance. The conference participants focused on a particularly controversial aspect of internet infrastructure: the Domain Name System (DNS), or the internet’s “phone book”. A PDF version of the report is available on the website of the Institute for the Study of Diplomacy.

Reinventing the Internet’s Phone Book? Institutions, Industry and Infrastructure

A Conference Account

Francesca Musiani (2012-13 Yahoo! Fellow in Residence, ISD, Georgetown University)

With the collaboration of Chris Haley & Allison Maranuk (2012-13 Yahoo! Junior Fellows, MSFS, Georgetown University)

Note to the Reader: This account is intended as a follow-up resource for conference participants, for individuals who expressed interest but were unable to attend the conference, and more broadly for people interested in internet governance, particularly DNS governance, issues. While we have paid a great deal of attention to being as accurate as possible, in no case should portions of this text be considered direct quotes from the speakers’ remarks. Thank you to all the speakers and moderators for sharing their insights, and to Chris and Allison for their diligent note-taking. I take full responsibility for any inaccuracies that remain. FM

On April 19, 2013, the Institute for the Study of Diplomacy at Georgetown University’s School of Foreign Service hosted a conference entitled “Reinventing the Internet’s Phone Book? Institutions, Industry and Infrastructure”. Since the Internet’s foundation, the use of domain names, addresses, protocols, and other underlying infrastructures as instruments of power and governance has been crucial in maintaining stability throughout its evolution. In today’s Internet landscape, these tools are increasingly being leveraged by political entities for purposes other than those for which they were designed. This conference set out to explore the political, social, and technical implications of this tendency, by focusing on a particularly controversial aspect of Internet infrastructure: the Domain Name System (DNS), or the Internet’s “phone book.” Three organizations and institutions co-sponsored the event: the Yahoo! Fund on Communications Technology, International Values, and the Global Internet; American University’s School of International Service; and the Global Internet Governance Academic Network (GigaNet).

 

Internet governance by infrastructure: the case of the Domain Name System

Francesca Musiani, Yahoo! Fellow in Residence at the ISD for 2012-13 and the event’s host, first introduced the topic of the day’s discussion. This required, initially, briefly touching upon the definition of internet governance, which she described, based on the 2005 definition by the Working Group on Internet Governance, as the development and application, by relevant actors in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. This definition, despite its inclusiveness, has been contested by differing groups across political and ideological lines. One of the main debates concerns the authority and participation of certain actors; in particular, the role of governments is central and ambiguous, and other aspects of internet governance are controlled by transnational organizations. One should be careful about simplifying ideological extremes in discussing IG: the public is sometimes under the impression, fostered by the media, that IG is entirely performed by a handful of institutions – which is not the case. All of this often leads to neglecting or disregarding what is, instead, a crucial aspect of internet governance: there are a number of components of the Internet’s infrastructure and technical architecture in whose design are embedded, to some extent, arrangements of governance. These are technologies and processes beneath the layer of content, inherently designed to keep the Internet operational: Internet Protocol addresses are one example, and there are many more, but the one this conference wishes to address is the Domain Name System, or DNS.

The DNS translates alphanumeric domain names into the associated IP addresses necessary for routing packets of information over the Internet. For this reason, it is often called the Internet’s “phone book”. It is a wide database management system, arranged hierarchically but distributed globally across countless servers. The Internet’s root name servers contain a master file known as the root zone file, listing the IP addresses and associated names of the official DNS servers for all top‐level domains (TLDs). The management of the DNS has always been a central task of Internet governance, and ICANN is ultimately responsible for managing the assignment of domain names (delegated through Internet registrars), and for controlling the root server system and the root zone file.
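
The lookup described above is what applications trigger every time a name is typed; a few lines of standard-library Python make it visible. The hostname queried here is just an illustrative example, and the resolution itself is delegated to the resolver configured on the machine, which in turn walks the hierarchy of name servers sketched above.

```python
# A name-to-address lookup as applications see it: the operating system's
# resolver queries the DNS hierarchy on our behalf and returns IP addresses.
import socket

hostname = "example.org"   # arbitrary example domain
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
    label = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}.get(family, str(family))
    print(label, sockaddr[0])
```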

There have been a number of controversies in this area, involving institutional and international power struggles over DNS control, and issues of legitimacy, democracy, and jurisdiction. Notably, debates have addressed the historical ties between ICANN and the United States government in the face of increasing internet globalization; this controversy continues to be a heated topic in Internet governance discussions. There are additional policy implications in the DNS: it was originally restricted to ASCII characters, precluding the possibility of domain names in many language scripts such as Arabic, Chinese or Russian. Internationalized domain names (IDNs) have now been introduced. Furthermore, in 2011, ICANN’s board voted to end most restrictions on generic top-level domain names (gTLDs), of which only 22 were then available. Companies and organizations will now be able to choose essentially arbitrary top-level Internet domains, with implications for consumers’ relationships to brands and for the ways they find information on the Internet. Further DNS issues concern the relationship between domain names and freedom of expression, security, and trademark dispute resolution for domain names.

While this covers quite a lot of ground already, this conference aimed at taking one further step. In recent years, we have witnessed a number of (more or less successful) attempts, by political and private entities, to co-opt infrastructures of Internet governance for purposes other than the ones they were initially designed for. Not only is there governance of infrastructure; governance is also carried out by infrastructure – by using infrastructure in “creative ways”, so to speak. As DeNardis (2011) explains: “Forces of globalization and technological change have diminished the capacity of sovereign nation states and media content producers to directly control information flows. This loss of control over content and the failure of laws and markets to regain this control have redirected political and economic battles into the realm of infrastructure.” Examples of how content mediation controversies have shifted into the realm of Internet governance infrastructure can be found, for instance, in the intentional outages of basic telecommunications and Internet infrastructures, enacted by governments via private actors, whether through protocols, application blocking, or termination of access services. The government-initiated Internet outages in Egypt and Libya, in the face of revolution and uprisings, have illustrated this and may have set a dangerous precedent.

However, the domain name system is perhaps, nowadays, the best illustration of this “governance by infrastructure” tendency. Domain name seizures that use the domain name system to redirect queries away from an entire website, rather than just the infringing content, have been considered a suitable means of intellectual property rights enforcement. DNS-based enforcement was also at the heart of controversies and Internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA). Governance by infrastructure enacted by private actors was also visible during the WikiLeaks saga, when Amazon and EveryDNS blocked WikiLeaks’ web hosting and domain name resolution services. The conference addressed these controversies, with the aim of understanding the extent to which matters of Internet governance using infrastructure entail not only issues of economic freedom, but also of Internet freedoms.

 

The DNS today: enforcement, security and mobilizations

The first panel, moderated by Derrick Cogburn, Associate Professor, School of International Service, American University, featured panelists Steve Crocker, CEO, Shinkuro, Inc. & Chair, ICANN Board, Matthew Schruers, CCIA & Adjunct Professor, Georgetown University, Scott McCormick, Consultant, McCormick ICT International, and Luke Pelican, Consultant, Ammori Group.

Dr. Steve Crocker, an Internet pioneer and author of the first Request for Comments of the Internet Engineering Task Force (IETF), has been involved in the development of the Internet since its beginnings in the late 1960s and 1970s. His opening remarks, he suggested, would probably be a counterpoint to the introductory talk and to most of the day’s discussions.

It is interesting to see how attractive the idea of Internet governance has become to such diverse groups, and the range of issues it covers. It could be useful to ask again the question: what is it that has to be governed? There are three main sets of issues.

First of all, we all have a shared interest in the system. A threat to its security is bad for the public as a whole, and maintaining its operation is important to everyone: the system has to continue to work. Contrary to popular belief, many threats are in fact not malicious; they are accidents, or are otherwise caused by the overloading of the system or of some of its components, and by disruption via single or multiple points of failure. Secondly, some coordination of scarce resources is needed; however, the extent to which there really are scarce resources on the Net is debatable. The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for maintaining unique identifiers in the domain name space. Originally there were four domains, and eventually it was decided to attach human-friendly names to those numbers. In the beginning, the majority of the connections were in the US, with only a few international ones; from the start, however, the idea was to make the system as distributed as possible, and over time pressures to expand it increased. Originally, there were about 4 billion IP addresses available. In the early days, it was thought that this number would last forever – now, the IPv4 system is close to depletion. We will see a rise of the IPv6 system, which will require a transition, and in this transition period there may be some issues, as IPv4 and IPv6 are not natively interoperable. Thirdly, some governance is needed for the suppression of undesired behavior, from impolite speech to identity theft, from espionage to extortion and, of course, child pornography. This is a controversial area, of course, because “one man’s freedom is another man’s pain”.
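
As a rough sense of the scale behind the IPv4-to-IPv6 transition mentioned above, the following back-of-the-envelope computation (a sketch added here for illustration, not part of the talk) compares the two address spaces:

```python
# IPv4 addresses are 32 bits long, IPv6 addresses are 128 bits long.
ipv4_space = 2 ** 32    # ~4.3 billion addresses
ipv6_space = 2 ** 128   # ~3.4 * 10**38 addresses

print(f"IPv4 address space : {ipv4_space:,}")
print(f"IPv6 address space : {float(ipv6_space):.2e}")
print(f"IPv6 is {float(ipv6_space // ipv4_space):.2e} times larger")
```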

As the Internet began to grow, there was some conversation about who would be in charge of all this. At first, Jon Postel single-handedly managed the system, simply updating the hosts.txt file when needed. Of course, this quickly became too much, so ICANN was created and incorporated as a non-profit in California. It has relations with the US government through the renewal of its contract with the Commerce Department to perform the Internet Assigned Numbers Authority (IANA) functions. Today, Internet governance brings in a lot of people who want to use the Internet as a pawn in the pursuit of their own objectives, but who are not acting in the Internet’s best interest. What has made the Internet blossom is keeping it as unrestricted as possible (in stark contrast to the telephone system), leaving innovation at the edges, and the same principle applies to the DNS. As there is no technical reason either to change the structure or to prohibit additional domain name systems from being created, ICANN’s latest “big decision” has been to lift most restrictions on gTLDs and to open up an application process.

Law scholar Matthew Schruers centered his remarks on the relationship between copyright and Internet architecture. As the Internet expands, the scope of government power becomes far more limited: governments have found it easier to regulate information intermediaries rather than the sources of information themselves. There are four regulatory forces, or tools: law, norms, architecture and markets. We are increasingly witnessing attempts to regulate architecture in order to regulate something else. SOPA and PIPA were the extension of Congress’s strategy of regulating intermediaries, and this included the DNS. Within these debates, and given the very different levels of technical competence on the Hill, the phone book model became really important, because it could clearly convey the idea that these laws were like removing pages from the phone book. As we will see later, SOPA and PIPA did not come into force because of widespread public outcry. These bills would have allowed law enforcement agencies to seize domain names as if they were physical property; by removing the domain name, users would still be able to reach a website by using its IP address, but would no longer be able to reach it by typing in its alphanumeric name – and for most people this is a big enough obstacle.
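
The “removing a page from the phone book” image can be illustrated with a small sketch: if name resolution fails (for instance because a domain has been seized or blocked at the DNS level), a client that already knows the server’s IP address can still open a connection to it. The domain and fallback address below are purely hypothetical placeholders.

```python
import socket

DOMAIN = "seized-example.org"   # hypothetical seized/blocked domain
KNOWN_IP = "203.0.113.10"       # hypothetical IP address, noted down beforehand

def reach(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

try:
    ip = socket.gethostbyname(DOMAIN)   # the "phone book" lookup
    print(f"{DOMAIN} resolves to {ip}")
except socket.gaierror:
    # DNS-level blocking removes the name, not the server itself.
    print(f"{DOMAIN} does not resolve; trying the known IP instead")
    print(f"reachable by IP: {reach(KNOWN_IP)}")
```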

The way in which architecture regulates is not the same way in which law regulates. Norms for a particular type of conduct are very fluid, in terms of the community and of how they apply; laws enforce themselves in a leaky way (especially IP law), and they need to be enforced by a judicial system. Architectural enforcement is, in this sense, “perfect”: with laws, compliance is voluntary, we comply with them by choice, while with architecture-based enforcement, compliance is coerced and there is no choice. Finally, law is inherently nuanced, and there are exceptions to it; architecture is absolute, it either allows a possibility or it doesn’t, and there is no room for exceptions. The US government provides an example of this: it recently used an intermediary, Go Daddy, to seize the domain names of a Spanish website whose activity was lawful in Spain but not, allegedly, in the US. Another example is the Dajaz1 website, which sometimes published pre-releases of songs (often leaked to the website by the music promoters themselves), leading the RIAA to urge the US government to seize the domain via the Utah-based Fast Domain, Inc. It turned out that the legal basis, in both of these cases, was not sound, and the sites were reinstated, but in the meantime free speech had been suppressed a priori for two years.

Luke Pelican introduced the SOPA/PIPA controversy and the role of civil society in successfully putting a stop to the legislation. Both bills (the acronyms stand respectively for Stop Online Piracy Act and PROTECT IP, itself an acronym of Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act) were aimed at combating digital piracy, and presented to the public as legislation that would help protect US jobs and industries. Critics, on the other hand, said these bills undermined Internet freedom and threatened free speech, and could actually harm the US economy, as startup companies dependent on user-created content were more likely to be sued under the legislation.

Further complicating the controversy were challenges in explaining some of the technical problems to the general public. Companies, public interest groups, and technical experts reviewed the technical provisions in the bills and raised their concerns publicly, concerns which other groups turned into meaningful action. Fight for the Future, an activist group, led a campaign against a related copyright bill in October 2011, arguing that if the bill became law, people like Justin Bieber could have been sent to jail instead of becoming musical successes. The “Bieber in Jail” campaign received a lot of attention from various media outlets and shows like the Colbert Report. During American Censorship Day, a protest of SOPA and PIPA held on November 16, 2011, several advocacy groups framed the issue of these bills as the imposition of an American censorship system rather than as a matter of fighting piracy. The blogging platform Tumblr auto-censored its site as part of this awareness campaign and encouraged its users to contact Congress. Overall, the American Censorship Day protests resulted in 84,000 phone calls and over a million emails to Congress, one of the biggest public outcries over an Internet-related issue. It had seemed a foregone conclusion that these bills would pass, so, on January 18, 2012, over 115,000 websites joined in a massive web “blackout” as part of a concerted effort to stop the legislation. DNS blocking provisions were included in both SOPA and PIPA; eventually, SOPA’s sponsor said he would remove these provisions after talking with technical experts. The SOPA/PIPA case is likely to have encouraged more people, including lawmakers and regulators, to learn some of the technical aspects of the Internet’s daily workings and to better understand how this facility we use every day actually works. And this is a positive outcome that exceeds the stalling of the bills.

 

New actors in Internet governance: privatization, infrastructure, alternatives

The afternoon panel, moderated by Nanette Levinson, Associate Professor, School of International Service, American University, broadened the discussion to evolutions in Internet governance and actor participation in it, from the private sector’s increasingly crucial role in content regulation, and in placing restrictions on freedom of expression, to peer production collectives proposing “creative disruption” as a response to infrastructure-based enforcement. The discussion featured panelists Fiona Alexander, Associate Administrator, Office of International Affairs, National Telecommunications and Information Administration; Matthew Hindman, Associate Professor, George Washington University; Francesca Musiani, Yahoo! Fellow, ISD, and Shane Tews, Chief Policy Officer, 463 Communications.

Francesca Musiani, Yahoo! Fellow at the ISD, presented preliminary findings from her current research project. She argued that, in a discussion about new actors and changing balances in IG, it was worth including a discussion about the people who think about “second-degree” governance by infrastructure: people who, instead of addressing the DNS in its current form, look for ways to build an alternative one.

Between 2010 and 2011, the WikiLeaks case prompted a new wave of discussions about a “new competing root-server” able to rival ICANN. An alternative domain name registry is envisaged: a decentralized, peer-to-peer (P2P) system in which volunteer users would each run a portion of the DNS on their own computer, so that any domain made temporarily inaccessible may still be accessible on the alternative registry. Instead of simply adding a number of DNS options to the ones already accepted and administered by ICANN and its registrars, this project would try to supersede ICANN in favor of a distributed, user infrastructure-based model. There are a number of issues and open questions with this project. Two fundamental operations are served by the DNS – name registration and name resolution – which are usually thought of jointly, but one could foresee replacing just one of them. The function that a P2P DNS project would be tackling (an alternative root? a .p2p top-level domain?) needs to be stabilized. P2P architecture does not allow for the simultaneous optimization of all needed features, but calls for compromises. Finally, even if the alternative takes hold, a long co-existence should be expected.
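
As a purely illustrative sketch of the registration/resolution distinction discussed above, the toy model below (my own simplification, not any actual P2P DNS proposal) hashes each domain name to decide which volunteer peer is responsible for storing its record; resolution then asks that same peer.

```python
import hashlib

class ToyP2PRegistry:
    """Toy model of a peer-to-peer name registry (illustration only)."""

    def __init__(self, peer_ids):
        # Each peer keeps its own small piece of the global name table.
        self.tables = {peer: {} for peer in peer_ids}
        self.peers = sorted(peer_ids)

    def _responsible_peer(self, name: str) -> str:
        # Deterministically map a name to one of the peers.
        digest = int(hashlib.sha256(name.encode()).hexdigest(), 16)
        return self.peers[digest % len(self.peers)]

    def register(self, name: str, ip: str) -> None:
        self.tables[self._responsible_peer(name)][name] = ip

    def resolve(self, name: str) -> str | None:
        return self.tables[self._responsible_peer(name)].get(name)

registry = ToyP2PRegistry(["peer-a", "peer-b", "peer-c"])
registry.register("example.p2p", "203.0.113.42")   # hypothetical name and address
print(registry.resolve("example.p2p"))
```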

There are social and political conditions of feasibility for radical alternatives such as P2P DNS. Should any of the decentralized DNS projects mature to the stage of substantial user appropriation, the crucial issue may become users’ trust in other users: users will need to rely on other peers in the network to direct them, and it is one thing to trust OpenDNS, Google, etc., but quite another to do the same with a random computer. And finally, it is a matter of governance: the original questions that cause P2P DNS proposals to proliferate are deeply political; they are about control, freedom, and censorship. Technical solutions to controversial issues that have a political component should, at some point, be accompanied by evolutions of institutions, lest the governance of the Internet be reduced to a war of surveillance and counter-surveillance technologies, of infrastructure cooptation and counter-cooptation.

 

The “Turn to Infrastructure” and the future of IG

In the conference’s final keynote, Laura DeNardis, Associate Professor in the School of Communication at American University, tied together the themes discussed during the day, placing particular emphasis on recently raised concerns about the future of Internet governance, and on the need to preserve interoperability. Most of these issues are discussed in her book “The Global War for Internet Governance”, forthcoming with Yale University Press. The book describes the different layers of how internet governance works; outlines the current state of global debates, and the balance of global political and economic powers related to Internet governance, civil liberties and national security, innovation policies and the preservation of the decentralized nature of the Internet.

Internet governance functions, even though technologically complex and often outside of public view, are becoming political proxies for global political struggles and conflicting values. In this context, the DNS is one important (and relatively well-working) component of a broader Internet global ecosystem. The very definition of Internet governance is contested but it is generally referred to as the design and administration of the technologies necessary to keep the Internet operational, as well as the debates around those technologies, such as critical Internet resources, standards, and protocols needed to operate the network. There is an intersection between Internet architecture and content mediation; people’s Internet access is cut off (or access restrictions are discussed) to control content sharing and communication. The evolutions in Internet connectivity, a highly private area mostly under the control of Internet companies and their agreements, raise a number of concerns in terms of stability and censorship. The conference has addressed three main themes.

First, “arrangements of technical architecture are arrangements of power” and “infrastructure is never just infrastructure”. Internet governance is also about some understanding of complex technical systems such as the DNS, and about large-scale debates and mobilization, such as the SOPA and PIPA debates; the technical complexity is often paralleled by the complexity of institutions, and political structures are often embedded in technological hybrids. As science and technology studies scholar Susan Leigh Star once said, we need to invert the common-sense notion of infrastructure, taking what has often been seen as ‘boring’ and behind the scenes and bringing it to the fore. Internet governance scholars such as the organizers of this conference, all involved in GigaNet, embrace this perspective in relation to Internet governance.

Second, information technology infrastructure is becoming a proxy for power control, a move that is bound to have a number of unintended consequences. Corporate media producers have lost power over the monetization of their content and are looking to infrastructure as a means of reacquiring that power; some global choke points, despite the Internet’s overall decentralization, do exist and the extent to which they are subject to “stress fractures” deserves close consideration. While these control points – some virtual, some material, most often a hybrid of both – do exist, there is often not enough public understanding of how technology works.

Third, the multi-stakeholder discussion often reveals its limits, mostly in contexts of privatization of Internet governance. Much Internet governance is carried out through new institutional forms, not by governments; examples are the regional Internet registries and the private telecom companies managing the Internet’s backbone. Privatized areas are enacting policies, and the crucial actor of Internet governance is often shifting from governments to the private sector. From “delegated censorship” to “delegated law enforcement”, the spotlight is on private entities.

These three themes raise the question of what are the challenges to the future of Internet governance, and therefore, to Internet freedom. First, there needs to be a focus on issues of interoperability, which is easy to take for granted.  In many ways, we have more connectivity than ever. But there is not interoperability between social media platforms, Internet voice software, or cloud computing services in the same way there is in email or web services. For example, Skype, while an excellent application, is based in part on proprietary approaches. There is a shift from an open, unified web in which the publication of open standards has helped foster innovation and compatibility among products to an environment that de-prioritizes interoperability and places constraints on interconnection. Constraints on interoperability are constraints on innovation itself.

The DNS is a foundational technical system necessary for the Internet’s operation, handling billions of queries per day, and it is increasingly used for content blocking functions for which it was not designed. If DNS query resolution is not universally consistent, this may have serious implications for the universality and stability of the global Internet.
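
One way to observe the kind of inconsistency mentioned above is simply to ask several resolvers the same question and compare their answers. A minimal sketch, assuming the third-party dnspython package is installed, and using well-known public resolver addresses purely as examples:

```python
# Requires the third-party "dnspython" package: pip install dnspython
import dns.exception
import dns.resolver

RESOLVERS = {
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
}

def answers(name: str, server: str) -> set[str]:
    """Return the set of A records a given resolver reports for a name."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        return {rr.address for rr in resolver.resolve(name, "A")}
    except dns.exception.DNSException:
        return set()   # blocked, filtered, or unreachable

name = "example.com"   # illustrative name
results = {label: answers(name, ip) for label, ip in RESOLVERS.items()}
print(results)
if len(set(map(frozenset, results.values()))) > 1:
    print("Resolvers disagree: resolution is not globally consistent for this name.")
```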

To conclude, the Internet is a very complex system, governed while in a state of constant flux; its governance entails issues of both private control and civil liberties; it requires technical design as well as new institutional reforms; and this governance is no more fixed than the technical architecture is. The consequences of changes to this system should be carefully examined as we move forward.

 

 

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech Yahoo! Fellow in Residence, Georgetown University

More Posts - Website

Les nouveaux gTLDs: le défi de l’Europe

Cet article, en anglais, a également paru sur l’Internet Policy Review le 6 juin 2013. Merci à Frédéric Dubois, Uta Meier-Hahn et deux relecteurs pour leurs commentaires et retours.

New global top-level domain names: Europe, the challenger

06 Jun 2013 by Francesca Musiani

“There are roughly two dozens now, but soon, there could be hundreds[1],” writes the Internet Corporation for Assigned Names and Numbers (ICANN), the organisation responsible for managing and coordinating the system of unique identifiers and names on the internet – on its webpage dedicated to the creation and forthcoming implementation of the new generic top-level domain names (gTLDs).

gTLDs are the highest level of domain names in the domain name system (DNS), and include .com, .net and .org; their number has been restricted to twenty-two for several years, and ICANN has imposed several restrictions on the ways in which they are operated. Thanks to the new gTLDs programme, businesses and organisations are now able to apply for their own customised top-level domain names, thereby greatly expanding their current number. ICANN’s move is the most recent controversial one in a subfield of internet governance – the management of the Domain Name System of the “network of networks” – which is already rife with political and economic controversies. What are the implications of this “turn” to new gTLDs? This article attempts to outline them, and addresses the impact of the new gTLDs programme on Europe’s action-taking in the internet governance realm. The article also considers the likely impact of the new programme on ICANN’s governance and weight vis-à-vis other important internet governance actors.

The Domain Name System and ICANN: an internet governance “hot potato”

The Domain Name System of the internet establishes the domain name space in the same way that the Internet Protocol establishes the Internet address space[2]. The DNS translates between alphanumeric domain names and their associated internet protocol (IP) addresses necessary for routing packets of information over the internet. For this reason, it is oftentimes called the internet’s “phone book”.

The DNS, through this address resolution process, handles billions of queries per day. In a very simplified way, the DNS can be described as a wide database management system, arranged hierarchically but distributed globally, across countless servers. The internet’s root name servers contain a master file, the root zone file, listing the IP addresses and associated names of the official DNS servers for all top‐level domains. The management of the DNS has always been a central task of internet governance, and ICANN is ultimately responsible for managing the assignment of domain names (delegated through internet registrars), and for controlling the root server system and the root zone file.

There have been a number of controversies in this area, which continue to this day, involving institutional and international power struggles over DNS control, and issues of legitimacy, democracy, and jurisdiction. Notably, debates have addressed the extent to which the privileged historical ties between ICANN and the United States government continue to exist, despite the increasing internationalisation of the internet, which may call for a more prominent role of other countries in ICANN governance; this controversy continues to be a heated topic in internet governance discussions. There are additional policy implications in the DNS: it was originally restricted to ASCII characters, precluding the possibility of domain names in many language scripts such as Arabic, Chinese or Russian. Internationalised domain names (IDNs) were introduced in May 2010. Further DNS issues concern the relationship between domain names and freedom of expression, security, and trademark dispute resolution for domain names.

The DNS is perhaps, nowadays, the best illustration of governments’ and companies’ tendency to govern or manage the internet by co-opting infrastructures of internet governance for purposes other than the ones they were initially designed for[3]. Domain name seizures that use the DNS to redirect queries away from an entire web site, rather than just the infringing content, have been considered a suitable means of intellectual property rights enforcement – to be carried out by internet registries, internet registrars, or even DNS operators such as internet service providers. DNS-based enforcement was at the heart of controversies and internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA) (Ammori, 2011). Governance by infrastructure enacted through the DNS by private actors was also visible during the WikiLeaks saga, when Amazon and EveryDNS blocked WikiLeaks’ web hosting and domain name resolution services[4].

Generic top-level domain names

Top-level domains are the highest level of domains in the DNS, installed in the root zone of the name space; generic TLDs, a category of these highest-level domains[5], are familiar to the public as widely used internet address suffixes such as .com, .net, and .org. They can be either unsponsored – domains that operate under policies established by ICANN “on behalf” of the global internet community – or sponsored, proposed and funded by private agencies or organisations that establish and enforce the rules restricting the use of the domain. The number of gTLDs has been slightly increasing since ICANN’s inception, but has stabilised at twenty-two for several years.

Over the years, the demand for more gTLDs has been constant, as has ICANN’s consideration of many proposals, by different actors, for practical ways to go about their implementation. These proposals range from the adoption of policies for unrestricted gTLDs to chartered gTLDs for specialised uses by dedicated organisations. ICANN’s new gTLD programme, approved in June 2011 under the banner of “promot[ing] competition in the domain name market while ensuring internet security and stability[6]”, ends most restrictions on gTLDs and allows businesses and other organisations to apply for their own customised top-level domain names. This constitutes the first significant expansion of the system in existence today, and may carry important implications for the future of the DNS – if not for the way the internet operates, then at least in terms of potential changes in “the way people find information on the internet or how businesses plan and structure their online presence[7]”.

The unveiling of the new gTLDs programme

Roughly a year after its announcement of the programme, ICANN held a press conference in London to mark the “Reveal Day[8]”, during which its Senior Vice President Kurt Pritz noted that over 500 companies and organisations had applied for nearly 2,000 TLDs. The announcement was not exempt from controversy, for a number of reasons. United States-based organisations and companies accounted for more than half of the applications, with the domain name registry Donuts applying for more than three times the number of gTLDs as the next largest applicant[9]. This US focus is possibly attributable to an issue of cost: ICANN set the fee for each TLD application at $185,000, while noting that financial assistance was provided to organisations that wanted to register for TLDs but could not meet the application fees. The geographical spread was, in fact, wider than ICANN expected – its CEO, Rod Beckstrom, was quoted as saying that “To have 17 applications from Africa is actually encouraging, it’s a significant expansion[10]”.

While emphasising the positive side of the programme’s goals (“enhancing competition and consumer choice, and enabling the benefits of innovation[11]”, in addition to increased control, innovative business models, and even community engagement and geographic celebration[12]), ICANN had been adamant about the responsibilities that applying for a new gTLD would entail. These include the preservation of some financial stability over a minimum of three years, compliance with all the obligations of the registry agreement with ICANN (with enhanced restrictions when running a community-based TLD), and employment of highly skilled technical operators. Thus, ICANN compared these responsibilities to those of Verisign[13], the American company currently operating two of the internet’s thirteen root name servers: “When you apply for a new gTLD you are applying to run a registry business. You will be responsible for a critical and highly visible piece of internet infrastructure. Just as Verisign is responsible for all the domain names registered in the .com top-level domain, so you would be responsible for all the domain names registered in your .something gTLD[14].” Additional risks were identified in unforeseen competition from unexpected sectors, and the “uncharted territory” that the new sector, with its lack of already-tested and proven business models, could entail for its pioneers.[15]

Are new gTLDs just around the corner?

The first implementation within the new gTLDs programme – i.e., the actual insertion of a new TLD into the internet root to render it operational – may happen within a few months. July 1, 2013, has been proposed as the earliest possible date, and a pilot programme is currently underway. This is earlier than had previously been anticipated, and for applicants as well as some users, this has been welcome news; however, all dates remain tentative. In particular, ICANN has underlined – at the very moment a March briefing by the organisation announced the schedule of the first release – that priority will be given to its core mission of preserving the technical stability of the internet’s naming and addressing system, which seems to imply that the first implementation will be delayed if its broad impact cannot be thoroughly assessed or raises concerns. IT consultant and former ICANN member Stéphane Van Gelder noted that “Security and Stability Reviews are ongoing as the program ramps up towards launch, with constant monitoring of the potential technical impact of new gTLDs going live. This will only happen once ICANN is satisfied that doing so carries no technical risk to the Internet[16].”

Earlier this year, ICANN’s Governmental Advisory Committee – the body that provides advice and input from governments to ICANN on issues of public policy, especially where there may be an interaction between ICANN’s activities and national laws, or international agreements – gave the ICANN Board its thoughts on the first batch of applications. While two applications received outright objections[17], governmental advice came for the most part in the form of “safeguards”. The Governmental Advisory Committee noted that specific categories of TLDs require additional protections or restrictions to be implemented; for example, it asked for the singular and plural versions of the same basic string not to be considered separately (e.g., .game and .games). It also requested that the signing of any new gTLD contract be dependent upon the completion of the new registrar contract currently being finalised[18].

European perplexities on content and procedures

The European Commission (EC) is not elated by the ways in which the programme is being carried out, and has expressed concerns about both the content of some applications and the procedures with which ICANN has handled government objections to new gTLDs[19]. On November 29, 2012, the EC, in the person of Linda Corugedo Steneberg, Director at the Communications Networks, Content and Technology Directorate, issued a letter to ICANN[20] with a list of 58 applications deemed problematic, including .sex, .sexy, .free, .green, .eco, .health, .doctor, .baby, .sale and .security[21].

However, the letter also pointed out that the EC’s initiative should not be considered as an Early Warning, i.e., a notice from ICANN’s Governmental Advisory Committee members that an application is seen as potentially sensitive or problematic by one or more governments[22]; instead, the listing of a new gTLD was to be considered as a signal that further discussions between the EC and the relevant applicant were necessary. The letter has also been interpreted as an implicit critique of ICANN’s procedures, pointing out that even if the Governmental Advisory Committee does not officially advise against these applications, the EC may decide to take other action against them: “the fact that the letter […] explicitly states that the warnings are definitely not official Early Warnings […] sends a worrying signal that the EC is not in the mood to play by ICANN’s rules[23].” In addition, the EC expressed its disappointment about the limited number of applications coming from developing countries, making explicit that “this is clearly an area where ICANN needs to re-focus its efforts[24].”

In the larger context of the relationship between sovereign governments and ICANN, the European Commission’s action is considered quite significant, because by explicitly opting out of the Early Warning process and naming its own list of potentially problematic gTLD applications, the EC is bypassing the Governmental Advisory Committee as ICANN’s prescribed process for governments and intergovernmental bodies to provide input on domain name policy matters. FairWinds Partners, a digital strategy consulting firm, interestingly concludes in this regard that “the European Commission brought new gTLD applications into the legal realm of legislation and policy, quietly implying that ICANN has no jurisdiction in such matters. The European Commission has sent the message that it is not within ICANN’s purview to oversee issues that impact a nation’s (or in this case, a union of nations) economy, culture, freedoms of speech and expression, or industry regulations – this power rests with the sovereign governments of those nations[25].” FairWinds further states that, in addition to echoing past criticisms of ICANN processes, the EC’s action “raises issues of adjudication: if other governments follow the European Commission’s lead or even take a step further by deeming whether or not a new gTLD is allowed to exist independently of ICANN’s assessment, who holds the ultimate authority to determine the fate of the gTLD, ICANN or the government?[26]” The letter of the EC could set a critical precedent.

ICANN weighing even more in the internet governance arena?

In a number of ways, the new gTLD programme makes ICANN even more central an actor in internet governance. By framing the programme as a promoter of competition in the domain name market, while at the same time seeking to maintain internet security and stability, ICANN’s activities and policies also have the potential, as the organisation itself underlines, to influence the way internet users find information online, or the ways in which companies arrange and display their online presence.

As a consequence, ICANN is now, more than ever, under the scrutiny of international actors, of which the European Commission is a notable example. Despite claims by ICANN that “this is a not-for-profit initiative [and if] the fee collection exceeds ICANN’s expenses, the community will be consulted as to how that excess should be used[27],” there are concerns that “what can’t be overlooked today is the fact that [the new gTLDs’] unveiling will be most beneficial for big business. Companies that don’t find themselves on or anywhere near the Fortune 500 list probably don’t have hundreds of thousands of dollars set aside for a rainy day, especially if that day approaches but the forecast is mixed[28]”. The fee set by ICANN may discourage most smaller businesses from applying, while it will not be a major issue for the bigger players.

Moreover, the argument is made that the actual implementation of the new gTLDs, which ICANN is pushing for July, may be premature, causing problems for the very internet security and DNS stability that ICANN claims to preserve. The concern comes from one of ICANN’s long-time supporters, Verisign. In a recent report[29], the company notes how little consideration ICANN has given to registry operators, which will need to prepare for the changes, including dealing with security implications that may affect the working of the whole internet[30]. Verisign appears to be implying that ICANN may be using the (excessively?) speedy implementation of the new gTLDs programme to reinforce its own powerful position in the internet governance landscape – and that, to pursue this primarily political objective, it may maintain this breakneck schedule to the detriment of internet stability, if necessary. Will the implementation of the new gTLDs reassure those who, like Verisign, feel that the programme displays an increasingly “ICANN-centric role[31]” in the governance of a critical area of internet infrastructure? Only the near future will tell, but one thing is certain: the new gTLD programme has important implications for both the stability and security of the internet’s infrastructure, and for the ways in which users experience the internet daily – from online search habits to e-commerce. As such, it should be implemented gradually and cautiously; ICANN has fifteen years of experience on which it can build to ensure that this is the case.

References

[1] http://newgtlds.icann.org/en/about/program

[2] Laura DeNardis, “The Emerging Field of Internet Governance”, in William Dutton (ed.) Oxford Handbook of Internet Studies. Oxford: Oxford University Press, 2013 [pre-print version available here http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1678343].

[3] The author has organised a recent conference on the topic at the Institute for the Study of Diplomacy, Georgetown University, http://internetphonebook.eventbrite.com/

[4] Cfr. my recent article on the Internet Policy Review, http://policyreview.info/articles/analysis/dangerous-liaisons-governments-companies-and-internet-governance

[5] Another one, perhaps the most popular among internet users, is ccTLD or “country code top-level domain”, including .us, .de and .fr.

[6] http://newgtlds.icann.org/en/applicants/customer-service/faqs/faqs-en

[7] Id.

[8] http://www.icann.org/en/news/press/kits/reveal-day-13jun12-en.htm

[9] Natasha Lomas (April 9, 2013). Donuts, A Register for New gTLDs, Raises Tens of Billions in Series B So It can Bid for More .Names. TechCrunch, http://techcrunch.com/2013/04/09/donuts-series-b/

[10] Ingrid Lunden (June 13, 2012). Icann Applicants For New TLDs Revealed As Part Of ‘Reveal Day’: The Full List. TechCrunch, http://techcrunch.com/2012/06/13/icann-applicants-for-new-tlds-revealed-the-full-list/

[11] http://newgtlds.icann.org/en/about/program

[12] http://newgtlds.icann.org/en/about/benefits-risks

[13] http://www.verisigninc.com/

[14] http://newgtlds.icann.org/en/about/benefits-risks

[15] Id.

[16] Stéphane Van Gelder (27 March 2013). First new gTLDs could be seen as early as July. NetNames, http://www.netnames.com/blog/2013/03/first-new-gtlds-could-be-seen-as-early-as-july/

[17] The two are .gcc (contested by some of the Gulf countries, claiming similarity between this string and the Gulf Cooperation Council) and .africa, submitted by DotConnectAfrica (for lack of official support by governments from the region, given to another identical application)

[18] ICANN’s Governmental Advisory Committee (11 April 2013). GAC Communique – Beijing, People’s Republic of China. https://gacweb.icann.org/download/attachments/27132037/Beijing%20Communique%20april2013_Final.pdf

[19] Kevin Murphy (November 27, 2012). Europe rejects ICANN’s authority as it warns of problems with 58 new gTLDs. Domain Incite, http://domainincite.com/11130-europe-rejects-icanns-authority-as-it-warns-of-problems-with-58-new-gtlds

[20] European Commission (November 29, 2012). Interim position of the European Commission concerning the applications for New gTLDs. http://www.icann.org/en/news/correspondence/steneberg-to-icann-board-27nov12-en

[21] Murphy, ibid.

[22] http://newgtlds.icann.org/en/applicants/gac-early-warning

[23] Murphy, ibid.

[24] European Commission, ibid. and David Goldstein (December 4, 2012) Europe Lists gTLD Applications of Concern Plus Disappointment in Developing Countries Applications (http://www.domainpulse.com/2012/12/04/eu-gtld-applications-concern/)

[25] FairWinds Partners (December 3, 2012). As the GAC’s World Turns. gTLD Strategy, http://www.gtldstrategy.com/policy-updates/as-the-gac%E2%80%99s-world-turns

[26] Id.

[27] http://newgtlds.icann.org/en/applicants/customer-service/faqs/faqs-en

[28] Jonah Berger (29 June 2011). ICANN Approves New gTLDs: SEO Implications. http://blog.performics.com/icann-approves-new-gtlds-seo-implications/

[29] United States Security and Exchange Commission (March 28, 2013). Form 8-K Current Report, Verisign, Inc. https://investor.verisign.com/secfiling.cfm?filingID=1014473-13-12&CIK=1014473

[30] Id. and Loek Essers & Grant Gross (April 2, 2013). Groups say ICANN unprepared for gTLD launch. InfoWorld, http://www.infoworld.com/t/internet/groups-say-icann-unprepared-gtld-launch-215675

[31] Id.

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech Yahoo! Fellow in Residence, Georgetown University

More Posts - Website

Freifunk [vidéo] – créateur de réseaux MESH

Freifunk, la communauté logicielle allemande créatrice de réseaux décentralisés MESH, revient sur son fonctionnement et son “projet” dans une vidéo très intéressante à visionner :

Voir aussi: http://wiki.freifunk.net/Kategorie:Fran%C3%A7ais

François Huguet

PhD student in Communication Studies at the Codesign Lab & Media Studies at Telecom ParisTech. Supervisor: Annie Gentès / Co-supervisor: Jérôme Denis

More Posts - Website


La richesse des communs

Est-il temps de penser à la “richesse des communs” plutôt qu’à leur tragédie ? C’est l’argument des soixante-treize auteurs d’un volume ainsi intitulé, librement disponible en ligne. De l’adoption des communs en tant que nouveau paradigme, jusqu’aux politiques nécessaires à son explicitation dans la pratique, ce livre se propose comme un guide vers une autre “manière d’organiser le monde”, capable de reconfigurer à la fois le marché et les institutions.

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech Yahoo! Fellow in Residence, Georgetown University

More Posts - Website

Compte-rendu de la journée d’étude sur les architectures distribuées

Journée d’étude du 11 mars 2013 au CERSA

Le droit et les architectures distribuées:

Concilier libertés et contrôle dans un espace ouvert et décentralisé

 

Présents:

François Huguet (doctorant à Telecom ParisTech), Pauline Le More (avocate au Barreau de Paris), Melanie Dulong De Rosnay (chargée de recherche à l’ISCC, CNRS), Hervé Le Crosnier (professeur à l’Université de Caen), Felix Tréguer (doctorant à l’EHESS), Primavera De Filippi (chercheuse au CERSA, CNRS), Danièle Bourcier (directrice de recherche au CERSA, CNRS).

 

Déroulement:

 

La journée d’étude a commencé par une présentation et une discussion autour de trois cas d’études:

 

Faroo

Faroo (dont le nom est dérivé du phare d’Alexandrie) est un moteur de recherche entièrement fondé sur des technologies pair-à-pair totalement décentralisées. Bien que le nombre de pages indexées soit encore relativement faible (environ 2 milliards de pages), le moteur de recherche s’avère cependant prometteur. Né en 2007, Faroo est alimenté par plus de 2 millions et demi d’utilisateurs, qui font tourner le logiciel sur leurs dispositifs. Cela permet la création d’une infrastructure plus flexible et dynamique – avec un impact négligeable sur les ressources informatiques : la connexion n’est utilisée que lorsqu’aucun autre transfert de paquets n’est en cours et la charge CPU est pratiquement inexistante. De plus, Faroo fonctionne avec une table de hachage distribuée (Distributed Hash Table), ce qui élimine toute possibilité de contrôle centralisé.
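
À titre d’illustration uniquement (il ne s’agit pas du code de Faroo), le petit croquis Python ci-dessous montre le principe d’une table de hachage distribuée appliqué à un index de recherche : chaque mot-clé est haché, et le résultat détermine quel pair est responsable de stocker la liste des pages correspondantes.

```python
import hashlib
from collections import defaultdict

# Exemple simplifié : trois pairs fictifs se partagent l'index.
PEERS = ["pair-1", "pair-2", "pair-3"]
index = {peer: defaultdict(list) for peer in PEERS}

def peer_for(keyword: str) -> str:
    """Le hachage du mot-clé désigne le pair responsable de ce mot-clé."""
    h = int(hashlib.sha1(keyword.encode("utf-8")).hexdigest(), 16)
    return PEERS[h % len(PEERS)]

def publish(keyword: str, url: str) -> None:
    """Un pair qui visite une page l'ajoute à l'index du pair responsable."""
    index[peer_for(keyword)][keyword].append(url)

def search(keyword: str) -> list[str]:
    """La recherche interroge directement le pair responsable, sans serveur central."""
    return index[peer_for(keyword)][keyword]

publish("architecture", "http://example.org/page1")   # URL fictive
print(search("architecture"), "- stocké chez", peer_for("architecture"))
```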

Nous nous sommes tout d’abord interrogés sur la question du “free-riding” : est-ce que le fait d’avoir une infrastructure distribuée peut conduire à un free-riding de l’infrastructure en réseau ? Dès lors que les utilisateurs agrègent leurs ressources afin de créer une infrastructure communautaire qui va ensuite exploiter la bande passante fournie par les FAI, ces derniers risquent en effet de devoir engager des coûts supplémentaires en raison du trafic généré.

Nous avons ensuite réfléchi à la question de la qualité et de la neutralité des résultats fournis. Les résultats des recherches sont en effet déterminés par un algorithme de recherche dont le fonctionnement est accessible à tous (Open Source). Cela a des répercussions importantes sur les résultats des recherches : d’une part, la transparence du fonctionnement permet d’assurer une neutralité algorithmique dans la classification des résultats de recherche ; d’autre part, il devient plus difficile de se protéger contre les pratiques de “search engine optimization”. De plus, étant donné qu’il est encore en version bêta et ne compte qu’un nombre limité d’utilisateurs, Faroo a actuellement tendance à ne renvoyer qu’un nombre limité de réponses aux requêtes de recherche, puisqu’il n’indexe que les sites que ses utilisateurs visitent. À cet égard, le fonctionnement de Faroo a été comparé au modèle de Seeks (un méta-moteur de recherche qui recueille et réordonne les résultats fournis par d’autres moteurs de recherche – tels que Bing ou Google – pour les reproposer dans un nouvel ordre, issu d’un compromis entre tous les résultats proposés, l’expérience de l’utilisateur et les recommandations faites par ses pairs).

D’un point de vue juridique, de nombreuses questions ont aussi été soulevées, notamment en ce qui concerne la protection des données personnelles. A priori, Faroo semblerait être un service qui permet aux utilisateurs de préserver leur vie privée, puisque les requêtes sont stockées sur l’ordinateur des utilisateurs, et non pas sur un index centralisé contrôlé par une entreprise qui pourrait les exploiter pour des finalités commerciales ou les communiquer aux autorités de justice.

En ce qui concerne les responsabilités des internautes, Hervé Le Crosnier a questionné la possibilité de considérer Faroo comme un “service d’infrastructure” – ce qui impliquerait que tout utilisateur contribuant à l’infrastructure serait en quelque sorte “responsable” du fonctionnement de ce service distribué. Pour ce qui est du droit de la concurrence, nous avons analysé l’effet que Faroo pourrait avoir sur les oligopoles établis (Google, Bing, Yahoo, etc.), et la possibilité pour une architecture décentralisée telle que Faroo de pénétrer le marché des moteurs de recherche, en dépit des effets de réseau existants. La notion de « facilité essentielle » (définie par la CJCE comme toute infrastructure indispensable pour assurer la liaison avec des clients et/ou permettre à des concurrents d’exercer leurs activités, mais dont la duplication est impossible ou déraisonnable, pour des raisons financières ou techniques) a aussi été abordée, en ce qui concerne la relation entre concurrence industrielle et concurrence déloyale.

D’un point de vue plus strictement économique, nous nous sommes interrogés sur la soutenabilité de ce système à long terme. Puisque Faroo ne prévoit aucun mécanisme visant à encourager la contribution des utilisateurs au sein de l’architecture en réseau (tel que le système de “tit-for-tat” utilisé par le réseau BitTorrent), comment s’assurer que tous ceux qui utilisent le service vont également y contribuer ? Il s’agit essentiellement de considérer le système non seulement comme une architecture répartie, mais aussi comme une plateforme collaborative et participative. En termes financiers, Faroo prévoit-il d’obtenir des profits en échange de ce service ? Et comment les revenus pourraient-ils être redistribués à la communauté de pairs qui contribuent au service – étant donné que la plupart des utilisateurs demeurent anonymes ?

Enfin, dans une perspective politique, nous avons réfléchi aux discours qui entourent cette plateforme. Faroo a été conçu non seulement afin de proposer une alternative aux moteurs de recherche centralisés, mais aussi et surtout afin de préserver la vie privée et l’anonymat des utilisateurs. Faroo peut-il donc être considéré comme une architecture technique d’émancipation citoyenne, conçue pour protéger les internautes contre les abus de pouvoir de la part des entreprises prédatrices et des États qui censurent ou qui surveillent les citoyens sur le Net ?

 

TOR

Tor (The Onion Router) est un réseau distribué conçu pour faciliter la communication anonyme. Basé sur le mécanisme de routage en oignon (onion routing), Tor introduit une nouvelle couche de communication chiffrée sur le réseau, de façon à camoufler non seulement les contenus, mais aussi l’origine et la destination des communications. Le principe du routage en oignon est un mécanisme par lequel les données sont chiffrées à plusieurs reprises avec des clés de chiffrement différentes, pour être ensuite envoyées à plusieurs nœuds dans le réseau. Les données sont donc protégées par plusieurs couches de chiffrement, qui seront progressivement retirées par chaque nœud, jusqu’à ce qu’elles arrivent à leur destination finale où elles seront à nouveau lisibles. Afin d’assurer une communication anonyme, les données transférées sur le réseau Tor doivent passer au minimum par trois routeurs différents. L’origine des communications n’est connue que des routeurs d’entrée, et la destination des communications n’est connue que des routeurs de sortie. Les autres nœuds du réseau sont des routeurs intermédiaires qui ne connaissent ni la provenance ni la destination des paquets qu’ils transfèrent.

Nous avons commencé par analyser plus en détail le fonctionnement technique du réseau TOR. La communication au sein du réseau est fondée sur trois couches de routage : avant de rejoindre le site de destination, chaque paquet passe par un premier nœud (qui connaît l’adresse IP d’origine), un deuxième nœud (qui ne connaît ni la provenance ni la destination des paquets), et un troisième nœud (qui connaît uniquement la destination des paquets). Chaque paquet est chiffré avec une clé correspondant à chacun de ces trois nœuds intermédiaires, afin qu’aucun de ces nœuds ne puisse connaître à la fois l’origine et la destination des paquets.
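
Pour illustrer ce principe de chiffrement en couches (simple croquis pédagogique, sans rapport avec l’implémentation réelle de Tor), l’exemple Python ci-dessous utilise la bibliothèque tierce « cryptography » : le message est chiffré successivement avec la clé du nœud de sortie, puis celle du nœud intermédiaire, puis celle du nœud d’entrée ; chaque nœud ne retire que sa propre couche.

```python
# Nécessite la bibliothèque tierce "cryptography" : pip install cryptography
from cryptography.fernet import Fernet

# Chaque nœud du circuit possède sa propre clé (entrée, milieu, sortie).
keys = {"entree": Fernet.generate_key(),
        "milieu": Fernet.generate_key(),
        "sortie": Fernet.generate_key()}

message = b"requete vers le site de destination"

# L'expediteur construit l'« oignon » : couche sortie, puis milieu, puis entree.
onion = message
for node in ["sortie", "milieu", "entree"]:
    onion = Fernet(keys[node]).encrypt(onion)

# Chaque nœud, dans l'ordre du circuit, retire uniquement sa propre couche.
for node in ["entree", "milieu", "sortie"]:
    onion = Fernet(keys[node]).decrypt(onion)
    print(f"apres le nœud {node} : {len(onion)} octets restants")

assert onion == message  # seul le dernier nœud voit le message en clair
```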

Nous nous sommes ensuite concentrés sur les applications de cette technologie, qui – malgré sa neutralité – est souvent utilisée à des fins illégales. Nous avons notamment analysé le cas de Silk Road : une place de marché sur Internet qui a pour particularité d’utiliser le réseau Tor pour assurer l’anonymat à la fois des acheteurs et des vendeurs. Un autre élément permettant l’anonymat réside dans l’utilisation quasi exclusive sur le site du Bitcoin, une monnaie électronique dont la possession n’est pas nominative et qui est complètement séparée du système bancaire international. Bien qu’en théorie on puisse acheter et vendre n’importe quelle marchandise sur le site, l’anonymat même relatif du service fait qu’il est utilisé essentiellement pour vendre ou acheter ce qui ne peut que difficilement être acheté ou vendu légalement, notamment des stupéfiants et des armes (voir cet article : Online drug dealers back on Silk Road after mysterious two-week outage).

Cela nous a menés à réfléchir aux enjeux juridiques soulevés par TOR, qui présente de nombreux avantages et fonctionnalités qui ne sont pas illégales en tant que telles, mais qui entraînent cependant un besoin pour l’État de surveiller les communications qui passent sur le réseau afin de pouvoir assurer l’ordre public. En cas d’utilisation illégale, telle que par exemple le téléchargement d’images pédopornographiques, la police va récupérer l’adresse IP qui s’est connectée au serveur et supposer qu’il s’agit de l’adresse du visiteur en question – bien qu’il s’agisse en fait de l’adresse du nœud de sortie. C’est pour cette raison que TOR maintient une liste de tous les nœuds de sortie disponibles à chaque instant.

Le dilemme est donc le suivant : si les nœuds de sortie ne sont pas considérés comme responsables du trafic qu’ils transfèrent, tous les criminels seront amenés à se présenter comme nœuds de sortie afin de pouvoir effectuer des activités illégales sans devoir se cacher. Inversement, si les nœuds de sortie sont considérés comme responsables du trafic qui passe par eux, plus personne ne se proposera en tant que nœud de sortie, et le réseau TOR sera donc amené à mourir.

Pour répondre à ce dilemme, il est tout d’abord nécessaire de distinguer entre le rôle actif et le rôle passif des intermédiaires. Un intermédiaire passif (tel qu’un simple canal qui ne fait que transférer des paquets) est soumis à un régime de responsabilité limitée. Un intermédiaire engagé, qui a un rôle plus actif, sera en revanche soumis à un régime de responsabilité plus strict. Les architectures décentralisées telles que le réseau TOR utilisent les utilisateurs du réseau comme de simples tuyaux. Le problème, c’est qu’en ne regardant que les logs, il est impossible de déterminer si le nœud de sortie avait un rôle actif ou passif dans la communication.

Nous nous sommes donc interrogés tout d’abord sur la question de la responsabilité juridique des FAI et des hébergeurs – pour étendre ensuite la question à l’usage des VPN et des proxys, afin de savoir s’ils sont tenus responsables des activités illicites de leurs utilisateurs. Tout d’abord, nous avons remarqué que, alors que – d’après la directive européenne sur la conservation des données – les FAI sont soumis à l’obligation de conserver les données de leurs utilisateurs pendant plusieurs années, afin de les dévoiler aux autorités judiciaires en cas de besoin, cette obligation ne s’étend pas aux VPN, qui ne sont pas soumis à la directive et qui ne sont donc a priori pas obligés de logger les communications qu’ils transfèrent. De plus, il n’y a pas actuellement d’obligation particulière de surveiller les communications pour les intermédiaires qui se comportent en tant que simples transporteurs (« mere conduit », tel que défini par la directive eCommerce et l’article L32-3-3 du Code des postes et télécoms français) – et l’application d’une couche de chiffrement (ou de déchiffrement) ne remet pas en cause ce statut, qui est purement technique. Dans le cas d’un VPN, il est possible que l’opérateur du VPN soit contraint de collaborer pour faire identifier l’auteur de communications litigieuses (par exemple en demandant les logs), même si ce cas de figure n’est pas explicitement reconnu dans la directive eCommerce. De même, un proxy d’anonymisation pourrait être visé par une action en cessation – afin de bloquer, par exemple, certaines informations en provenance ou à destination d’une adresse IP spécifique. Le manque de jurisprudence rend la spéculation difficile, mais cela pourrait conduire à contraindre un nœud Tor de cesser complètement son activité. Enfin, en ce qui concerne le régime de responsabilité des individus qui agissent en tant qu’intermédiaires sur Internet, la situation change selon les pays. En France, par exemple, la loi HADOPI considère que toute personne qui ne sécurise pas sa connexion commet un délit de négligence, alors qu’aux États-Unis et en Allemagne, les titulaires de connexion Internet sont exemptés de responsabilité s’ils arrivent à prouver que les activités illégales ont été commises par une autre personne.

La difficulté principale tient au fait qu’il est très difficile d’établir un compromis entre anonymat et sécurité. Il s’agit d’une dichotomie qui ne peut être résolue que par une réponse binaire : veut-on contrôler toutes les communications sur le réseau, ou bien accepter la liberté de communications anonymes ? Si l’on veut préserver l’anonymat, il faut renoncer à réguler les communications (en éliminant des lois telles que le droit d’auteur ou la loi contre la diffamation) et sauvegarder le rôle des intermédiaires passifs, au risque de faciliter l’activité de certains criminels. Si l’on ne veut pas autoriser l’anonymat, il faut dans ce cas responsabiliser tous les intermédiaires, détruire la structure décentralisée du réseau et établir des points de contrôle fiables et centralisés (tels que les FAI ou les hébergeurs).

Finally, we analysed the possibility of subverting the Tor network in order to spy on its users' communications. An exit node can in fact see exactly what people are connecting to and what the content of their communications is, insofar as that content has not been encrypted beforehand.
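
A minimal sketch may help illustrate why the exit node occupies this position. The following toy model of onion routing is an illustration under simplifying assumptions, not Tor's actual implementation: a repeating-key XOR stands in for the real layered encryption, and the three relays are plain Python objects rather than networked machines. Each relay strips exactly one layer, so the exit relay ends up holding the payload in the clear unless the sender applied end-to-end encryption on top.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher: XOR with a repeating key (not secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Relay:
    """A relay knows only its own layer key; it peels one layer and forwards."""
    def __init__(self, name: str):
        self.name = name
        self.key = os.urandom(16)

    def peel(self, onion: bytes) -> bytes:
        return xor(onion, self.key)  # remove exactly one layer

entry, middle, exit_ = Relay("entry"), Relay("middle"), Relay("exit")

payload = b"GET /private-page HTTP/1.1"   # no end-to-end encryption applied
onion = payload
for relay in (exit_, middle, entry):      # wrap the layers inside-out
    onion = xor(onion, relay.key)

# The circuit: each hop removes the one layer it knows about.
at_middle = entry.peel(onion)
at_exit = middle.peel(at_middle)
cleartext_at_exit = exit_.peel(at_exit)

print(cleartext_at_exit)  # b'GET /private-page HTTP/1.1' -- visible to the exit relay
```

The entry relay, by contrast, knows who is talking but sees only an opaque blob; it is precisely this asymmetry that makes the exit node's position, both technical and legal, so ambiguous.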

To close, Hervé Le Crosnier discussed the myth of the digital utopia, the idea that technology makes it possible to escape every form of control. By combining encryption technologies, Tor, Bitcoin and so on, one can achieve a very high level of anonymity, but these technologies only protect the sender, not the correspondents or recipients. Moreover, should the entry nodes of the Tor network always be trusted? How can infiltration of the network by secret services or cyber-police be avoided? While it is true that legal rules are often difficult to enforce on a transnational network such as the Internet, technology alone cannot regulate the network either. Several protective tools must be combined, but security can only be achieved through the practical use of these tools in a social context.

In conclusion, we recalled that the regulation of the network is achieved not only through legal or technical norms, but also through social norms, and that these different systems of norms interact with one another. Advances in digital technology call for changes in the law, but legal reforms in turn drive the development of new technological tools. Indeed, the production of new legal rules can have unintended effects: the harder they try to control the network, the more they encourage the development of decentralised networks designed precisely to escape that control. In France, for example, the HADOPI law has pushed many Internet users to share files on peer-to-peer networks via VPNs or the TOR network. These technologies make the work of the police and intelligence services even harder, particularly when it comes to arresting actual criminals or terrorists. It is therefore important to introduce more flexibility for the least serious offences, so as not to push decentralised infrastructures to develop in ever more sophisticated ways in the service of ever more serious offences.


Commotion

Commotion Wireless is an open source project built around a non-hierarchical mesh network topology. In this type of network (wired or wireless), all hosts form a net-like structure: each node receives, sends and relays data, and if one host goes down, its neighbours route around it. A mesh network can relay data either by flooding or by routing; in the latter case, the network must maintain uninterrupted connections or compute a detour when a link fails. Commotion's communications are designed to be encrypted, and the decentralised architecture is meant to complicate any surveillance (including political surveillance).
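
The rerouting behaviour described above is easy to see on a toy model. The sketch below is only an illustration under simplifying assumptions (hypothetical node names, an in-memory adjacency map, breadth-first search instead of a real mesh routing protocol): when a node fails, its neighbours simply find another path.

```python
from collections import deque

# A hypothetical five-node mesh: each node lists the neighbours it can reach.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(mesh, src, dst, down=frozenset()):
    """Breadth-first search for a path from src to dst, skipping failed nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in mesh[node] - seen - set(down):
            seen.add(neighbour)
            queue.append(path + [neighbour])
    return None  # destination unreachable

print(route(mesh, "A", "E"))              # ['A', 'B', 'D', 'E']
print(route(mesh, "A", "E", down={"B"}))  # ['A', 'C', 'D', 'E'] -- traffic routes around B
```

Real mesh routing protocols do this continuously and in a distributed fashion, with no central coordinator, which is exactly what makes the resulting network hard to shut down or monitor from a single point.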

Unlike fixed connections, users do not need to subscribe to an ISP to participate in the network, and they are identifiable only by the hardware they use. Every device connected to the network is a node: it sends and receives information over Wi-Fi and thereby extends the network's reach. The aim of this mesh network is to provide a means of communication that is anonymous, decentralised, secure and uncontrollable. Commotion makes it possible to build autonomous mesh networks out of devices operating on Wi-Fi frequencies. Although no connection to another network (the Internet, for example) is required, any computer, mobile phone, smartphone or other device that is connected to the Internet can share that access with the whole network. Similar to other mesh networks, such as openmeshproject, Commotion stands out for having received 2.3 M$ from Google and 2 M$ from the US government.

Thus, unlike the two systems presented earlier, which require a centralised point of connection to an ISP, what we have here is a form of total decentralisation: not only at the software or digital level, but also at the level of the network infrastructure itself. Users of the mesh network do not need to connect through an ISP; they can connect directly to their peers. In addition, for security reasons, an encryption scheme is also planned in order to protect the data and preserve its anonymity.

Commotion can thus be seen as symptomatic of this headlong flight to escape growing state surveillance. Whereas with TOR there is always the risk that an ISP handles the communications leaving the network, in a mesh network users can evade surveillance altogether by bypassing ISPs entirely. Indeed, although a gateway is needed to connect to the Internet, it is possible to create community sub-networks that remain invisible to the rest of the Internet's users.

As for Commotion's applications, it has so far mostly been used to provide Internet connections in poor countries or neighbourhoods, or in places where natural disasters have destroyed the network infrastructure (e.g. Fukushima, Haiti, hurricane Sandy). Commotion is also used as a community project with an educational purpose.

From a technical standpoint, the mesh architecture is in a sense a reconstruction of the Internet as it was originally conceived: a military tool able to reorganise itself dynamically according to whatever infrastructure is available. The advantage of a mesh network over the Internet is that it is independent, more robust and more resilient.

We then turned to the question of how the law can apply to this type of network. If there is no longer any centralised entity (such as an ISP) able to manage communications, surveillance has to take place at the level of the device itself. This is where trusted computing technologies come in, making it possible to trace users' actions and to determine the legitimacy of each one. The same logic can be seen in the way the Internet's architecture is evolving, gradually turning into an ever more vertically integrated network, with control extending from cloud computing down to the end user's device.

To conclude, we expect mesh networks to develop rapidly in the coming years, with the multiplication of sensors and transmitters that can be strategically positioned around the city, and the deployment of ever smaller mobile devices that are easy to carry around (e.g. wearable computing).

The problem is that a large-scale mesh network can only be deployed if the state decides to free up the white spaces (currently unused frequencies), because the Wi-Fi bands will quickly become saturated. As with the free radio movement, the liberalisation of the spectrum enabled important political changes, even though today, twenty years later, there have been many disappointments compared with the initial expectations.

Felix Tréguer notes that, at the European level, there is only very limited awareness of the economic and political stakes of liberalising radio frequencies. While the Parliament is relatively receptive to using white spaces to reduce the digital divide, the Commission leans instead towards a form of openness focused on sharing frequencies among the large operators. Although the Commission is sensitive to arguments about the need to broaden Internet access, it approaches the issue from a purely economic angle, with the aim of creating a large digital market connecting Internet users to the big commercial platforms. In his view, part of the problem is that the people dealing with these questions are engineers who do not always fully grasp the importance of mobilising politically; conversely, it is sometimes difficult for the members of civil society best embedded in institutional circles to master all of the technical issues at stake.

 

Liaisons dangereuses? Gouvernements, entreprises et gouvernance d’Internet

This article, in English, also appeared on the newly launched Internet Policy Review. Thanks to Frédéric Dubois, Uta Meier-Hahn, Primavera de Filippi and Joris van Hoboken for their careful reading.

Dangerous Liaisons? Governments, companies and Internet governance

18 Feb 2013 by Francesca Musiani

Private actors in the information technology sector are currently playing an increasingly important role in content mediation, as well as in regulation of online forms of expression, with implications for both internet rights and economic freedom.

The latest Google Transparency Report (Google, 2013), released on January 24, 2013, sends a clear and somewhat disquieting message to the advocates of more transparent internet governance worldwide. Several governments in the European Union are submitting a steadily increasing number of requests to the giant of online information search, with two purposes: the acquisition of several types of sensitive information about internet users – including their IP addresses, browsing and navigation history, and email communications – and the removal of specific content. This “dramatic” (EDRi-gram, 2013) increase raises questions about the very nature of the relationship, or partnership, between political institutions and the ‘majors’ of the IT sector. This is true for online privacy, but also for the legitimacy and transparency of the net’s gatekeepers and, overall, for internet governance – an ongoing multi-stakeholder development of shared principles, norms, rules, decision-making procedures and programmes that shape the evolution and utilisation of the internet.

Privatisation of internet governance

Quoted by the BBC, Privacy International’s head of international advocacy, Carly Nyst, says: “The information we hand over to companies like Google paints a detailed picture of who we are – from our political and religious views to our friendships, associations and locations. Governments must stop treating the user data held by corporations as a treasure trove of information they can mine whenever they please, with little or no judicial authorisation.” (BBC, 2013) While privacy concerns that may derive from third party access of such information are perhaps the first that come to mind, this article focusses mainly on the second most relevant result from the report: the fact that private companies such as Google are increasingly mobilised or solicited by governments to act as content mediators and de facto core actors in internet governance. The Report’s statistics are the latest – but not the last – illustration of how internet governance is increasingly being handled by private companies, and of industry’s “heightened role in regulating content and governing expression as well as responding to restrictions on expression” (DeNardis, 2012), most often under mandate or instructions of governments.

This phenomenon, which internet governance scholar Laura DeNardis has recently and concisely described as the “privatisation of internet governance” (DeNardis, 2010), is not a new dynamic, inasmuch as industry and the technical community have always played a fundamental role in how the internet is designed and managed, be it by contributing to standard-setting or infrastructure management. However, in a scenario in which users are taking advantage of increasingly sophisticated technology, the centralisation and concentration characterising today’s most widespread internet services are contributing to the accentuation of this tendency: “a small number of internet service providers concentrate a large part of the people’s online activities and time, their personal data and social networks, they exercise a considerable power on their users through the mere application of their terms of uses. […] Some services even play roles that used to be the monopoly of states, such as guaranteeing their users’ identity or maintaining social order online” (Arsène, 2012).

Indeed, as the journalist and co-founder of Global Voices Online, Rebecca MacKinnon, has pointed out, the millions of users – and the pervasiveness in their lives – of the Googles and Facebooks of today make these companies comparable to virtual “countries”, traditionally coinciding with the scope and jurisdiction of nation states (MacKinnon, 2012). There are indications that “various types of private ordering increasingly perform internet governance functions […] Private industry internet governance often takes place at the level of infrastructure management, an area fairly invisible to the public. In other areas, the role of private industry in ordering the flow of information over the internet is much more visible and well understood” (DeNardis, 2010). This is certainly the case for content removal actions, requested by governments or intergovernmental organisations and enacted by private companies, which constitute most of Google’s Transparency Report – and prominently feature EU states.

Internet intermediaries as information gatekeepers

The extent of the control (and, as a consequence, the responsibility vis-à-vis users) that the great information services, intermediaries and gatekeepers of today’s internet exert over user-generated, online-published content became particularly evident in September 2012, when Google, owner of the very popular video streaming service YouTube, decided to block access to the infamous video Innocence of Muslims, ridiculing the Prophet Muhammad, in two of the countries that had experienced severe upheavals, Egypt and Libya – while at the same time choosing not to remove it completely from its website. On that occasion, Peter Spiro, a professor of law at Temple University in the United States, declared to the New York Times: “Google is the world’s gatekeeper for information, so if Google wants to define the First Amendment to exclude this sort of material then there’s not a lot the rest of the world can do about it [and] it makes this episode an even more significant one if Google broadens the block” (Cain Miller, 2012). Indeed, in this prominent case and in all of those mentioned in the recent transparency report, Google’s actions as a content mediator demonstrate the “court-of-law-like powers of internet information intermediaries, de facto able to decide what content remains public, and what is taken out” (Musiani, 2012).

The crucial role of the private sector in internet governance has also come to the attention of the media on the occasion of the 2010 WikiLeaks U.S. diplomatic cables controversy. EveryDNS suspended its domain name resolution service, impeding the correct functioning of one of the internet’s “phone books” for the WikiLeaks website and thus effectively preventing the non-technically savvy public around the world from accessing it, eliciting reactions such as “This has come about through the actions of the U.S. Government. The government’s statements about Wikileaks have forced companies to analyze their Terms of Service […] In the name of security, our government has decided to force others to block information that they fear would have terrible, terrible impacts” (Williams, 2010).

Amazon blocked its hosting of the wikileaks.org website after an inquiry by the U.S. Senate Homeland Security Committee, prompting Ryan Calo, a lecturer at Stanford University’s Center for Internet and Society, to declare to Reuters that under U.S. law, Amazon would likely have been shielded from any possible prosecution by the government: “It would set a dangerous precedent were companies like Amazon to take down things merely because the senator or another government entity started to ask questions about them” (Pelofsky, 2010).

Online financial services firms PayPal, Visa and MasterCard blocked the financial flow of money to WikiLeaks.

Taken together, these measures de facto cancelled WikiLeaks’ visibility on the internet and prevented public access to ‘leaked’ content. Regardless of the delicate or classified status of that particular content, the WikiLeaks case raises broader questions about the amount of control that internet companies can exert alongside governments on what should, and should not, be publicly and freely expressed online, and highlights the difficult positions that content platforms are sometimes put in – because of the responsibility that their “privileged gatekeeper status” entails.

Informal dimensions of internet governance

The increasing importance of the private sector in internet governance should not be exclusively read in terms of its potential restrictions to freedom of expression, threats to internet users’ privacy, and accountability to the public. The role of private industry is also, and historically, a positive one of market-spurred innovation and technical effectiveness in networked distribution, communication and interaction (DeNardis, 2010).

At the same time, “network providers deploy network-address translators that let several devices share the same address. To protect their networks against attacks, organizations put firewalls that block potentially harmful applications at the borders of their private networks. To increase their profits, network providers use technologies that enable them to identify and control the applications and the content passing through their networks” (Van Schewick, 2011).

All of these actions, in addition to the uses of internet infrastructure for content mediation discussed above, are an integral part of internet governance, albeit a more informal one. These private actions are often de-prioritised in internet governance discourse to make way for more institutionalised or politically traditional forms of governance: governments, international and supranational organisations, multi-stakeholder fora, and organised civil society.

As information studies scholar Michel van Eeten interestingly pointed out, “A very large part of the internet governance field’s scholarly literature tends to focus almost exclusively on these formal international institutions involved in explicit discussions of the global governance of the internet. On the other hand, the term ‘internet governance’ is not normally applied to studies of many real-world activities and problems that play a crucial role in shaping and regulating the way the internet really works” (Van Eeten, 2009).

Internet governance discourse in Europe

A quick ‘mapping’ glance at internet governance at the European regional level seems to confirm the institutional orientation in the political and scholarly treatment of the internet governance field.

An online search on “Europe” and “internet governance” on Google and Yahoo!, for example, returns query results that are heavily centred on the activities of the Council of Europe (CoE) in this field; activities that are framed in the broader macro-area of ‘Human Rights and the Rule of Law’, implicitly according priority to the ways in which the existing and established international law and human rights system is addressing these emerging topics (CoE, 2013a). Zooming in some more, the most recent news on display concerns the CoE’s collaborations with other international or supranational organisations, from the United Nations Educational, Scientific and Cultural Organization (UNESCO) to the Internet Corporation for Assigned Names and Numbers (ICANN), from the Internet Governance Forum (IGF) to the European Broadcasting Union (EBU) (CoE, 2013a).

One needs to go back to April 2012 to find a notice that implicitly addresses private governance: the CoE’s adoption of recommendations for search engines to “increase transparency in the way access to information is provided, in particular the criteria used to select, rank or remove search results” (CoE, 2013b). The European Dialogue on Internet Governance, an annual convening of an “informal and inclusive discussion and exchange on public policy issues related to Internet Governance (IG) between stakeholders from all over Europe” (IFLA, 2013), whose next meeting will be held in Lisbon, Portugal on June 20 and 21 (EuroDIG, 2013), also features prominently in search results on European internet governance, as do the core European Union governing bodies, the Parliament and the Commission.

Internet governance to make dangerous liaisons explicit

The traditional instruments of the international political and legal system, such as conventions, treaties, charters and intergovernmental agreements, certainly play a crucial role in internet governance, both as an arena of political experimentation and as a field of study. Still, the “inherently political” qualities of search engine algorithm development, video content removals, blocking of domain names – actions that originate and rest with the private sector’s handling of the internet’s infrastructure – should not be neglected in our assessment of the field of internet governance today.

This is of particular importance, as these actions are often enacted in collaboration or under the mandate of governmental and supranational institutions, in dangerous liaisons whose details, conventions and compromises often escape the public radar. As legal scholar Kevin Werbach reminds us, quite originally within his field, “Two forces are in tension as the internet evolves. One pushes toward interconnected common platforms; the other pulls toward fragmentation and proprietary alternatives. Their interplay drives many of the contentious issues in cyberlaw, intellectual property, and telecommunications policy, including the fight over ‘network neutrality’ for broadband providers, debates over global internet governance, and battles over copyright online” (Werbach, 2009).

With possible implications for the resurgence of proprietary values, the diminishment of internet governance transparency, and the use of internet governance techniques for competitive advantage on the market (DeNardis, 2012), the private sector today sits at the crossroads of this tension, and plays a crucial role in how it will unfold in the near future, in internet governance at the local, regional and global levels.

Bibliography

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech Yahoo! Fellow in Residence, Georgetown University
