
Peer production, money and value

Issue 4 of the Journal of Peer Production, on the theme of money and value, has just been published. It is coordinated by Nathaniel Tkacz, Nicolas Mendoza and myself, and includes two contributions by members of the ADAM project. Alexandre Mallard, Cécile Méadel and I explore the performative role of expert knowledge in the construction of “distributed trust” for the decentralised electronic currency system Bitcoin, while Primavera De Filippi, in collaboration with Miguel Said Vieira of the University of São Paulo, proposes a licensing system better suited to the economy of the commons.

The entire issue is freely available here. A few excerpts from the introduction:

“Peer production has often been described as a ‘third mode of production’, irreducible to State or market imperatives. The creation and organisation of peer projects allegedly take place without ‘managerial commands’ or ‘price signals’, without recourse to bureaucratic apparatuses or the logic of competitive markets. Instead, and mimicking the technical architectures upon which many peer projects are based, production is described as non-hierarchical and decentralised. Group dynamics are also commonly described as ‘flat’ and this is captured, of course, in the very notion of the ‘peer’. When tested against the realities of actual projects, however, such early conceptions of peer production are, at best, in need of further elaboration and qualification. At worst, they were always off the mark. Hierarchies persist in peer production, as do competition and market-like arrangements. But perhaps it is the qualities of these new hierarchies and competitive forms that are novel. After all, liberal democracies, dictatorships, corporations, local sports clubs, and families all have their hierarchies but none is reducible to the others.

In the context of earlier understandings of peer production, the question of value and even more of currency has been rather marginal. This issue of the Journal of Peer Production (JoPP) demonstrates that theories and practices of value and currency are moving into the foreground. There has been a veritable explosion of experiments with currency and also a continuing metrics creep in many peer projects and beyond. More fundamentally, though, the question of value and how it circulates through a collective body is central to any mature theory of social organisation. In sociological and economic thought, the historical distinction between ‘values’ and ‘value’ split the non- or at least less-easily-calculable from the seemingly cold and objective world of calculation and universal commensurability. This ‘old settlement’, which never really held, nevertheless helped demarcate the economic from the social. But the intensification and extension of computational processes, manifested most clearly in the rise of big data, has led to a proliferation of bottom-up procedures to formalise (social) values, rendering them easily calculable and lending order to the decentralised world of peers, but without necessarily replicating capitalistic calculations of value. […]

In this issue we seek to advance the exploration and understanding of how the themes of value and currency intersect peer production. This objective presented a double challenge for the contributors and for us as editors. Indeed, the scholarly articles included in this issue have attempted to provide analytical and theoretically grounded investigations of a world that is, on the one hand, often developing more quickly than the academic publication process can account for in a timely way, and on the other hand, mostly shaped by expert-practitioners. At the same time, these contributions seek to engage not only with scholars of related issues within the academic community, but also with practitioners themselves — who, on their end, have demonstrated a strong interest in this dialogue, as the invited comments section shows.”

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


The story of a ‘P2P cloud’

While new players such as Bitcloud and Maidsafe are arriving on the crowded cloud scene with P2P solutions, here is my latest article for the Internet Policy Review, which tells the story of a ‘P2P cloud’ from a few years ago.

 

Decentralised internet governance: the case of a ‘peer-to-peer cloud’

13 February 2014

The architecture of a networked system is its underlying technical structure – its logical and structural layout. In my last article for the Internet Policy Review (Musiani, 2013a), I built upon the work of several authors in science and technology studies, economics, law and computer science (e.g. Star, 1999; van Schewick, 2010; Elkin-Koren, 2006; Agre, 2003) to discuss the idea of network architecture as internet governance. I suggested that, by changing the design of the networks subtending internet-based services and the global internet itself, the politics of the network of networks are affected – the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, and the fair competition between actors of the internet market.

This article retraces the early stages of development of a ‘peer-to-peer cloud’ storage service, Drizzle, with the aim of providing an example of decentralised network architecture as internet governance ‘in practice’. More specifically, the paper sheds light on how changes in the architectural design of networked services affect the circulation, storage and privacy of data, as well as the rights and responsibilities that different actors exert on them. This article does not aim to be a compendium of the implications of the decentralisation option in building a cloud platform, an option which entails a number of technical complications as well as advantages, including ensuring the reliability and redundancy of data and the soundness of the encryption mechanism. However, the privacy-related design choices described here are some of the many possible ways to illustrate the extent to which changes in network architecture are, indeed, changes to network governance.

Decentralising the cloud

In early 2007, when Drizzle first sees the light of day, the industry of online data storage – a service allowing users to store, save and share data on one or several terminals connected to the internet – has “never felt better” (Guerrini, 2010). Google, Amazon, Microsoft and Oracle, to name but a few, offer their storage platforms, each with its specificities and one common denominator: the ‘cloud’. According to this model, the service provider is in charge of both the physical infrastructure and the software, and thus hosts applications and data at once – in a location, and according to modalities, unknown or at best ambiguous to the user (Mowbray, 2009). So-called ‘server farms’ proliferate to support and manage this increasing remoteness of data from users and their terminals.

In this context, Drizzle1, a small start-up founded by two developers and computer programmers who we will call Dietrich and Kurt, makes an unusual foundational decision: its cloud storage platform will mainly be composed – alongside more ‘classical’ data centres – of portions of the users’ hard disks, directly linked in a peer-to-peer, decentralised network architecture (Schollmeier, 2001; Taylor & Harrison, 2009). This choice entails a number of distinctive features. On the one hand, it requires the implementation of a technical process defined as “encrypted fragmentation”2, which consists in encrypting locally – on the user’s computer, and by means of a previously installed Drizzle P2P client – the content that will be stored. The content is then divided into fragments, duplicated to ensure redundancy, and spread out across the network. In return, users must agree to ‘pool’ – that is, to put at the disposal of other users and their computers – the computational and material resources necessary for the operations related to the storage of content. As the service’s terms of use point out:

“The user acknowledges that Drizzle may use processor, bandwidth and hard disk (or other storage media) of his computer for the purpose of storing, encrypting, caching and serving data that has been stored in Drizzle by the user or any other users. The user can specify the extent to which local resources are used in the settings of the Drizzle client software. The amount of resources the user is allowed to use in Drizzle depends on the amount of local resources the user is contributing to Drizzle.”

The interdependent and egalitarian model subtending the platform will allow its users to barter their local disk space for an equivalent space in the decentralised cloud, thereby improving the quality of this storage space, which will become permanently available and accessible. By shaping their decentralised storage service in this way, the developers of Drizzle carry out a twofold experiment: with the frontier between centralisation and decentralisation, and with sharing modalities that blend peer-to-peer, social networking and the cloud.
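To make the “encrypted fragmentation” workflow more concrete, here is a minimal Python sketch of the three steps described above: local key derivation and encryption, division into fragments, and replicated placement on peers. All names and parameters (the fragment size, the replication factor, the use of Fernet from the third-party cryptography package) are illustrative assumptions, not Drizzle’s actual code or settings.

```python
# Minimal sketch of 'encrypted fragmentation' (illustrative assumptions only):
# content is encrypted locally, cut into fragments, and each fragment is
# replicated on several peers. Requires the third-party 'cryptography' package.
import base64
import hashlib
from cryptography.fernet import Fernet

FRAGMENT_SIZE = 256 * 1024   # assumed fragment size (bytes)
REPLICATION = 3              # assumed redundancy factor

def derive_key(password: str, salt: bytes) -> bytes:
    # Key derivation happens on the user's machine; the password never leaves it.
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)          # Fernet expects a base64 key

def encrypt_and_fragment(data: bytes, password: str, salt: bytes) -> list[bytes]:
    ciphertext = Fernet(derive_key(password, salt)).encrypt(data)
    return [ciphertext[i:i + FRAGMENT_SIZE]
            for i in range(0, len(ciphertext), FRAGMENT_SIZE)]

def assign_to_peers(fragments: list[bytes], peers: list[str]) -> dict[str, list[int]]:
    # Spread each fragment over REPLICATION (assumed) peers in round-robin fashion,
    # so that every fragment exists on several machines for redundancy.
    placement: dict[str, list[int]] = {p: [] for p in peers}
    for idx in range(len(fragments)):
        for r in range(REPLICATION):
            placement[peers[(idx + r) % len(peers)]].append(idx)
    return placement
```

The point of the sketch is simply that encryption precedes fragmentation and distribution, so what leaves the user’s terminal is already unreadable to the peers that store it.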

Peer-to-peer storage: the cloud meets privacy by design

“In 2007, it was all starting to get social,” Dietrich recalls three years later. Indeed, social media, Facebook and Twitter in particular, were at that moment entering the daily life of millions of internet users in an increasingly pervasive way. Drizzle’s first steps are taken within a research and development community that tries to counter the social media “explosion” by developing P2P systems as an alternative to a variety of internet-based services, including social networks, structured in a centralised manner (Le Fessant, 2009; Musiani, 2010a; Musiani, 2010b).

In 2007, Facebook had been in existence for three years. Millions of users had joined it, contributing to the massive success of these web-based services that allow individuals to build a public or semi-public profile within a system, define a list of other users with whom to interact, and see/browse the list of their own and others’ connections made in ‘public mode’ within the system (Boyd & Ellison, 2007). In parallel to their spectacular growth, social networks raise vibrant discussions and controversies, both within the expert community and among the general public. The ways in which social networking service providers leverage personal information and user data remain controversial, since they sometimes allow external applications to access these data, while on other occasions they pursue direct commercial purposes (Boyd, 2008). The rise of the so-called cloud does nothing to mitigate the impression of risk for informed users, as applications and data are increasingly hosted in locations and ways unknown or at best ambiguous to them. User exposure on social networking sites and on cloud-based services positions privacy, more than ever, at the foreground of discussions.

In this context, several developers – including Drizzle’s – identify in a peer-to-peer type of network architecture a possible way of approaching the protection of personal data privacy from a different angle: through the relocation and “re-appropriation” of data within the terminals of users, who would be able to host their own profiles and the information they contain (see also Moglen, 2010; Aigrain, 2010, 2011).

In the development of Drizzle, a conception of privacy and confidentiality of personal data that is conceived of and enforced via technical means – called privacy by design (Cavoukian, 2010; Schaar, 2010) – is at work. This conceptualisation of privacy is defined by means of the constraints and the opportunities linked to the treatment and the location of data, according to the different moments and the variety of operations taking place within the system. In particular, the confidentiality of data (personal data as well as the content stored in the P2P cloud) is defined by the peculiar role and enhanced features attributed to the password that identifies the user vis-à-vis the network, and by the implementation of the resource allocation system on which Drizzle is based.

Password and user responsibility

In Dietrich’s intentions, the role of the user-selected and user-generated password for the Drizzle system should have “stri[cken] the user as soon as he had access to the system for the very first time.” Indeed, the virtual form served to users upon subscription may come as a surprise: it informs them that

“We do not know your password as it never leaves your computer. Please, do not forget your password and use, if needed, your password hint.”

The status of the password is thus negotiated, beyond its usual meaning as a unique identifier vis-à-vis the system, to define, detail and legitimise the process of local encryption and decryption of data within the Drizzle system. This feature comes to symbolise the specificity of Drizzle’s promise of security and privacy, as well as users’ trust in it, as the password becomes the symbol and the graphical representation of the ‘local’ dimension of the encryption process – it never leaves the computer of the user who created it. The operations, for the most part automatically managed, that are linked to the protection of personal data are thus hosted on the terminals of users. This entails a modification of the user’s role within the service’s architecture: a node among equal nodes, the user’s terminal becomes a server in its own right, rather than merely the starting and end point of operations otherwise conducted on another machine or group of machines.
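As an illustration of the claim that the password “never leaves your computer”, the following sketch contrasts what a client might keep locally with what the provider would ever see at sign-up: only a salt, a one-way login verifier and the password hint. This is a hypothetical reconstruction, not Drizzle’s actual registration protocol; all field names are invented.

```python
# Hypothetical sign-up record (not Drizzle's actual protocol): the provider
# stores a one-way verifier and the hint, never the password itself.
import hashlib
import os

def sign_up(username: str, password: str, hint: str) -> dict:
    salt = os.urandom(16)
    # A one-way verifier lets the server authenticate logins without learning
    # the password; the password, and the encryption key derived from it
    # (see the earlier sketch), stay on the user's terminal.
    verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {                               # the only data sent to the provider
        "username": username,
        "salt": salt.hex(),
        "login_verifier": verifier.hex(),  # lets the server check logins
        "password_hint": hint,             # shown back to a forgetful user
    }
```

Under such a scheme, the provider has nothing it could hand over or recover on the user’s behalf, which is precisely the trade-off discussed next.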

Through the attribution of this status to the password, the developers of Drizzle are also proposing an alternative to the balance between the rights exerted by users on their own data and the rights acquired by the service provider on these same data – a balance that is usually heavily tilted towards the provider. However, this reconfiguration in the balance of rights comes with a trade-off. As the password stays with the user and is not sent to the servers controlled by the firm, the latter cannot retrieve it if needed. Thus, users do not only see their privacy reinforced: at the same time, and for the same reasons, their responsibility for their actions is augmented – while the service provider renounces some of its control over the content that circulates thanks to the service it manages. The meaning of this ‘renunciation’, Dietrich explains, is twofold. On the one hand, the Drizzle team wishes to make evident – almost to translate into a specific object the user can easily relate to – the ‘obscure’ and unfamiliar process of client-side encryption, which is an ongoing source of controversies and perplexities. On the other hand, it is also a matter of Drizzle’s business model: the more the firm knows about its users, the more it is obliged to submit them to regular surveillance and control – and this requires an investment of material resources and time that, in its first phases of existence, the firm does not have:

“If we can know what is in your account, starting with your password, we have heightened obligations to police the content and to make sure nobody can eavesdrop on the traffic.”

Data privacy and resource allocation

Another aspect that contributes to defining rights and responsibilities is the detailing of the conditions for the allocation and management of the computational resources provided by the different computers participating in the system.

As briefly described above, the choice to decentralise the platform makes it necessary, due to the very particular status of the resources used by the system, to detail several aspects in the terms of use: the role of computers belonging to users, the types of resources that Drizzle is able to use, and their purpose. It also becomes necessary to detail the extent to which users are able to decide – and communicate to their P2P client, thus to the system – the maximum quantity of local resources that the rest of the network/storage system can use. Finally, it is necessary to define the articulation between the availability of resources and the different operations for which these resources will be used within the system.

The articulation of these two aspects has important implications for the confidentiality of data circulating in the system (both personal information and content stored by users). Several users, giving feedback to the developers in the early stages of the system, warn that the resource allocation process could be framed as a form of ‘surveillance’ or ‘monitoring’ of these resources, in a way that could potentially be highly automated, invasive and privacy-threatening.

After a discussion between these concerned users and the developers, via the Drizzle forum, two modifications were applied to the terms of use: while the general terms now state that “resources are allocated and monitored in accordance with the Privacy Policy,” the privacy policy itself details the extent of automation and pervasiveness of the system that allocates and monitors resources:

“In order to ensure a fair allocation of resources within Drizzle, various data about the computers participating in the Drizzle network is collected. This data includes their IP addresses, disposability and the amount of resources they are contributing (e.g. bandwidth, memory). […] Drizzle keeps track of how much storage space you have used and earned […] Drizzle collects statistical information for the purposes of monitoring, debugging and improving the system. This includes automatically generated problem, performance, network analysis and general usage reports, as well as logs of the connections and queries made to Drizzle’s servers (including the involved IP addresses), as well as analytical data about the usage of the Drizzle website. However, none of this data contains information from your private or shared files.”

Thus, the correct functioning of the allocation system indeed implies the gathering of several pieces of information concerning the material, computational and memory resources pooled by each participating computer. The pooling of the storage equipment (i.e., users’ local resources, made available by each of them) is necessary for the system to work; however, it is not meant to imply an intrusion in the stored content itself, which remains protected by the local safeguard of the password and the encryption of content. The collection of information, the developers of Drizzle affirm, has the purpose of automatically computing the storage space made available by each user – and, as we have analysed elsewhere (Musiani, 2013b), of establishing the extent to which each user can reclaim her place in the ‘P2P cloud’, an equivalent storage space in the network of participating users.
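The bookkeeping implied by this allocation system can be sketched as follows. The formula used here – earned cloud space equals contributed local space scaled by measured availability, minus what is already used – is an assumption inferred from the description above, not Drizzle’s published algorithm, and the field names are illustrative.

```python
# Hedged sketch of the allocation bookkeeping: earned cloud space is assumed
# to be the contributed local space scaled by measured availability.
from dataclasses import dataclass

@dataclass
class PeerReport:
    ip_address: str            # collected for allocation, per the privacy policy
    contributed_bytes: int     # local disk space pooled by this user
    availability: float        # fraction of time the peer was reachable (0..1)

def earned_quota(report: PeerReport, used_bytes: int) -> int:
    """Storage the user may still claim in the 'P2P cloud' (assumed formula)."""
    return max(0, int(report.contributed_bytes * report.availability) - used_bytes)

# Example: pooling 50 GB at 80% availability earns 40 GB of cloud storage.
print(earned_quota(PeerReport("203.0.113.7", 50 * 10**9, 0.8), used_bytes=0))
```

Note that such bookkeeping only needs metadata about each peer’s contribution, not the content of the fragments it stores, which is consistent with the policy excerpt quoted above.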

Conclusions

The development of Drizzle’s ‘peer-to-peer cloud’ allows us to observe how changes in the architectural design of networked services affect data circulation, storage and privacy – and, in doing so, reconfigure the articulation of ‘locality’ and ‘centrality’ in the network (Akrich, 1989: 39), suggesting a model of decentralised governance “by architectural design” for the service.

Ultimately, decentralising the cloud leads to a reformulation and ‘re-balancing’ of the relationship between the user and the service provider. The local, client-side encryption of data first, and its fragmentation afterwards – both operations conducted within the P2P client installed by the user, and entirely taking place on his terminal – are proposed by Drizzle as evidence that the firm, in its own words, “does not even have the technical means” to betray the trust of users.

In particular, this conception of privacy by design takes shape around the password, which remains locally stored in the user’s P2P client and unknown to the service provider. In doing so, it also becomes a form of disengagement of the service provider with respect to security issues, its ‘self-release’ from responsibility: a detail whose importance may seem small at first, but which eventually leads to changes in the forms of technical solidarity (Dodier, 1995) established between users and service provider.

For the purpose of this article, I have focused in particular on aspects such as the strengthening of privacy by design and the increase in responsibility attributed to the user, arguably among the “positive” aspects of a peer-to-peer cloud. However, it should be pointed out that an important part of the decentralisation choice made by the Drizzle team has involved assessing its possible downsides: the reliability and redundancy of data, slow download performance, the soundness of the encryption mechanism, and – no less important – the perception of these issues by users. A heated discussion among developers, and between developers and some pioneer users, also occurred on the topic of the ‘legality’ of the system, especially in jurisdictions such as that of the United States. All of these are complex issues and most of them could not be accounted for here – this has been done in much greater detail elsewhere (Musiani, 2013b: 123-173), by analysing, with tools derived from the field of science and technology studies (STS), a number of socio-technical controversies related to the development of the platform. However, the privacy-related dynamics presented here are a few of the several possible ways to flesh out the extent to which changes in network architecture are, indeed, changes in network governance.

The example of Drizzle has illustrated in practice the implications of ‘architectures as governance’ introduced in the previous article: the repartition of competences and responsibilities between service providers, content producers, users and network operators; the articulation between the individual and the collective; the shaping of user rights and ‘community’ norms; the definition of ‘contributor’ in internet-based services. In light of Edward Snowden’s leaks about certain surveillance practices by the US National Security Agency, the potential of architectural choices – choices that would make the internet less centralised and more distributed – as a means of de facto privacy advocacy and promotion of decentralised governance has never been more evident. The goal, as The New Yorker recently reported, “isn’t to end surveillance, but to make it harder to do en masse” (Kopstein, 2013).

References

Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.

Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Aigrain, P. (2011). “Another Narrative. Addressing Research Challenges and Other Open Issues session.” PARADISO Conference, Brussels, 7–9 Sept. 2011.

Akrich, M. (1989). “De la position relative des localités. Systèmes électriques et réseaux socio-politiques.” Cahiers du Centre d’Études pour l’Emploi, 32 : 117-166.

Boyd, D. (2008). “Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence.” Convergence, 14 (1).

Boyd, D. & Ellison, N. (2007). “Social Network Sites: Definition, History, and Scholarship.” Journal of Computer-Mediated Communication, 13 (1).

Callon, M., Lascoumes, P. & Barthe, Y. (2001). Agir dans un monde incertain. Essai sur la démocratie technique, Paris: Seuil.

Cavoukian, A. (ed.) (2010). Special Issue: Privacy by Design: The Next Generation in the Evolution of Privacy. Identity in the Information Society, 3(2).

Dodier, N. (1995). Les Hommes et les Machines. La conscience collective dans les sociétés technicisées. Paris: Métailié.

Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.

Guerrini, Y. (2010). “Wuala : le P2P comme solution de stockage.” http://www.presence-pc.com/actualite/Wuala-stockage-cloud-P2P-39035/#xtor=RSS-11

Kopstein, J. (2013). “The mission to de-centralize the Internet.” The New Yorker, 13 December 2013, http://www.newyorker.com/online/blogs/elements/2013/12/the-mission-to-decentralize-the-internet.html

Le Fessant, F. (2009). “Les réseaux sociaux au secours des réseaux pair-à-pair.” Défense nationale et sécurité collective, 3 : 29-35.

Moglen, E. (2010). “Freedom in the Cloud: Software Freedom, Privacy and Security for Web 2.0 and Cloud Computing.” Keynote, ISOC Meeting, New York Branch, 5 February 2010.

Mowbray, M. (2009). “The Fog over the Grimpen Mire: Cloud Computing and the Law.” SCRIPTed, 6(1): 132-146.

Musiani, F. (2013a). “Network architecture as internet governance.” Internet Policy Review, 24 October 2013, http://policyreview.info/articles/analysis/network-architecture-internet-governance

Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris : Presses des Mines.

Musiani, F. (2012). “Caring About the Plumbing: On the Importance of Architectures in Social Studies of (Peer-to-Peer) Technology.” Journal of Peer Production, 1.

Musiani, F. (2010a). “Ménager le droit à la vie privée, entre anonymat et connaissance de l’identité: les débuts des réseaux sociaux en pair-à-pair.” Terminal, 105: 107-116.

Musiani, F. (2010b). “When Social Links Are Network Links: the Dawn of Peer-to-Peer Social Networks and Its Implications for Privacy.” Observatorio, 4(3), 185-207.

Schaar, P. (2010). “Privacy by Design.” Identity in the Information Society, 3(2): 267-274.

Schollmeier, R. (2001). “A definition of peer-to-peer networking for the classification of peer-to-peer architectures and applications.” Proceedings of the First International Conference on Peer-to-Peer Computing, 27–29.

Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.

Taylor, I. & Harrison, A. (2009). From P2P to Web Services and Grids: Evolving Distributed Communities. Second and Expanded Edition. London: Springer-Verlag.

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Vinck, D. (Ed., 2003). Everyday Engineering. An Ethnography of Design and Innovation. Cambridge, MA: The MIT Press.

Footnotes

1. The name is fictitious (‘light rain’) and recalls the fragmentation and the distribution of data in the system’s storage mechanism. The names of the developers are pseudonyms, as well. I have no direct interest in Drizzle – I use it as a case study of a possible ‘decentralisation of the cloud’.

2. Unless otherwise noted, citations are derived from in-depth interviews with the developers of Drizzle, conducted within a period of online and ‘live’ ethnography of Drizzle’s development, design and innovation process (see Vinck, 2003) between 2010 and 2011.

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


“The mission to DE-centralize the Internet” – The New Yorker

[Image: http://www.newyorker.com/online/blogs/elements/internet-290.jpg – Illustration by Maximilian Bode]

A very interesting article published last December on the “Elements” blog of the American magazine The New Yorker (13 December 2013, in English only) looks back at the “project” – which the title calls a “mission” – of decentralising the Internet… An ambitious programme if ever there was one…

The Mission to Decentralize the Internet

The article, written by Joshua Kopstein, presents both a history of the decentralisation (or rather the non-decentralisation) of the Internet and a history of the non-successes of several decentralised services that carried, or still carry, intrinsically within their code and design the “values” of a decentralised Internet.

The author dwells at greater length on the interest of four innovative, decentralised services: Bitmessage, Bitcoin, Mailpile and ArkOS. He also goes back over the relative failures of other innovative, decentralised web services (FreedomBox and the social network Diaspora).

The article is edifying and easy to read for non-specialists, and it briefly presents some of the issues we are working on within this research project. Enjoy:

http://www.newyorker.com/online/blogs/elements/2013/12/the-mission-to-decentralize-the-internet.html

Excerpts:

[…] Solutions like these follow a path different from Mailpile and ArkOS. Their peer-to-peer architecture holds the potential for greatly improved privacy and security on the Internet. But existing apart from commonly used protocols and standards can also preclude any possibility of widespread adoption. Still, Novak said, the transition to an Internet that relies more extensively on decentralized, P2P technology is “an absolutely essential development,” since it would make many attacks by malicious actors—criminals and intelligence agencies alike—impractical.

Though Snowden has raised the profile of privacy technology, it will be up to engineers and their allies to make that technology viable for the masses. “Decentralization must become a viable alternative,” said Cook, the ArkOS developer, “not just to give options to users that can self-host, but also to put pressure on the political and corporate institutions.”

“Discussions about innovation, resilience, open protocols, data ownership and the numerous surrounding issues,” said Redecentralize’s Bolychevsky, “need to become mainstream if we want the Internet to stay free, democratic, and engaging.”

François Huguet

PhD student in Communication Studies at the Codesign Lab & Media Studies at Telecom ParisTech. Supervisor: Annie Gentès / Co-supervisor: Jérôme Denis



Network architectures and (as) internet governance

This article, in English, was published in October 2013 in the Internet Policy Review. Thanks to Frédéric Dubois, Uta Meier-Hahn, Andrej Savin and Rikke Frank Joergensen for their review and comments.

 

Network architecture as internet governance

The architecture of a networked system is its underlying technical structure, designed according to a “matrix of concepts” (Agre, 2003). It constitutes the logical and structural layout of a system, including transmission equipment, communication protocols, infrastructure, and connectivity between its components or nodes. This article introduces the idea of network architecture as internet governance1, and more specifically, it outlines the dialectic between centralised and distributed architectures, institutions and practices, and how they mutually affect each other.

Technical architectures, as argued by several authors discussed in this article, may be understood as alternative ways of influencing economic systems, sets of rules, communities of practice – indeed, as the very fabric of user behaviour and interaction. The status of every internet user as consumer, sharer, producer and possibly manager of digital content is informed by, and shapes in return, the technical structure and organisation of the services she has access to. It is in this sense that network architecture is internet governance: by changing the design of the networks subtending internet-based services, and the global internet itself, the politics of the network of networks are affected – the balance of rights between users and providers, the capacity of online communities to engage in open and direct interaction, the fair competition between actors of the internet market.

Architecture, “politics by other means”

“Study an information system and neglect its standards, wires, and settings, and you miss equally essential aspects of aesthetics, justice, and change,” once wrote science and technology studies (STS) scholar Susan Leigh Star (Star, 1999, p. 339). Indeed, the history of internet innovation suggests that the shaping of the technical architectures populating the network of networks is, in the words of philosopher Bruno Latour, “politics by other means” (Latour, 1988, p. 229). The ways in which architecture is politics, protocols are law, and code shapes rights (e.g., Lessig, 1999; DeNardis, 2009) are explored today by a number of different authors in relation to networked and online media; in particular, internet-related research has contributed to fostering the debate on the intersection and overlap of governance by architecture with other forms of governance. This section, while not pretending to be exhaustive, discusses some key approaches to the question.

Interested in the relationship between architectures and the organisation of society, Terje Rasmussen (2003) has argued that there is a structural match between the development of the technical model of the internet (such as packet switching and distributed routing) and the transformation of the societies in which it operates. In this account, the technical infrastructure of the Internet suggests that ours is a distributed society, based on the ability to handle risk, rather than on central control. On the other hand, information studies scholar and internet pioneer Philip Agre suggests that “Decentralized institutions do not imply decentralized architectures, or vice versa. […] Architectures and institutions inevitably coevolve, and to the extent they can be designed, they should be designed together” (Agre, 2003, p. 42), but they are not “naturally” related.

IT law scholar Barbara van Schewick seeks to examine how changes, notably design choices, in internet architecture affect the economic environment for innovation, and evaluates the impact of these changes from the perspective of public policy (2010, p. 2). According to her, this is a first step towards filling a gap in how scholarship understands innovators’ decisions and the economic environment for innovation. After many years of research on innovation processes, we understand how these are affected by changes in laws, norms, and prices; yet, we lack a similar understanding of how architecture and innovation impact each other, perhaps because of the intrinsic appeal of architectures as purely technical systems (ibid., p. 2-3). Traditionally, she concludes, policy makers have used the law to bring about desired economic effects. Architecture de facto constitutes an alternative way of influencing economic systems, and as such, it is becoming another tool that actors can use to further their interests (ibid., p. 389).

The relationship between architecture and law-making for networked media has been an increasingly central interdisciplinary preoccupation since the late 1990s/early 2000s. Early uses of the metaphor “code is law” can be found in William Mitchell’s City of Bits (1995) and in Joel Reidenberg’s article on lex informatica, the formation of information policy rules through technology (1998). However, legal scholars Yochai Benkler and Lawrence Lessig have arguably been the “scene-setters” in this field, with their work on sharing as a paradigm of economic production in its own right (2004) and on technical architecture as politics (1999), respectively. While the former argued for the rise of a “networked information economy” as a system of “production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means” (Benkler, 2006), the latter introduced technical architecture as one of the four main (and interconnected) regulators of society, the other three being law, the market and norms. The application of this principle to the text of computer programmes led to what remains, perhaps, the most striking incarnation of the famous “code is law” label (Lessig, 1999).

Among the scholars that have since been inspired by this line of inquiry, Niva Elkin-Koren is especially relevant. In her work (e.g., 2006, 2012), architecture is understood as a dynamic parameter in the reciprocal influences of law and technology design, in the field of information and communication systems. The interrelationship between law and technology often focuses on one single aspect, the challenges that emerging technologies pose to the existing legal regime, thereby creating a need for further legal reform; however, the author argues, juridical measures involving technology both as a target of regulation and as a means of enforcement should take into account that the law does not merely respond to new technologies, but also shapes them and may affect their design (Elkin-Koren, 2006).

The work of Tim Wu adds layers to the conceptualisation of code’s relationship with law, moving from Lessig’s concept that computer code can substitute for law or other forms of regulation, to code as an anti-regulatory mechanism, a tool that certain groups will use to their advantage to minimise the costs of law – the possibility of “using code design as an alternative mechanism of interest group behavior” (Wu, 2003).

Architecture and the future(s) of the internet

The current trajectories of innovation for the internet are making it increasingly evident by the day: the evolutions (and in-volutions) of the network of networks are likely to depend in the medium-to-long term on the topology and the organisational/technical model of internet-based applications, as well as on the infrastructure underlying them (Aigrain, 2011).

This is illustrated by what has been this author’s main research focus over the past few years: the development of internet-based services – search engines, storage platforms, video streaming applications – based on decentralised network architectures (Musiani, 2013b).

The concept of decentralisation is somehow shaped and inscribed into the very beginnings of the internet – notably in the organisation and circulation of data packets – but its current topology integrates this structuring principle only in very limited ways (Minar & Hedlund, 2001). The limits of the concentrated and centralised urbanism of the internet, which has been predominant since the beginning of its commercial era and its appropriation by the masses, are sometimes highlighted by the very phenomena that have contributed to its widespread success, as best illustrated by social media (Schafer, Le Crosnier & Musiani, 2011). Examples of incidents caused by “excessive concentration” include the global consequences of the Pakistani YouTube re-routing in 2008 and the repeated failures of the Twitter infrastructure (e.g., in 2012). These incidents have put into the spotlight some of the possible limits of the concentration model: excessive control, technical and/or legal, by a single commercial entity; the opaqueness of the modalities of this control vis-à-vis the users; the vulnerability of centralised architectures to single-point failures.

While internet users have become, at least potentially, not only consumers but also distributors, sharers and producers of digital content, the network of networks is structured in such a way that large quantities of data are centralised and compressed within large data centres and server farms. At the same time, such data are most suited to rapid re-diffusion and re-sharing in multiple locations of a network that has now reached an unprecedented level of globalisation. The current organisation of internet-based services and the structure of the network that enables their delivery – with its mandatory passage points, places of storage and trade, required intersections – raise many questions in terms of the optimised utilisation of resources, the fluidity, rapidity and effectiveness of electronic exchanges, the security of exchanges, and the stability of the network.

Beyond technology, these questions are deeply social and political, and affect the “ramifications of possibles” (Gai, 2007) the internet is currently facing for its near future. Resorting to decentralised architectures and distributed organisational forms constitutes a different way to address some issues of network management, from the perspective of effectiveness, response to vulnerabilities, digital “sustainable development” (better resource management), and maximisation of the internet’s value for society.

Architectures shaping user rights: decentralisation and privacy by design

Systems based on distributed, decentralised, peer-to-peer (P2P) architectures seek their place today in an IT landscape that is mostly one of concentration and removal from users’ machines. From the viewpoint of informational data, personal data and exchanged content, this implies that sharing, regrouping and storing those data in the most popular and widespread internet services of today means promoting a model in which traffic is redirected towards an ensemble of machines placed under the exclusive and direct control of the service provider. Thus, exchanges between users are made by “copying” the data one wishes to share onto one or more external terminals, or by giving these external machines permission to index this information. The ways in which data circulate, are stored and written in these machines are often uncertain; moreover, the rights that the service provider acquires on such data are often excessive with respect to those maintained by the end user – in a way that is often opaque for users themselves2.

When the operations of data treatment and handling are conducted, partially or totally, on users’ terminals directly linked together, this choice of network architecture contributes to building specific definitions of privacy protection. It modifies the ways in which control over informational data, and the responsibility for their protection, are distributed among the users, the service providers and the developers who have created the service.

Three cases of internet services based on a decentralised network architecture – a search engine, a storage platform and a video streaming software, studied between 2009 and 2011 – have shown how a definition of privacy “by design,” more specifically by architectural design, takes shape in internet services (Musiani, 2013b). With this alternative, “techno-legal” way of defining privacy, a central role is attributed to the constraints and the opportunities of privacy protection that are inscribed into the technical model chosen by developers (Schaar, 2010).

Faroo, a P2P search engine developed first in Germany, then in the United Kingdom, displays a “six-levels” distribution model that is meant to prevent the traceability of queries by a central entity; in this model, personal data are supposed to stay within the user’s own terminal and the P2P client installed on it, or to leave it only once encrypted on that very terminal. This feature also allows the developers to work towards reducing the tension – a priori very difficult to eliminate – between the confidentiality of personal information and the personalisation of search queries, the latter being the “added value” that social dynamics bring to the search engine, and which is based on the very collection of this personal information.
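A generic way to make queries hard to trace back to their origin is to relay them through several peers, so that the peer that finally executes the search does not know who issued it. The toy sketch below illustrates that idea only; it is not Faroo’s actual protocol, and the hop count is merely an assumption echoing the “six levels” mentioned above.

```python
# Toy multi-hop forwarding: a query travels along a random relay path, so the
# peer that executes the search does not learn who issued it. Not Faroo's
# actual mechanism; HOPS is an assumption echoing the 'six levels' above.
import random

HOPS = 6  # assumed number of relays

def pick_relay_path(peers: list[str]) -> list[str]:
    # In a real system each relay would only learn its immediate neighbour;
    # here we simply choose the chain of peers the query would pass through.
    return random.sample(peers, k=min(HOPS, len(peers)))

path = pick_relay_path([f"peer{i}" for i in range(20)])
# The query is handed from path[0] to path[-1]; only path[0] knows the origin.
```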

The case of Tribler, a P2P video streaming tool first developed at the Technical University of Delft (The Netherlands), is another occasion to follow this tension, as the logic underlying the system is that the history of downloads made by a user is shared by default with other users so as to feed the software’s “recommendation” algorithm. The solution envisaged by the developers has, once again, to do with an idea of “privacy by architectural design”, as it builds on the decentralised and distributed model to mitigate, in the eyes of users, the impression of exposure and self-revelation that the system’s social features may provoke: not only can the feature be disabled, but it only sends the download history to other users – it does not keep the information on any server controlled by the service.
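The design choice can be summarised as follows: the recommendation signal travels only between peers, and only while the user keeps the feature enabled. The toy collaborative-filtering sketch below illustrates that logic; it is a generic illustration under those assumptions, not Tribler’s actual recommendation algorithm.

```python
# Toy peer-side recommender (not Tribler's algorithm): the download history is
# exchanged only with other peers, and only if the user keeps sharing enabled.

def history_to_share(history: set[str], sharing_enabled: bool) -> set[str]:
    return history if sharing_enabled else set()     # the opt-out switch

def recommend(my_history: set[str], peer_histories: list[set[str]],
              top_n: int = 5) -> list[str]:
    scores: dict[str, int] = {}
    for peer_history in peer_histories:
        if my_history & peer_history:                 # peers with overlapping tastes
            for item in peer_history - my_history:
                scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

The computation happens entirely on the user’s machine from histories received from other peers; no central server ever aggregates the data.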

Finally, Wuala 3, a (formerly) distributed storage platform developed in Switzerland, displayed similar attempts to protect user privacy via architecture. The heart of this service was the user’s terminal, where, thanks to a dedicated P2P client, the operations of encryption and fragmentation of stored data could take place. These two operations, conducted before any other (e.g., sharing, downloading or circulating data in the network), were meant, in the vision of Wuala’s developers, as evidence given to the users that the service provider, regardless of its intentions, did not even possess the technical means to break user trust in the system.

While developers, across all three case studies, consider a more articulate protection of privacy to be one of the core comparative advantages of their systems (and “sell” it as such), users wonder, in turn, about the implications of a decentralised architecture for the protection of their data. What does making part of one’s own computing resources available to the whole P2P network imply for the “invisible” data collected there? In the cases of Faroo and Wuala – where the P2P model merges, in a peculiar way, with a proprietary software logic – this question is the occasion to make explicit the difficult articulation between the decentralising philosophy subtending the systems and a closed source code. Pioneer users – for the most part, users-innovators or users-developers themselves – see the closed code as a lack of transparency, even a lack of respect, that prevents them from delving into this aspect with the tools they have available. It is good to have privacy by architecture, these users point out, but one needs direct knowledge of the technique on a case-by-case basis, so as, possibly, to allow for direct modifications of the architecture.

Decentralised models challenge “by architecture” the extent, the balance and the very definition of the rights obtained by service providers on users’ personal data, vis-à-vis the rights that users maintain over such data. With a trade-off: on the one hand, the user sees her privacy reinforced by the possibility of augmented control over her data and its handling by the P2P client. However, simultaneously and for the same reasons, her responsibility for the actions she undertakes within and by means of the application is increased proportionately, as the provider voluntarily surrenders some of its control over the data and content present on the service. The collective dimension of this responsibility is also emphasised, inasmuch as infractions of the collective behaviour have not only individual but collective consequences – be it the storage of inappropriate content, the introduction of unreliable information or spam into a distributed search index, or a “selfish” management of the bandwidth shared by a P2P streaming system.

Conclusions: how architecture matters

“Arrangements of technical architecture have always inherently been arrangements of power,” writes STS scholar Laura DeNardis (2012): the technical architecture of networked systems does not only affect internet governance, but is internet governance. This governance by architecture, or “governance by design” (De Filippi, Dulong de Rosnay & Musiani, 2013), has important implications at a number of levels, of which the previous section has given but one example.

Changes in architectural design affect the repartition of competences and responsibilities between service providers, content producers, users and network operators. They affect forms of engagement and intéressement (Callon, 2006) in networked systems, of users first and foremost, but also of other actors concerned by the implementation and the operation of internet services. They shape the sustainability of the underlying economic models and the technical and legal approaches to digital content and personal data. They make visible, in various configurations, the forms of interaction between the local and the global, the patterns of articulation between the individual and the collective.

Changes in network architectures contribute to the shaping of user rights and of the ways to produce and enforce law, and are reconfigured in return. A number of legal issues that go well beyond copyright (despite having often been reduced to this aspect, notably in the case of peer-to-peer systems) are raised by the architectural configurations of internet services. To preserve the internet’s “social value,” it is important to achieve reliable forms of regulation – technical, political, or both – without impeding present and future innovation.

Changes in architecture, finally, contribute to shifting the boundary between public and private uses of the internet as a global facility: they are a crucial factor in defining intellectual property rights, the right to privacy of users/clients, and their rights of access to content. They contribute to defining what a contributor is in internet-based services, in terms of the computing resources required for operating the system, and of content.

In the end, technical architecture appears as one of the strongest, if not the strongest structuring element of internet governance: what is shaped into architecture and infrastructure can seldom be undone by institutional negotiation and dialogue alone, and institutions find it increasingly complicated to keep up with “creative” governance by architecture and by infrastructure4. In this sense, future evolutions of internet governance as a field would do well to take into account Michel van Eeten and Milton Mueller’s suggestion to expand and include innovative areas such as the economics of cybercrime and cyber security, network neutrality, content filtering and regulation, copyright enforcement, and interconnection arrangements among ISPs (van Eeten & Mueller, 2013).

In the digital world, it is possible to design in detail the architecture of the world users interact with – and as a consequence, it is possible to design the architecture of our global communication infrastructure in order to promote specific types of interactions over others (De Filippi et al., 2013). With important consequences for the ways in which the future internet will be governed, and for the extent to which its users will be not only customers, but citizens.

Footnotes

1. Internet governance (IG) today is a lively, emerging field, and its definition relentlessly contested by different groups across political and ideological lines. A “working definition” of IG has been provided in the past, after the United Nations-initiated World Summit on the Information Society (WSIS), by the Working Group on Internet Governance – a definition that has reached wide consensus because of its inclusiveness, but is perhaps too broad to be useful for drawing more precisely the boundaries of the field (Malcolm, 2008): “Internet governance is the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet” (WGIG, 2005). This broad definition implies the involvement of a plurality of actors, and the possibility for them to deploy a plurality of governance mechanisms. IG has been described as a mix of technical coordination, standards, and policies (e.g., Malcolm, 2008 and Mueller, 2010). See also (DeNardis, 2013) and (Musiani, 2013a).

2. See this discussion of the terms of use of several social sites, among which Facebook and Instagram: http://www.nyccounsel.com/business-blogs-websites/who-owns-photos-and-videos-posted-on-facebook-or-twitter/

3. The decentralised mechanism subtending the Wuala system, a trade between local storage space and space in a “P2P storage cloud” spread out to the users, was discontinued in September 2011.

4. An example is the Domain Name System and its co-optations. See (DeNardis, 2012) and (Musiani, 2013a).

References

Agre, P. (2003). “Peer-to-Peer and the Promise of Internet Equality.” Communications of the ACM, 46 (2): 39-42.

Aigrain, P. (2010). “Declouding Freedom: Reclaiming Servers, Services and Data.” In 2020 FLOSS Roadmap (2010 Version/3rd Edition), https://flossroadmap.co-ment.com/text/NUFVxf6wwK2/view/

Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Benkler, Y. (2004). “Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production.” The Yale Law Journal, 114 (2), 273-358.

Callon, M. (2006). “Sociologie de l’acteur-réseau.” In Akrich, M., Callon, M. & Latour, B. Sociologie de la traduction. Textes fondateurs. Paris : Presses des Mines, 267-276.

De Filippi, P., M. Dulong de Rosnay & F. Musiani (2013). “Peer production online communities, distributed architectures and governance by design.” Communication presented at the Fourth Transforming Audiences Conference, September 3, 2013, University of Westminster, London.

DeNardis, L. (2013). “The Emerging Field of Internet Governance”, in W. Dutton (ed.) Oxford Handbook of Internet Studies. Oxford: Oxford University Press.

DeNardis, L. (2012). “The Turn to Infrastructure for Internet Governance”, Concurring Opinions, 2012, http://www.concurringopinions.com/archives/2012/04/the-turn-to-infrastructure-for-internet-governance.html

DeNardis, L. (2009). Protocol Politics. The Globalization of Internet Governance. Cambridge, MA: The MIT Press.

Elkin-Koren, N. (2006). “Making Technology Visible: Liability of Internet Service Providers for Peer-to-Peer Traffic.” New York University Journal of Legislation & Public Policy, 9 (15), 15-76.

Elkin-Koren, N. (2012). “Governing Access to User-Generated Content: The Changing Nature of Private Ordering in Digital Networks.” In Brousseau, E., Marzouki, M., Méadel, C. (eds.), Governance, Regulations and Powers on the Internet, Cambridge: Cambridge University Press.

Gai, A.-T. (2007). “Web 3.0: une autre branche pour l’arbre des possibles.” Transnets, http://pisani.blog.lemonde.fr/2007/02/17/web-30-une-autre-branche-pour-larbre-des-possibles/

Latour, B. (1988). The Pasteurization of France. Cambridge, MA: Harvard University Press.

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Wembley, WA : Terminus Press.

Minar, N. & Hedlund, M. (2001). “A network of peers – Peer-to-peer models through the history of the Internet.” In A. Oram (Ed.), Peer-to-peer: Harnessing the Power of Disruptive Technologies, 9-20. Sebastopol, CA: O’Reilly.

Mitchell, W. J. (1995). City of Bits. Space, Place and the Infobahn. Cambridge, MA: The MIT Press.

Mueller, M. (2010). Networks and States: The Global Politics of Internet Governance. Cambridge, MA: The MIT Press.

Musiani, F. (2013a). “A Decentralized Domain Name System? User-Controlled Infrastructure as Alternative Internet Governance”. Presented at the 8th Media In Transition (MiT8) conference, May 3-5, 2013, Massachusetts Institute of Technology, Cambridge, MA. Available as draft at http://web.mit.edu/comm-forum/mit8/papers/Musiani_DecentralizedDNS_MiT8Paper.pdf

Musiani, F. (2013b). Nains sans géants. Architecture décentralisée et services Internet. Paris, Presses des Mines.

Rasmussen, T. (2003). “On distributed society: The history of the Internet as a guide to a sociological understanding of communication and society.” In G. Liestøl, A. Morrison & T. Rasmussen (eds.), Digital Media Revisited: Theoretical and Conceptual Innovation in Digital Domains. Cambridge, MA: The MIT Press.

Reidenberg, J. R. (1998). “Lex Informatica: The Formulation of Internet Policy Rules Through Technology.” Texas Law Review, 76 (3).

Schafer, V., H. Le Crosnier & F. Musiani (2011). La neutralité de l’Internet, un enjeu de communication. Paris: CNRS Editions/Les Essentiels d’Hermès.

Star, S. L. (1999). “The Ethnography of Infrastructure.” American Behavioral Scientist, 43 (3): 377-391.

van Eeten, M. & M. Mueller (2013). “Where Is the Governance in Internet Governance?” New Media & Society, 15 (5): 720-736.

van Schewick, B. (2010). Internet Architecture and Innovation. Cambridge, MA: The MIT Press.

Working Group on Internet Governance (2005). Report of the Working Group on Internet Governance, Château de Bossey, June 2005, http://www.wgig.org/docs/WGIGREPORT.pdf

Wu, T. (2003). “When Code Isn’t Law.” Virginia Law Review, 89.

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech. Yahoo! Fellow in Residence, Georgetown University.


Vers une société de “perclouds”?

Un “nuage” personnel, relié en architecture P2P? Marco, hacker et écrivain, nous explique sa vision ici:

http://stop.zona-m.net/2013/10/the-real-problem-that-the-percloud-wants-to-solve-and-why-its-still-necessary/

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech. Yahoo! Fellow in Residence, Georgetown University.


McAfee songe au P2P pour contourner la surveillance NSA?

Le créateur de la célèbre entreprise d’anti-virus, McAfee, songerait au peer-to-peer pour contourner la surveillance de la NSA. Des dispositifs appelés D-central, mobiles et transportables (ainsi que relativement économiques), permettraient de créer des réseaux privés et chiffrés.

Lire plus de détails ici.

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech. Yahoo! Fellow in Residence, Georgetown University.


Un regard critique sur les architectures décentralisées

Un article de 2012, par Arvind Narayanan, Solon Barocas, Vincent Toubiana, Helen Nissenbaum et Dan Boneh, librement disponible en ligne sur la plateforme arXiv, porte un “regard critique” sur les architectures de réseau décentralisées dans des contextes d’application qui impliquent un traitement des données personnelles.

Le problème central auquel se confronte l’article est l’écart entre la promesse des architectures décentralisées, présentées comme une réponse à la centralisation progressive des fournisseurs de services, et leur manque d’adoption à large échelle, sauf exceptions. Parallèlement aux avantages, l’article discute les inconvénients de la décentralisation, qui restent, d’après les auteurs, souvent masqués par la promesse d’une plus grande liberté et d’une meilleure protection.

« …for all these efforts, decentralized personal data architectures have seen little adoption. This position paper attempts to account for these failures, challenging the accepted wisdom in the web community on the feasibility and desirability of these approaches. »

« for the most part decentralized social networking appears not to have anticipated the success of mainstream commercial, centralized social networks, but rather developed as a response to it. »

« we present some underappreciated drawbacks of decentralized architectures. Not all of these apply to all types of systems, nor is any of them individually a decisive factor. But collectively they may help explain why decentralization faces a steep road ahead, and why even if adopted, decentralization will not necessarily provide all the benefits that its proponents believe will automatically flow from it. »

« We hope to kick off a more tempered discussion of the future of personal data architectures in both scholarly and hobbyist/entrepreneurial circles, one that is informed by the lessons of history. There is much work to be done along these lines — application of economic theory can shed light on questions such as the relative strength of network effects in centralized vs. decentralized systems. Empirical methodology such as user and developer interviews would also be tremendously valuable. »

On est ravis d’apprendre qu’on est en train de contribuer à une démarche intéressante.  ;-)

 

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech. Yahoo! Fellow in Residence, Georgetown University.


Réinventer l'”annuaire téléphonique” de l’Internet? Institutions, Industries, Infrastructures

Le 19 avril 2013, Francesca Musiani, en sa qualité de Yahoo! Fellow à l’Institut d’études diplomatiques de la School of Foreign Service, Georgetown University (Washington, DC), a organisé une conférence intitulée « Réinventer l’”annuaire téléphonique” de l’Internet ? Institutions, Industries, Infrastructures ». Depuis la création de l’Internet, l’utilisation de noms de domaine, adresses, protocoles et autres infrastructures sous-jacentes au “réseau des réseaux” comme instruments de pouvoir et de gouvernance a joué un rôle crucial dans le maintien de sa stabilité, face à toutes ses évolutions. Dans l’Internet d’aujourd’hui, ces outils sont de plus en plus mis à profit par des entités politiques à des fins différentes de celles pour lesquelles ils ont été initialement conçus. Cette conférence, dont nous présentons ici un compte rendu détaillé en anglais, a abordé plusieurs thèmes chers à ADAM dans son exploration des implications politiques, sociales et techniques du “turn to infrastructure” dans la gouvernance de l’Internet. Les participants à cette conférence se sont concentrés sur un aspect particulièrement controversé de l’infrastructure Internet : le système de noms de domaine (DNS), ou l’”annuaire téléphonique” de l’Internet. Une version PDF du rapport est disponible sur le site de l’Institut d’études diplomatiques.

Reinventing the Internet’s Phone Book? Institutions, Industry and Infrastructure

A Conference Account

Francesca Musiani (2012-13 Yahoo! Fellow in Residence, ISD, Georgetown University)

With the collaboration of Chris Haley & Allison Maranuk (2012-13 Yahoo! Junior Fellows, MSFS, Georgetown University)

Note to the Reader: This account is intended as a follow-up resource for conference participants, for individuals who expressed interest but were unable to attend the conference, and more broadly for people interested in Internet governance issues, particularly DNS governance. While we have paid a great deal of attention to being as accurate as possible, no portion of this text should be considered a direct quote from the speakers’ remarks. Thank you to all the speakers and moderators for sharing their insights, and to Chris and Allison for the diligent note-taking. I take full responsibility for whatever inaccuracy is left. FM

On April 19, 2013, the Institute for the Study of Diplomacy at Georgetown University’s School of Foreign Service hosted a conference entitled “Reinventing the Internet’s Phone Book? Institutions, Industry and Infrastructure”. Since the Internet’s foundation, the use of domain names, addresses, protocols, and other underlying infrastructures as instruments of power and governance has been crucial in maintaining stability throughout its evolution. In today’s Internet landscape, these tools are increasingly being leveraged by political entities for purposes other than those for which they were designed. This conference set out to explore the political, social, and technical implications of this tendency, by focusing on a particularly controversial aspect of Internet infrastructure: the Domain Name System (DNS), or the Internet’s “phone book.” Three organizations and institutions co-sponsored the event: the Yahoo! Fund on Communications Technology, International Values, and the Global Internet; American University’s School of International Service; and the Global Internet Governance Academic Network (GigaNet).

 

Internet governance by infrastructure: the case of the Domain Name System

Francesca Musiani, Yahoo! Fellow in Residence at the ISD for 2012-13 and the event’s host, first introduced the topic of the day’s discussion. This required, initially, briefly touching upon the definition of Internet governance, which she described, based on the 2005 definition by the Working Group on Internet Governance, as the development and application, by relevant actors in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. This definition, despite its inclusiveness, has been contested by differing groups across political and ideological lines. One of the main debates concerns the authority and participation of certain actors; in particular, the role of governments is central and ambiguous, and other aspects of Internet governance are controlled by transnational organizations. One should be careful about simplifying ideological extremes in discussing IG: the public is sometimes under the impression, fostered by the media, that IG is entirely performed by a handful of institutions – which is not the case. All of this often leads to neglecting or disregarding what is, instead, a crucial aspect of Internet governance: a number of components of the Internet’s infrastructure and technical architecture embed, to some extent, arrangements of governance in their very design. These are technologies and processes beneath the layer of content, inherently designed to keep the Internet operational: Internet Protocol addresses are one example among many, but the one this conference wished to address is the Domain Name System, or DNS.

The DNS translates between alphanumeric domain names and their associated IP addresses necessary for routing packets of information over the Internet. For this reason, it is oftentimes called the Internet’s “phone book”. It is a wide database management system, arranged hierarchically but distributed globally, across countless servers. The Internet’s root name servers contain a master file known as the root zone file, listing the IP addresses and associated names of the official DNS servers for all top‐level domains (TLDs). The management of the DNS has always been a central task of Internet governance, and ICANN is ultimately responsible for managing the assignment of domain names (delegated through Internet registrars), and for controlling the root server system and the root zone file.
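To make the “phone book” metaphor concrete, here is a minimal Python sketch (added for illustration; the domain queried is an arbitrary example, and the code simply delegates to the system’s resolver rather than walking the DNS hierarchy itself) of the name-to-address lookup that the DNS performs behind every web request:

    # Minimal sketch: ask the operating system's resolver to translate a
    # human-friendly name into the numeric addresses used to route packets.
    import socket

    domain = "example.org"  # arbitrary illustrative domain
    infos = socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print(f"{domain} -> {addresses}")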

There have been a number of controversies in this area, involving institutional and international power struggles over DNS control, and issues of legitimacy, democracy, and jurisdiction. Notably, debates have addressed the historical ties between ICANN and the United States government in the face of increasing Internet globalization; this controversy continues to be a heated topic in Internet governance discussions. There are additional policy implications in the DNS: it was originally restricted to ASCII characters, precluding the possibility of domain names in many language scripts such as Arabic, Chinese or Russian. Internationalized domain names (IDNs) have now been introduced. Furthermore, in 2011, ICANN’s board voted to end most restrictions on generic top-level domain names (gTLDs), of which 22 were available at the time. Companies and organizations will now be able to choose essentially arbitrary top-level Internet domains, with implications for consumers’ relationships to brands and ways to find information on the Internet. Further DNS issues concern the relationship between domain names and freedom of expression, security, and trademark dispute resolution for domain names.

While this covers quite a lot of ground already, this conference aimed at taking one further step. In recent years, we have witnessed a number of (more or less successful) attempts, by political and private entities, to co-opt infrastructures of Internet governance for purposes other than the ones they were initially designed for. Not only is there governance of infrastructure, but governance is carried out by infrastructure… using infrastructure in “creative ways”, so to speak. As DeNardis (2011) explains: “Forces of globalization and technological change have diminished the capacity of sovereign nation states and media content producers to directly control information flows. This loss of control over content and the failure of laws and markets to regain this control have redirected political and economic battles into the realm of infrastructure.” Examples of how content mediation controversies have shifted into the realm of Internet governance infrastructure can be found, for example, in the intentional outages of basic telecommunications and Internet infrastructures, enacted by governments via private actors, whether through protocols, application blocking, or termination of access services. The government-initiated Internet outages in Egypt and Libya, in the face of revolution and uprisings, have illustrated this and may have set a dangerous precedent.

However, the domain name system is perhaps, nowadays, the best illustration of this “governance by infrastructure” tendency. Domain name seizures that use the domain name system to redirect queries away from an entire web site, rather than just the infringing content, have been considered a suitable means of intellectual property rights enforcement. DNS-based enforcement was also at the heart of controversies and Internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA). Governance by infrastructure enacted by private actors was also visible during the WikiLeaks saga, when Amazon and EveryDNS blocked WikiLeaks’ web hosting and domain name resolution services. The conference addressed these controversies, with the aim of understanding the extent to which Internet governance by means of infrastructure entails not only issues of economic freedom – but of Internet freedoms.

 

The DNS today: enforcement, security and mobilizations

The first panel, moderated by Derrick Cogburn, Associate Professor, School of International Service, American University, featured panelists Steve Crocker, CEO, Shinkuro, Inc. & Chair, ICANN Board, Matthew Schruers, CCIA & Adjunct Professor, Georgetown University, Scott McCormick, Consultant, McCormick ICT International, and Luke Pelican, Consultant, Ammori Group.

Dr. Steve Crocker, an Internet pioneer and author of the first Request for Comments of the Internet Engineering Task Force (IETF), has been involved in the development of the Internet since its beginnings in the late 1960s and 1970s. His opening remarks, he suggested, would probably be a counterpoint to the introductory talk and most of the day’s discussions.

It is interesting to see how attractive the idea of Internet governance has become to such diverse groups, and the range of issues it covers. It could be useful to ask again the question: what is it that has to be governed? There are three main sets of issues.

First of all, we all have a shared interest in the system. A threat to its security is bad for the public as a whole, and maintaining its operation is important to everyone: the system has to continue to work. Contrary to popular belief, many threats are in fact not malicious; they are accidents, or are otherwise caused by the overloading of the system or some of its components, and by its disruption via single or multiple points of failure. Secondly, some coordination of scarce resources is needed; however, the extent to which there are scarce resources on the Net is, in fact, debatable. The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for maintaining unique identifiers in the domain name space. Originally there were four domains and, eventually, it was decided to attach human-friendly names to the underlying numbers. In the beginning, the majority of the connections were in the US, with only a few international connections; since the beginning, however, there was the idea of a system as distributed as possible, and over time, pressures increased to expand it. Originally, there were about 4 billion IP addresses available. In the DNS’s early days, it was thought that this number would last forever – now, the IPv4 system is close to depletion. We will now see a rise of the IPv6 system, which will require a transition, and in this transition period there may be some issues, as IPv4 and IPv6 are not natively interoperable. Thirdly, some governance is needed for the suppression of undesired behavior, from impolite speech to identity theft, from espionage to extortion and, of course, child pornography. This is a controversial area, of course, because “one man’s freedom is another man’s pain”.
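As a rough, back-of-the-envelope illustration of the scarcity issue mentioned above (a sketch added here for clarity, not part of Crocker’s remarks), Python’s standard ipaddress module can be used to compare the sizes of the IPv4 and IPv6 address spaces:

    # IPv4 offers 2**32 addresses (about 4.3 billion), now close to depletion;
    # IPv6 offers 2**128, roughly 3.4 * 10**38.
    import ipaddress

    ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses
    ipv6_total = ipaddress.ip_network("::/0").num_addresses
    print(f"IPv4 address space: {ipv4_total:,}")
    print(f"IPv6 address space: {ipv6_total:.2e}")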

As the Internet began to grow, there was some conversation about who would be in charge of all this. At first, Jon Postel single-handedly managed the system, simply updating the hosts.txt file when needed. Of course, this quickly became too much, so ICANN was created and incorporated as a non-profit in California. It has relations with the US government due to the renewal of its contract with the Commerce Department to perform the Internet Assigned Numbers Authority (IANA) functions. Today, Internet governance brings in a lot of people who want to use the Internet as a pawn in pursuit of their own objectives, but who are not acting in the Internet’s best interest. What has made the Internet blossom is keeping it as unrestricted as possible (in stark contrast to the telephone system), leaving innovation at the edges, and the same principle applies to the DNS. As there is no technical reason either to change the structure or to prohibit additional domain name systems from being created, ICANN’s last “big decision” has been to lift most restrictions on gTLDs and to open up an application process.

Law scholar Matthew Schruers centered his remarks on the relationship between copyright and Internet architecture. As the Internet expands, the scope of government power becomes far more limited; governments have found it easier to regulate information intermediaries than the sources of information themselves. There are four regulating forces, or tools: law, norms, architecture and markets. We are increasingly witnessing attempts to regulate architecture in order to regulate something else. SOPA and PIPA were the extension of congressional strategies to regulate intermediaries, and this included the DNS. Within these debates, and given the very different levels of technical competence on the Hill, the phone book model became really important, because it could clearly convey the idea that these laws were like removing pages from the phone book. As we will see later, SOPA and PIPA did not come into force because of widespread public outcry. These bills would have allowed law enforcement agencies to seize domain names as if they were physical property; if a domain name were removed, users would still be able to reach the website by using its IP address, but would no longer be able to get to it by typing in the alphanumeric name – and for most people this is a big enough obstacle.
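To illustrate the limits of the phone-book model evoked here (a hypothetical sketch, not drawn from Schruers’ remarks): even if a name were removed from the DNS, the server behind it would remain reachable at its numeric address, for instance over plain HTTP with the name supplied only as a Host header:

    # Sketch: reach a site by literal IP address, the path that remains open
    # to users when only the domain name, not the server, has been seized.
    import http.client
    import socket

    name = "example.org"                # arbitrary illustrative domain
    ip = socket.gethostbyname(name)     # look up the address while the name still resolves
    conn = http.client.HTTPConnection(ip, 80, timeout=10)
    conn.request("GET", "/", headers={"Host": name})
    print(ip, conn.getresponse().status)
    conn.close()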

The way in which architecture regulates is not the same way in which law regulates. Norms for a particular type of conduct are very fluid, in terms of the community and how it applies them; laws are enforced in a leaky way (especially IP law), and they need a judicial system to be enforced. Architectural enforcement is, in this sense, “perfect”: with laws, compliance is voluntary, we comply with them by choice; with architecture-based enforcement, compliance is coerced, there is no choice. Finally, law is inherently nuanced, and there are exceptions to it; architecture is absolute, it allows a possibility or it doesn’t, and there is no capacity for exceptions. The US Government is an example of this: recently, it used an intermediary, Go Daddy, to seize domain names registered in Spain; the activity at stake was lawful in Spain, but in the US it was not. Another example is the Dajaz1 website, which sometimes released pre-release songs (often leaked to the website by the music promoters themselves), so the RIAA urged the US government to seize the domain via the Utah-based Fast Domain, Inc. It turned out that the legal basis, in both of these cases, was not sound, and the sites were reinstated, but in the end, free speech was suppressed a priori for two years.

Luke Pelican introduced the SOPA/PIPA controversy and the role of civil society in successfully putting a stop to the legislation. Both bills (the acronyms stand respectively for Stop Online Piracy Act and PROTECT IP, itself an acronym of Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act) were aimed at combating digital piracy, and presented to the public as legislation that would help protect US jobs and industries. Critics, on the other hand, said these bills undermined Internet freedom and threatened free speech, and could actually harm the US economy, as startup companies dependent on user-created content were more likely to be sued under the legislation.

Further complicating the controversy were challenges in explaining some of the technical problems to the general public. Companies, public interest groups, and technical experts reviewed the technical provisions in the bills and raised their concerns publicly, concerns which other groups turned into meaningful action. Fight for the Future, an activist group, led a campaign against a related copyright bill in October 2011, arguing that if the bill became law, then people like Justin Bieber could have been sent to jail instead of becoming musical successes. The “Bieber in Jail” campaign received a lot of attention from various media groups and shows like the Colbert Report. During American Censorship Day, a protest of SOPA and PIPA held on November 16, 2011, several advocacy groups framed the issue of these bills as the imposition of an American censorship system rather than as a matter of piracy. The blogging platform Tumblr auto-censored its site as part of this awareness campaign and encouraged its users to contact Congress. Overall, the American Censorship Day protests resulted in 84,000 phone calls and over a million emails to Congress, one of the biggest public outcries over an Internet-related issue. It had seemed a foregone conclusion that these bills would pass, so, on January 18, 2012, over 115,000 websites joined in a massive web “blackout” as part of a concerted effort to stop the legislation. DNS blocking provisions were included both in SOPA and in PIPA; eventually, the sponsor of SOPA said he would remove these provisions, after talking with technical experts. The SOPA/PIPA case is likely to have encouraged more people, including lawmakers and regulators, to learn some of the technical aspects of the Internet’s daily workings, and to gain a better understanding of how this facility we use daily works in practice. And this is a positive outcome that exceeds the stalling of the bills.

 

New actors in Internet governance: privatization, infrastructure, alternatives

The afternoon panel, moderated by Nanette Levinson, Associate Professor, School of International Service, American University, broadened the discussion to evolutions in Internet governance and actor participation in it, from the private sector’s increasingly crucial role in content regulation, and in placing restrictions on freedom of expression, to peer production collectives proposing “creative disruption” as a response to infrastructure-based enforcement. The discussion featured panelists Fiona Alexander, Associate Administrator, Office of International Affairs, National Telecommunications and Information Administration; Matthew Hindman, Associate Professor, George Washington University; Francesca Musiani, Yahoo! Fellow, ISD, and Shane Tews, Chief Policy Officer, 463 Communications.

Francesca Musiani, Yahoo! Fellow at the ISD, presented preliminary findings from her current research project. She argued that, in a discussion about new actors and changing balances in IG, it was worth including a discussion about the people who think about “second-degree” governance by infrastructure: people who, instead of addressing the DNS in its current form, look for ways to build an alternative one.

Between 2010 and 2011, the WikiLeaks case prompted a new wave of discussions about a “new competing root-server”, able to rival ICANN. An alternative domain name registry was envisaged: a decentralized, peer-to-peer (P2P) system in which volunteer users would each run a portion of the DNS on their own computer, so that any domain made temporarily inaccessible might still be accessible on the alternative registry. Instead of simply adding a number of DNS options to the ones already accepted and administered by ICANN and its registrars, this project would try to supersede ICANN in favor of a distributed, user infrastructure-based model. There are a number of issues and open questions with this project. Two fundamental operations are served by the DNS – name registration and name resolution – which are usually thought of jointly, but one could foresee replacing just one of them. The function that a P2P DNS project would be tackling (alternative root? .p2p top-level domain?) needs to be stabilized. P2P architecture does not allow for simultaneous optimization of all needed features, but calls for compromises. Finally, even if the alternative takes hold, a long co-existence with the current DNS should be expected.
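As a purely illustrative sketch of what distributing the name space over volunteer peers could look like (none of the actual P2P DNS proposals is reproduced here; the peer names and .p2p domains below are hypothetical), each name can be deterministically assigned to one peer by hashing it, so that no central registry holds the whole mapping:

    # Toy peer-to-peer name registry: a name is assigned to a volunteer peer by
    # hashing it, so registration and resolution for that name bypass any central root.
    import hashlib

    peers = ["peer-a.local", "peer-b.local", "peer-c.local"]  # hypothetical volunteer nodes

    def responsible_peer(name: str) -> str:
        digest = hashlib.sha256(name.encode("utf-8")).digest()
        return peers[int.from_bytes(digest[:4], "big") % len(peers)]

    for domain in ("wikileaks.p2p", "example.p2p"):
        print(domain, "->", responsible_peer(domain))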

There are social and political conditions of feasibility for radical alternatives such as P2P DNS. In the event that any of the decentralized DNS projects matures to the stage of significant user appropriation, the crucial issue may become users’ trust in other users: they will need to rely on other peers in the network to direct them, and it is one thing to trust OpenDNS or Google, and quite another to do the same with a random computer. And finally, it is a matter of governance: the original questions that cause P2P DNS proposals to proliferate are deeply political: they are about control, freedom, and censorship. Technical solutions to controversial issues that have a political component should, at some point, be accompanied by evolutions of institutions, lest the governance of the Internet be reduced to a war of surveillance and counter-surveillance technologies, of infrastructure cooptation and counter-cooptation.

 

The “Turn to Infrastructure” and the future of IG

In the conference’s final keynote, Laura DeNardis, Associate Professor in the School of Communication at American University, tied together the themes discussed during the day, placing particular emphasis on recently raised concerns about the future of Internet governance, and on the need to preserve interoperability. Most of these issues are discussed in her book “The Global War for Internet Governance”, forthcoming with Yale University Press. The book describes the different layers of how Internet governance works; it outlines the current state of global debates, and the balance of global political and economic powers related to Internet governance, civil liberties and national security, innovation policies and the preservation of the decentralized nature of the Internet.

Internet governance functions, even though technologically complex and often outside of public view, are becoming political proxies for global political struggles and conflicting values. In this context, the DNS is one important (and relatively well-working) component of the broader global Internet ecosystem. The very definition of Internet governance is contested, but it generally refers to the design and administration of the technologies necessary to keep the Internet operational, as well as the debates around those technologies, such as critical Internet resources, standards, and protocols needed to operate the network. There is an intersection between Internet architecture and content mediation; people’s Internet access is cut off (or access restrictions are discussed) to control content sharing and communication. The evolutions in Internet connectivity, a highly private area mostly under the control of Internet companies and their agreements, raise a number of concerns in terms of stability and censorship. The conference addressed three main themes.

First, “arrangements of technical architecture are arrangements of power” and “infrastructure is never just infrastructure”. Internet governance is also about some understanding of complex technical systems such as the DNS, and about large-scale debates and mobilization, such as the SOPA and PIPA debates; the technical complexity is often paralleled by the complexity of institutions, and political structures are often embedded in technological hybrids. As science and technology studies scholar Susan Leigh Star once said, we need to invert the common-sense notion of infrastructure, taking what has often been seen as ‘boring’ and behind the scenes and bringing it to the fore. Internet governance scholars such as the organizers of this conference, all involved in GigaNet, embrace this perspective in relation to Internet governance.

Second, information technology infrastructure is becoming a proxy for power control, a move that is bound to have a number of unintended consequences. Corporate media producers have lost power over the monetization of their content and are looking to infrastructure as a means of reacquiring that power; some global choke points, despite the Internet’s overall decentralization, do exist and the extent to which they are subject to “stress fractures” deserves close consideration. While these control points – some virtual, some material, most often a hybrid of both – do exist, there is often not enough public understanding of how technology works.

Third, the multi-stakeholder discussion often reveals its limits, mostly in contexts of privatization of Internet governance. Much Internet governance is being done through new forms, not by governments; examples are regional Internet registries and the private telecom companies managing the Internet’s backbone. Privatized areas are enacting policies, and the crucial actor in Internet governance is often shifting from governments to the private sector. From “delegated censorship” to “delegated law enforcement”, the spotlight is on private entities.

These three themes raise the question of the challenges facing the future of Internet governance and, therefore, Internet freedom. First, there needs to be a focus on issues of interoperability, which is easy to take for granted. In many ways, we have more connectivity than ever, but there is no interoperability among social media platforms, Internet voice software, or cloud computing services in the way there is for email or web services. For example, Skype, while an excellent application, is based in part on proprietary approaches. There is a shift from an open, unified web, in which the publication of open standards has helped foster innovation and compatibility among products, to an environment that de-prioritizes interoperability and places constraints on interconnection. Constraints on interoperability are constraints on innovation itself.

The DNS is a foundational technical system necessary for the Internet’s operation, handling billions of queries per day, and it is increasingly used for content blocking functions for which it was not designed. If DNS query resolution is not universally consistent, this may have serious implications for the universality and stability of the global Internet.

To conclude, the Internet is a very complex system, governed while in a state of constant flux; its governance entails issues of both private control and civil liberties; it requires technical design as well as new institutional reforms; and this governance is not fixed, any more than technical architecture is fixed. The consequences of changes to this system should be carefully examined as we move forward.

 

 

Francesca Musiani

Chercheuse postdoctorale, MINES ParisTech. Yahoo! Fellow in Residence, Georgetown University.


Terranet “Be the network” (vidéo)

Nous avions évoqué le projet Terranet il y a quelque temps. Nous venons de découvrir leur nouveau clip de présentation, qui explique la démarche et le fonctionnement de cette technologie. La vidéo est très intéressante et ses protagonistes prononcent des mots qui nous intéressent particulièrement (“local network”, “Be the network”, “existing infrastructures”, “less infrastructure”, “access”, “environnement”, “mesh gate”, “digital divide”, etc.)…

Découvrez la vidéo:

Terranet.se

 

François Huguet

PhD student in Communication Studies at the Codesign Lab & Media Studies at Telecom ParisTech. Supervisor: Annie Gentès / Co-supervisor: Jérôme Denis


Le “Meshaging”: pourquoi le décentralisé?

Commotion possède un nouveau site Internet (commotionwireless.net) depuis quelques mois seulement (nous avions même eu la chance d’assister aux réunions de pré-lancement (test) du site web en août 2012 à Washington). Lors de notre exploration de cette plateforme, nous étions pourtant passés à côté de ce post très intéressant, qui présente (et “justifie” dans un sens) le travail et les réflexions des ingénieurs de Commotion (projet mené par l’Open Technology Initiative au sein de la New America Foundation à Washington DC ; voir ce précédent billet).

Ce texte a plus d’un intérêt, car il montre de quelle manière le Work Department (auteur du billet en question) du projet Commotion considère l’utilisation d’architectures distribuées comme une plus-value. Il cherche également à expliciter le choix de développer des services distribués ET mobiles dans une ville telle que Détroit. Il montre enfin l’attention portée par ces ingénieurs à l’écologie globale dans laquelle le système technique qu’ils développent prend place :

 

As we worked through other parts of the Commotion project, we brainstormed ideas for wireless mesh applications. We noticed that our ideas would often replicate existing web services — e.g. a local fileserver for music or movies, or a local message board for neighborhood discussion. We began to wonder what would make a community wireless application more appealing than using a centralized Internet-based application. We agreed that it wouldn’t be enough to offer someone the simple satisfaction of knowing their data is decentralized… there would need to be some other benefits to using a local application.

What would these benefits be? What is special about the architecture of a community wireless mesh network? In pondering these questions, we considered what is provided by these networks — earlier, I mentioned that the networks provide internet connection sharing and local file sharing, but that’s only a part of the story. These networks also provide something much grander: they become community institutions. Unlike the Comcast hardware that is bolted out of arm’s reach on a utility pole, our community wireless equipment lives on our porches, in chicken coops, in our bell towers, and next to our desks. Each piece of equipment has a story behind it. We know who held the ladder while it was being installed and who lent their hammer drill to run a cable up to it.

A community mesh wireless router’s IP address is more than a 32-bit number. It has history and meaning. How can we build applications that reflect and enhance this?

 

LIRE LE TEXTE COMPLET ICI (https://commotionwireless.net/blog/exploring-meshaging)

 

Références:

  • http://oti.newamerica.net/
  • https://commotionwireless.net/
  • http://detroitdjc.org/

 

 

François Huguet

PhD student in Communication Studies at the Codesign Lab & Media Studies at Telecom ParisTech. Supervisor: Annie Gentès / Co-supervisor: Jérôme Denis


Réseaux distribués communautaires, vers une nouvelle pédagogie?

De quelle(s) manière(s) les débats liés à la liberté de communication sur Internet mettent-ils en scène un « conflit de légitimité démocratique » ?

C’est à partir de cette question que Félix Tréguer analyse la façon dont s’affrontent deux camps, censés pour l’un :

« Défendre l’application du droit positif à Internet au nom du primat de la démocratie représentative (et de ses institutions, législatives ou judiciaires par exemple) »

et de l’autre,

« revendiquer des pratiques communicationnelles en marge de la légalité, remettant en cause le droit de la communication au nom, justement, des valeurs démocratiques »

     Référence : Internet, espace d’une citoyenneté insurrectionnelle sur le blog Mediapart Internet & Démocratie, le 3 septembre 2012

Partant de cet exemple et de cette « division latente de tous les débats relatifs à la liberté d’expression sur Internet », Félix Tréguer[1] construit un argumentaire éclairant sur nombre de questions qui nous intéressent au sein de ce projet de recherche, notamment celle de savoir par quels biais des citoyens revendiquent des pratiques « démocratiques » à partir de différents services web (décentralisés). En effet, beaucoup de discussions relatives à la liberté d’expression sur Internet s’appuient (uniquement) sur « les débats autour du partage non autorisé d’œuvres soumises au droit d’auteur que de nombreux législateurs et juges cherchent à combattre mais qu’une partie de la population revendique comme une pratique démocratique » (voir notamment les travaux de Philippe Aigrain et les initiatives de la P2P Foundation). Depuis 1999 et la naissance de Napster, qui leur a donné une visibilité grand public, les technologies peer-to-peer ou réseaux informatiques distribués ont été « considérés presque exclusivement comme des menaces pour l’industrie des contenus numériques. L’usage principal de ces réseaux par le public étant le partage non autorisé de fichiers musicaux ou vidéo, le problème du droit de propriété intellectuelle, du droit d’auteur notamment, s’est imposé en tant que cadrage médiatique et politique prédominant des réseaux P2P et de leurs usages » (Musiani, 2011). En effet, leurs qualités intrinsèques, en dehors du débat relatif aux données qui circulent sur ce type de réseau, ont souvent été ignorées, et la question de leur place face à une pensée de l’informatique centralisée (les « géants » de Francesca Musiani[2]) s’est bien souvent retrouvée mise à l’index…

“Sharing” – Toronto, Ontario – http://tobanblack.net/blog/ – (CC BY-NC 2.0)

Pourtant, excepté cette problématique de la propriété des données (sécurité, emplacement des données) et de leur utilisation par des tiers (privacy), les « vertus » (Elkin-Koren, 2006) et les façons de « faire réseau » de ces technologies distribuées sont aujourd’hui en mesure de nous faire repenser la définition même du politique et de sa pratique. Les cadrages médiatiques et politiques des réseaux distribués pourraient peut-être même évoluer, et l’on pourrait trouver de nouveaux éléments de réponse au conflit de légitimité démocratique évoqué précédemment.

Pour en revenir à la problématique de ce billet : au-delà de pointer les rapprochements possibles entre les « manifestations de la citoyenneté insurrectionnelle (terme emprunté à James Holston) dans l’espace urbain et les formes de résistance dont Internet est le théâtre », les questions que soulève Tréguer interrogent la constitution même de nos infrastructures communicationnelles et déplacent le débat sur un plan qui nous semble plus intéressant :

Comment ces infrastructures « font-elles politique »[3] ? En d’autres termes, de quelle manière représentent-elles une certaine structure et un fonctionnement (méthodique, théorique et pratique) d’une communauté, d’une société, d’un groupe social ?

Pourquoi passent-elles aujourd’hui pour des technologies disruptives alors que le principe d’architecture distribuée existe depuis longtemps (voire depuis toujours) et supporte des services web bien connus qui ne semblent pas vraiment « disruptifs » ou ne posent pas de problèmes juridiques particuliers (Skype en tête, qui repose en partie sur une architecture distribuée) ?

Ce type de questions sur les artefacts et leurs aspects politiques n’est, lui non plus, pas nouveau[4]. Ainsi, dès 1989, Madeleine Akrich[5] soulignait l’importance de regarder les choix techniques sous-jacents aux architectures des technologies qu’elle observait en Afrique et ailleurs dans le monde (Akrich, 1989). En 1978, après une année d’immersion dans les usines Citroën parisiennes, Robert Linhart[6] évoquait quant à lui le rapport que les hommes entretiennent entre eux par l’intermédiaire des objets : ce que Marx appelait les rapports de production. Il dénonçait dans cette enquête de sociologue embedded la « dictature des objets », la violence du lieu-usine en lui-même et des objets techniques qui le constituent (on pourrait remonter ainsi la chaîne des auteurs divers et variés qui ont étudié les objets non-humains et leurs rôles d’acteurs au sein de systèmes sociotechniques…). Beaucoup de chercheurs en SHS / STS ont donc « calmé » quelque peu certaines ardeurs techno-optimistes qui voulaient attribuer des fonctions particulières à diverses technologies et/ou en masquer d’autres…

Mais aujourd’hui, après avoir vu apparaître en 2011 de nouvelles « arabités numériques » (Gonzalez-Quijano, 2012) et différents mouvements indignados, occupy ou anonymous, qui ont été prétextes à de nombreuses analyses autant optimistes que pessimistes à propos de l’avenir des mouvements citoyens et des « citoyennetés insurrectionnelles » abordées précédemment, il semble encore plus important de porter un regard clair et objectif sur l’architecture même des services web, sur ce qu’ils permettent, ce qu’ils changent et sur la manière dont leurs usages changent eux-mêmes en retour (à l’heure où j’écris ce billet, owni.fr vient de publier un article à propos du rôle de « l’infrastructure internet » lors de la toute récente campagne présidentielle américaine menée par le camp démocrate et qui a vu la victoire de son candidat Barack Obama. L’intérêt pour ce type de question et l’analyse de ces objets vis-à-vis de la chose politique semble donc croissant…).

Prenant comme point de départ les discours portés sur les origines numériques des soulèvements arabes de 2011 et le rôle qu’auraient pu y jouer des architectures distribuées, notre travail de thèse consiste à comprendre les dynamiques sociopolitiques inhérentes à une technologie particulière, celle des réseaux distribués mobiles (MESH, MANET) qui correspondent à une architecture informatique décentralisée réellement « mobile » et « mouvante ».

En ce sens, le projet Commotion ou Internet in a suitcase révèle à bien des égards un nombre important d’enjeux inhérents à cette étude des réseaux distribués mobiles. Comme le soutiendra très prochainement Benjamin Loveluck (cf. note 3), Internet peut être compris comme un « libéralisme informationnel » qui engendre différentes formes d’auto-organisation. Concernant Commotion, et pour étudier de près les acteurs de ce projet et l’ensemble de ses ramifications, on peut déclarer d’ores et déjà que ce dernier se définit comme une boîte à outils et non pas comme une « killer application » mobilo-distribuée qui révolutionnerait le monde de l’information et de la communication.

 

Commotion (COMMunity Open Technology Information Online Network, selon la définition de Josh King[7]) est avant tout, et pour l’ensemble de son équipe de développement, une boîte à outils qui rassemble différents principes d’utilisation des NTIC, et notamment ceux d’auto-organisation et de plateforme commune uniformisée :

« It’s to pull together all these sort of existing mesh networking open source wireless mesh networking technologies and making sure that they’re easy and secure in writing graphical interfaces and documentation and adding in security encryptions. And the end point of the project is a series of sort of software application bundles for different platform » ;

« Envisionned it as a toolkit or Platform for people to be able to build a wide variety of different network ; […] Commotion is also something that provides an opportunity to do education and outreach around this idea of owning your own infrastructure. […] It’s not just a soft, it’s really a set of principles that are about autonomy and ownership your own communication capabilities » ;

« I think that I view the potential of Commotion as really a way to democratize infrastructure but in a different sense from the circumvention sense »

     (INTERVIEWS Technical & Field Team – Commotion Project – Open Technology Initiative – New America Foundation – Washington DC – Août 2012 – François Huguet)

Pourtant, Commotion est presque toujours mis en avant comme une technologie disruptive (surtout dans les discours journalistiques de 2011, année des soulèvements arabes) intervenant en temps de crises, de blocages, de censures, de guerres, de révolutions:

« Internet in a Suitcase is basically a software program aimed at giving people in conflict or disaster zones the ability to establish a secure, independent wireless network over their computers and cell phones. While the system (which, despite its name, involves neither hardware nor a suitcase) is being tested and is usable right now, [Sascha] Meinrath [directeur de l’Open Technology Initiative à la New America Foundation] and his team of developers around the globe are holding off on releasing it to groups like the Syrian rebels until they are confident that it can resist large-scale hacking by governments. »

     (‘Internet in a Suitcase’ ready for field testing, Foreign Policy, John Reed, 5 Novembre 2012 ;

    Concernant les présentations de Commotion par des journalistes, voir également les articles d’Yves Eudes dans Le Monde (30 août 2011), « Commotion, le projet d’un Internet hors de tout contrôle », celui de Chloé Woitier dans Le Figaro (1er septembre 2011), « Commotion, l’accès libre et anonyme à Internet », et celui de John Markoff et James Glanz dans The New York Times (12 juin 2011), « U.S. Underwrites Internet Detour Around Censors ».

Partant de ces faits, on pourrait presque supposer que c’est de cette manière que l’Open Technology Initiative a pu faire financer le projet Internet in a suitcase – Commotion : en « surfant », en quelque sorte, sur le 21st Century Statecraft qui, selon les mots de la journaliste de The American Prospect Nancy Scola, est « hoping to spread the American tech gospel to the rest of the world »… Mais ce serait réduire Commotion à un simple projet cyberdiplomatique américain, et les choses ne sont pas si simples (voir notamment les billets publiés sur ADAM à propos du contrôle des infrastructures et celui qui mentionne l’intervention de Ben Scott à l’International Summit for Community Wireless Network)… Alors à quoi renvoie cette communauté « imaginée », pour reprendre le terme de Benedict Anderson (1996), utilisatrice de réseaux distribués mobiles, idéaux types de l’auto-organisation sur Internet ? De quoi ces réseaux distribués sont-ils le nom ? Les États-nations pousseraient-ils au développement de tels outils pour modifier les formes d’application de la démocratie ?

La réponse se trouve peut-être dans les vertus de cette citoyenneté insurrectionnelle qu’évoque Félix Tréguer après sa lecture de Holston, dans la nécessité de « renouer avec une approche ethnologique dans la politique de la ville, et se nourrir des contestations citoyennes pour engendrer une dynamique de changement social ». La ville, l’espace urbain, apparaît donc comme une composante essentielle et nécessaire de ce type de réseaux et de cette catégorie de soulèvements[8]. Manuel Castells tenait d’ailleurs des propos similaires en juin 2011[9] à propos de cette urbanité, initiatrice de nouvelles formes de communauté à liens plus ou moins faibles mais hyper-connectées :

« cette fois-ci les réseaux numériques pourvoient des formes flexibles et changeantes d’organisation et de débat, d’appel au secours, de distribution d’idées et d’initiatives, de décision collective distribuée. Les braves gens du mouvement ne sont jamais seuls, sont toujours connectés, et donc, ensemble n’ont pas peur. Leur mot : « Toutes ensemble, nous pouvons ». Pouvoir quoi ? Pour le moment, dans le discours du mouvement, l’essentiel est de pouvoir être ensemble et, ensemble, découvrir une autre démocratie chemin faisant. »

Des propos proches de ceux que tiennent les instigateurs du projet Commotion, lui-même financé par le département d’État américain… Reste donc à chercher le lieu et le moment où cette réflexion sur les architectures, sur leur nécessaire décentralisation[10], s’articule avec la dimension critique et propositionnelle des « insurgés citoyens », l’instant où l’on comprend, ne serait-ce qu’un peu, la matérialité des réseaux que nous formons et dont nous sommes des nœuds. Mais quid des compétences requises de la part de ces citoyens (et d’un appareil étatique[11]) en vue d’une participation active dans le mouvement social de transformation, dans cet « être ensemble » nouveau ? Quelles compétences doit avoir un usager grand public dans un tel cadre d’utilisation de plateformes d’« insurrection citoyenne » (on pourrait prendre comme exemple les « cosmopolitismes » multiples que sont les mouvements indignados, occupy, anonymous) ?

D’ores et déjà, les pistes soulevées par Serge Proulx[12] nous semblent intéressantes à plus d’un titre et posent des questions nécessaires au franchissement de cette étape que représente le devenir citoyen insurrectionnel « médié » par des réseaux distribués mobiles :

« Il apparaît évident que nous sommes dans une époque où triomphe l’industrie de l’infotainment. Les utilisateurs sont très majoritairement des consommateurs plus ou moins passifs des dispositifs numériques. Il reste que si les TIC doivent être pensées comme moyens pour favoriser l’émancipation sociale et maximiser la puissance d’agir des citoyens – qu’ils soient du Sud ou du Nord – il apparaît nécessaire de penser les potentialités civiques de la communication numérique à l’extérieur du cadre exclusif de la consommation (Proulx et Klein, 2012). Le défi pour la démocratisation technique consiste à chercher une troisième voie… Comment approcher des usagers qui se déclarent « intelligents » et « politiquement engagés » mais qui ne veulent pas nécessairement trop investir dans l’apprentissage technique des machines ? Voilà le défi. » (Proulx, 2012).

Et lorsque l’on définit les réseaux distribués mobiles (communautaires) comme quelque chose en mesure d’apprendre à ses utilisateurs ce qu’est une infrastructure, et non plus comme un simple logiciel ; quand ce même quelque chose se donne à voir et à entendre comme une série de principes à propos de l’autonomie des réseaux de communication, de la possibilité de redessiner l’information en la « relocalisant », en la distribuant d’une autre façon et en apprenant aux utilisateurs la façon dont elle « circule » dans ces réseaux ; on s’approche peut-être de cette troisième voie…

 


[1] Réflexions et articles à retrouver sur le carnet de recherche de Félix Tréguer, We the Net, espace de réflexion sur les enjeux démocratiques liés à la protection de la liberté de communication sur Internet. Félix Tréguer est doctorant à l’EHESS Paris.

[2] Francesca MUSIANI (2012), Nains sans géants, architecture décentralisée et services Internet, thèse de doctorat (dir. Cécile Méadel, CSI-ENSMP), Ecole nationale supérieure des mines de Paris.

[3] Sur ce sujet, nous suivons avec attention les travaux de Benjamin Loveluck, qui soutiendra sa thèse de doctorat intitulée La liberté par l’information. Généalogie politique du libéralisme informationnel et des formes de l’auto-organisation sur internet le 4 décembre 2012 à l’EHESS Paris.

[4] Voir notamment Winner, L. (1986), « Do artifacts have politics? », The whale and the reactor: a search for limits in an age of high technology, University of Chicago Press, Chicago, p. 19-39 ; Cooper, G., Woolgar, S. (1999), « Do artefacts have ambivalence: Moses’ bridges, Winner’s bridges and other urban legends in S&TS », Social Studies of Science, SAGE, Londres, p. 433-449.

[5] Akrich, M. (1989), « La construction d’un système socio-technique. Esquisse pour une anthropologie des techniques », Anthropologie et Sociétés, Volume 13, numéro 2, 1989, p. 31-54 ; voir aussi Akrich, M. (1987), « Comment décrire les objets techniques ? », Techniques et Culture, vol. 9, p. 49-64.

[6] Linhart, R. (1978), L’Etabli, Editions de Minuit, Paris.

[7] Technical leader à l’Open Technology Initiative, « codeur » de Commotion.

[8] Notons néanmoins que les révoltes arabes, notamment en Égypte, s’inscrivent dans un long processus de contestation et de revendications démocratiques, débuté en 2004 par des grèves massives d’ouvriers du textile dans une (petite) ville du delta du Nil (Mahallah el-Kubra) et par une « révolution médiatique » débutée quant à elle dans les années 1990 (pareil pour la Tunisie, où le combat ouvrier et syndical (dans de petites provinces) avait fait remonter depuis de nombreuses années des aspirations à plus de démocratie et une remise en cause du système Ben Ali). La révolution arabe de 2011 ne s’est pas faite dans un contexte vierge de toute revendication civique, politique et socio-économique. Pour de plus amples précisions, voir les travaux d’Yves Gonzalez-Quijano et de Tourya Guayybess sur la télévision satellitaire et les mutations médiatiques du monde arabe depuis l’arrivée de l’imprimerie.

[9] Castells, M. (2012), « Ni dieu ni maitre: les réseaux », FMSH-WP-2012-02.

[10] Dans l’ensemble de nos entretiens avec les équipes de Commotion (Field team, Policy team, Technical team, août 2012, Washington DC), les personnes interrogées mettent presque tout le temps en avant les aspects « décentralisés » de leur software, la « plus-value » que cela apporte à leurs travaux. Pareil concernant les militants des mouvements occupy européens, qui utilisèrent et développèrent durant l’année 2011 des réseaux sociaux décentralisés tels que Lorea et Crabgrass ; voir Gentès, A., Huguet, F. (2012), « Les alternatives aux réseaux sociaux – l’architecture distribuée et le design de médias », in Stiegler, B. (2012), Réseaux sociaux, culture politique et ingénierie des réseaux sociaux, Paris, FYP.

[11] « Libérer » le spectre radio pourrait être un vecteur d’innovation à ce niveau-là, comme l’a été la « libéralisation » de la bande FM dans les années 80. Voir notamment la tribune de Félix Tréguer et Jean Cattan (2011) à ce sujet : Le spectre de nos libertés

[12] Proulx, S. (2012), La Puissance d’agir des citoyens à l’ère du numérique : cyberactivisme et nouvelles formes d’expression politiques en ligne, in S. Najar, dir., Mouvements sociaux en ligne et cyberactivisme en Méditerranée, Karthala, Paris.

 

 

François Huguet

PhD student in Communication Studies at the Codesign Lab & Media Studies at Telecom ParisTech. Supervisor: Annie Gentès / Co-supervisor: Jérôme Denis


Compte rendu de l’International Summit for Community Wireless Network (Barcelone, octobre 2012)

Barcelona bathed in golden light – Marcel Germain – wirelesssummit.org

Du 4 au 7 octobre 2012, se tenait à Barcelone le sommet international des réseaux communautaires sans fils (International Summit for Community Wireless Network #IS4CWN).

Quatre jours durant, l’Université Polytechnique de Catalogne a vu défiler une partie importante des acteurs, à la fois professionnels et institutionnels, de ce type de réseaux distribués, qui se révèlent être bien plus que de simples technologies différentes de celles dites “centralisées”.

Cet évènement, deux ans après le sommet tenu à Vienne en 2010, était une occasion importante pour l’ensemble des acteurs de ce secteur de se rencontrer, de consolider des relations déjà existantes et de mutualiser des actions globales de concertation et de communication de la communauté “MESH network, MESH technology”.

Tour d’horizon de ces quelques jours de “réseautage” au sens propre comme au sens figuré…

Making Policy Matter

Le sommet s’est ouvert le jeudi 4 octobre 2012 sur diverses présentations, notamment celles de Sascha Meinrath (directeur de l’Open Technology Initiative – New America Foundation, l’un des fondateurs du projet Commotion) et de Ben Scott (Senior Advisor de l’Open Technology Institute et Visiting Fellow au think tank berlinois Stiftung Neue Verantwortung). Ce dernier faisait notamment partie de l’équipe rassemblée en 2009-2010 autour d’Hillary Clinton au sein du département d’État américain, qui visait à établir le “21st Century Statecraft” (qui, de façon très schématique, pourrait se définir comme le programme de réflexion autour de l’utilisation des nouvelles technologies en termes de “développement” et de “diplomatie” ; voir à ce sujet les travaux de Laurence Allard, Univ Paris 3 IRCAV & enseignante Lille 3).

Scott rappela dans sa présentation le rôle et le regard qu’ont les États sur le développement technique. À partir de son expérience, il présenta la manière dont des observatoires nationaux jouent un rôle dans les processus d’innovation et de développement technique au sein de leur pays, mais aussi en dehors de leurs frontières. Très intéressante, cette communication rappelait au passage les déclarations d’Hillary Clinton à propos des technologies de l’information et de la communication :

We are working at the State Department to ensure that our government is using the most innovative technologies not only to speak and listen across borders, not only to keep technologies up and going, but to widen opportunities, especially for those who are too often left on the margins.

Hillary Clinton, agenda-setting speech to the Council on Foreign Relations, 15 July 2009 (http://prospect.org/article/next-diplomatic-cable)

The talk ended by putting into perspective these partnership logics between States, foundations, companies, research projects and private initiatives. The summit's keywords were already on everyone's lips: civil society, building networks, empowering communities, educating governments, shaping markets through good public policy… The idea of decentralizing Internet services was also very present in Scott's speech, as was the phrase "Internet of the future".

Photo credits: François Huguet

The rest of the day served to introduce all the participants and to let each of them present in turn (during the Community Network Lightning Talks) their projects, the specificities of their deployments, the socio-technical "ecologies" in which they take root, the kinds of communities in which their distributed networks are or hope to be established, and the issues raised by these deployments (security, redistribution of Internet access, innovations and unexpected behaviours observed during early tests, etc.). We saw a long list of projects, including the Athens Wireless Metropolitan Network, Freifunk, Funkfeuer, Wlan slovenija, Arig, Commotion, Guifi and Serval (it was unfortunately impossible to attend every talk, since several workshops ran in parallel).

A "hacklab" for exchanging and configuring machines was also set up in the basement of the UPC. On the tables of this more or less improvised laboratory sat routers, computers, soldering irons, smartphones, radio antennas, Tupperware boxes… No doubt a great deal of networking took place around all these objects and the "tinkering" they so often involve when it comes to building community wireless networks. My laptop, whose logo is an apple, earned me some friendly teasing and invitations to switch to free and open-source software, but it also helped me collect a few more "keywords":

DIY, Open Source, Open Standard, build your own infrastructures, your antennas, your interfaces, etc…

Photo credits: François Huguet

The first day of the conference ended with a talk by Amelia Andersdotter, MEP and member of the Swedish Pirate Party (and director of the EPFSUG, the European Parliament Free Software User Group). Andersdotter discussed the European radio spectrum programme and the debates it has prompted in the European Parliament (everything is, unfortunately, still very unclear; on this subject, follow the work of La Quadrature du Net and of Félix Treguer, who are among those at the forefront of the French and European negotiations on radio spectrum).

Collective Awareness Platforms for Sustainability and Social Innovation & White Spaces

Friday 5 October began with a talk by Fabrizio Sestini (Scientific Officer, European Commission DG Information Society), who is in charge of the "Collective Awareness Platforms for Sustainability and Social Innovation (CAPS)" initiative and call for projects at the European Commission (DG Communications Networks, Content and Technology). Sestini presented the broad lines of this initiative, which could fund many of the projects present at the summit, while questioning the innovators about the logic and dynamics of their research. A perfect opportunity for young projects to find European funding commensurate with their needs. After this talk, several workshops were offered, including one on the use of the white spaces of the radio spectrum.

The afternoon was devoted to case studies of deployments, notably in humanitarian crises (mesh network deployments in Haiti and Colombia: stakes, perspectives and developments). The critical feedback from the various projects was very interesting to observe but, once again, the discussions revolved around routing protocols: how to improve them, how to make them more stable, and how to avoid the pitfalls created by particular geographic configurations.
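To make this concrete, here is a minimal, purely illustrative Python sketch, not taken from any of the projects presented, of the kind of link-quality reasoning these routing discussions revolve around: a metric similar in spirit to ETX (expected transmission count) penalizes lossy, asymmetric radio links, so that the "best" route through a mesh is not necessarily the geographically shortest one. All node names and figures below are invented.

```python
import heapq

# Illustrative sketch only: not the code of any project presented at the summit.
# It mimics the general idea behind link-quality metrics such as ETX, which mesh
# routing protocols use to prefer stable links over short but lossy ones.

def etx(forward_delivery, reverse_delivery):
    """ETX of a link given the delivery ratios measured by probe packets."""
    if forward_delivery <= 0 or reverse_delivery <= 0:
        return float("inf")  # the link is unusable in at least one direction
    return 1.0 / (forward_delivery * reverse_delivery)

def best_route(links, source, target):
    """Dijkstra search for the path with the lowest cumulative ETX."""
    graph = {}
    for (a, b), (df, dr) in links.items():
        cost = etx(df, dr)
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))

    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return float("inf"), []

# A rooftop link with a poor reverse direction is avoided even though it is
# the geographically shorter route.
links = {
    ("roof-A", "roof-B"): (0.9, 0.3),
    ("roof-A", "roof-C"): (0.95, 0.9),
    ("roof-C", "roof-B"): (0.9, 0.85),
}
print(best_route(links, "roof-A", "roof-B"))
```

The only point of the sketch is that route quality is computed from measured link behaviour, which is precisely what becomes unstable in the difficult geographic configurations discussed during the session.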

 

Photo credits: François Huguet

Epic Fail & Toolkits for Community Outreach and Organizing

Saturday was, for me and my research interests, the most interesting day of this hybrid conference and professional summit. It consisted of innovators reporting on their experience, on the mistakes made during their research, on the "epic fails" of their first tests and of the first deployments of decentralized wireless networks around the world. Beyond taking a humorous look at their experience, the aim of this reflexive exercise was to point out the mistakes not to repeat and to connect the different projects with one another. Indeed, community wireless network activists all seem to share the experience of having received electric shocks while plugging in radio antennas on the roof of their building, or of having fallen off a ladder while trying to fix routers to the wall of their house… Since the technical object created by these developer communities becomes quite "material" once antennas have to be installed on rooftops, one could observe here, as in the hacklab, a very strong interest in hardware and in the "tangibility" of communication infrastructures.

After this first discussion, workshops began and I followed the one led by Greta Byrum, Nina Bianchi and Jonathan Baldwin, entitled "Toolkits for Community Outreach and Organizing". After presenting the work of the Detroit Digital Justice Coalition (see also Stéphanie Vidal's article on the new dynamics of urban renewal based on digital development in Detroit) and that of Jonathan Baldwin, developer of a collaborative mesh mapping application in the Red Hook neighbourhood (Brooklyn, NYC), the discussion turned into a simulation of the activities they run with their audiences during outreach events.

Indeed, Baldwin, Byrum and Bianchi are pedagogues and activists more than relentless technologists out to develop THE killer application and THE business model capable of revolutionizing the Internet market. Close to the Maker Faire, Do It Yourself and community-organizing movements, their objective is social and civic rather than purely economic. We talked at length about their professional references, and other terms surfaced as the discussion went on: John Dewey, pragmatism, active pedagogy.

For them, as for most of the people we interviewed at this summit or during our fieldwork at the Open Technology Initiative in Washington, the technological tool (mesh technologies, here the Commotion software) is considered a means rather than an end: the objective is to rebuild a link that has broken somewhere, and the case of Detroit is quite telling, the federal government having largely disengaged from the city's economic and industrial collapse… The talk is thus of toolkits for rebuilding, for tinkering with things that have come apart, starting with the social bond. It is worth noting that, in terms of references, one of the speakers mentioned in passing David Simon's work on Baltimore and New Orleans, two struggling cities analysed in the television series produced by HBO, which amount to a genuine exercise in urban ethnography. The references, then, are to be found in rather atypical works…

Is mesh networking a "band-aid technology", a civic development tool creating new kinds of user communities that are more autonomous with respect to their communication infrastructures? This remains to be verified, but these are the lines emerging from this kind of gathering, lines also found in many current "socio-cultural movements" that take information and communication technologies as their object of mobilization and action: fab labs, hacklabs, Maker Faire, DIY, Anons, etc.

 

 

References:

  • Nancy Scola, "The Next Diplomatic Cable", http://prospect.org/article/next-diplomatic-cable
  • Laurence Allard, "La diplomatie du téléphone portable à la conquête des pauvres", http://www.monde-diplomatique.fr/2012/05/ALLARD/47679
  • Laurence Allard, "Du Coca au Nokia ? Smart power, philanthrocapitalisme et téléphonie mobile", http://www.mobactu.fr/?p=396
  • Stéphanie Vidal, "Du sans fil pour recoudre Detroit", http://www.slate.fr/story/43031/detroit-wifi

François Huguet

PhD student in Communication Studies at the Codesign Lab & Media Studies at Telecom ParisTech. Supervisor: Annie Gentès / Co-supervisor: Jérôme Denis


Grey Areas in Peer Rental Insurance Begin to Clarify

Are today's insurance policies keeping pace with the times, and in particular with increasingly popular peer-to-peer car sharing?

 

Liz Fong-Jones joined car sharing program Relay Rides because her car was sitting parked most of the time. An environmentally-minded M.I.T student and one-time Google employee, she saw that by renting it out, she could maximize the car’s use and potentially lessen the number of cars on the road. What she didn’t see was that she was about to become the subject of a debate about insurance and liability in the sharing economy. The man who rented Fong-Jones’s car was found at fault in an accident in which he was killed and four people in the other car were seriously injured. Insurance claims may exceed Relay Rides’ million dollar policy.

Commercial use of a personal vehicle is generally not covered by basic auto insurance and in most places, companies reserve the right to cancel or non-renew customers who rent their vehicles out. California, Washington and Oregon have all passed legislation that specifically prohibits insurance companies from canceling insurance policies and takes liability off of car owners who are car sharing. In states where no legislation has been passed, liability enters a grey area if insurance doesn’t cover car sharing and a claim exceeds the car sharing company’s insurance.

Shelby Clark, CEO and chief community officer of Relay Rides feels that an accident in a car sharing vehicle would be treated like an accident in any other vehicle; that liability would rest on who was at fault. In such a case, when damages exceed coverage, one of two things happens: there’s a settlement for the insurance limit or else they go after the person at fault’s estate, which may result in the claim going unpaid or the creation of a payment plan.

Using a personal car for commercial purposes is nothing new in the insurance world. Pizza delivery businesses and real estate agents do it all the time – the individuals or businesses simply add additional coverage to their policy. What is new is the idea of people renting out their vehicles. Insurance companies don’t prohibit you from renting your vehicle, they just don’t cover it, and they reserve the right to cancel or non-renew insurance policies if a personal vehicle is being rented out.

There is no independent data being collected on this right now, but according to Clark, insurance companies are not canceling or non-renewing the policies of customers who rent their personal cars out.

“People are already using their cars for commercial purposes and they’re not canceling insurance policies, mainly because it’s for a risk that they don’t cover,” he says. “Why would you turn away paying customers over a risk that you don’t have exposure to? An insurance company has the right to cancel your insurance policy if you rent out your car, but we think it’s very unlikely that that would happen.”

[…] the insurance arrangement for peer-to-peer car sharing in the U.S. could be much better.

“I think other countries are doing an awesome job working with insurance companies and offering insurance in a much better way than in America,” says Kohli. “In Australia and Europe, [the car sharing companies] are the ones who are providing insurance on behalf of the insurance company. If the car owner wants to rent their vehicle out, they have to buy insurance from the car sharing company on behalf of the insurance company, at a higher price. This way,” he continues, “the insurance companies are more liable to participate because now they’re getting all these cars shifting to their company and they’re getting the higher cost. That is a really good model.”

 

Read the rest on Shareable.

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


From Boston to Paris: peer-to-peer car sharing

The best "social and local" application at the 2012 App Awards? It is the "front door" to Buzzcar, a distributed car-sharing system founded in Paris in 2011 by the American Robin Chase. Where SETI@home, the pioneer of distributed-computing applications, harnessed the unused processor cycles and computing capacity of participating machines, Buzzcar proposes to put community members' cars to use at the times when they are not being used.

Buzzcar's central idea, like that of traditional car-sharing services, is to let its members rent cars by the hour or by the day, and thus pay according to use. The system's resources, however, are not cars bought by the company specifically to be rented out to clients, but the clients' own cars, made available whenever their owners choose. The originality of the system would lie in a combination of environmental factors (no new vehicles added to the road, no CO2 emitted in production, no additional parking spaces), improved choice (more variety, accessibility and proximity become possible if enough cars and users join the programme) and, above all, user control (there is no need to wait for a car-rental company to set up in one's town; the car-sharing service can be created by registering one's car with Buzzcar and inviting friends to join the community).
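As a rough illustration of the principle, and emphatically not of Buzzcar's actual platform or API, the following Python sketch shows the minimal logic of such a system: owners list the time windows in which their car sits idle, and a rental request is matched to the closest car whose window covers it. Every name, field and figure below is invented.

```python
from dataclasses import dataclass
from datetime import datetime

# Purely hypothetical sketch of the matching idea behind peer-to-peer car sharing:
# members declare when their car sits idle, and a request is matched to the nearest
# car whose idle window covers the requested period. Not Buzzcar's code or API.

@dataclass
class ListedCar:
    owner: str
    position: tuple          # simplified (x, y) grid position in the city
    idle_from: datetime
    idle_until: datetime
    hourly_rate: float

def match_request(cars, where, start, end):
    """Return the closest listed car available for the whole requested window."""
    def distance(pos):
        return abs(pos[0] - where[0]) + abs(pos[1] - where[1])

    available = [c for c in cars if c.idle_from <= start and c.idle_until >= end]
    return min(available, key=lambda c: distance(c.position), default=None)

cars = [
    ListedCar("alice", (2, 3), datetime(2012, 6, 1, 8), datetime(2012, 6, 1, 20), 6.0),
    ListedCar("bruno", (1, 1), datetime(2012, 6, 1, 12), datetime(2012, 6, 1, 18), 5.5),
]
# A renter near (1, 2) needs a car from 13:00 to 16:00: both cars qualify,
# and bruno's is closer.
print(match_request(cars, (1, 2), datetime(2012, 6, 1, 13), datetime(2012, 6, 1, 16)))
```

The sketch only captures the "pay per use of idle resources" logic described above; everything that makes the real service work (payment, insurance, keys, reputation) sits on top of it.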

Why France? Robin Chase, already CEO of the ride-sharing startup GoLoco, based in Cambridge, Massachusetts, considers the European market much more "ready" for radical changes in the transport system than the United States. Buzzcar is meant to combine the American entrepreneur's previous ventures: Zipcar, a by-the-hour rental system for cars distributed across many stations around a city, and GoLoco, a service in which people pay to be driven in other cars of the network, whose drivers receive monetary compensation. Buzzcar is a start-up built on the use of some users' "idle cars" by other users, on the basis of a similar system of payment and compensation. Robin Chase describes her service as follows:

“It’s peer-to-peer car-sharing. We’re leveraging excess capacity of individuals and giving them a platform for participation,”

In her view, the world was not yet ready to take full advantage of car sharing, but much has happened in recent years to make it more feasible (and profitable). Smartphone technology makes any booking operation, including that of a car, possible and almost instantaneous. Robin Chase has therefore paid special attention to how easy the Buzzcar application is to download and use, so as to minimize barriers to entry, whether real or perceived, for users. The rise of social networks and Web services has fostered a culture of sharing and of information exchange, not only among friends but also among people who do not know one another. While both trends raise questions about users' privacy, distributed car sharing is undeniably taking off thanks to this double dynamic, as shown by the development of Buzzcar's American "cousin" RelayRides and of many other similar projects.

The European market, according to Robin Chase, is far more favourable than the United States to these new developments in car sharing, for reasons that have to do, first and foremost, with users' motivation. She notes:

“For car owners, the market is people who are looking to save some money on their car […] The cost of owning and operating a car in Europe is maybe 20 percent higher than in the U.S. So therefore I have more motivated owners. If we look at the drivers, the market is people who can be car independent, very much like Zipcar. Out of any 100 people, there’s a higher percentage of those kind of people in France than there are in the U.S.”

The most striking evidence of this culture of distribution and sharing would be the Vélib' system and its "successor" for cars, Autolib'. Robin Chase concludes:

“I want Buzzcar to be here and be part of that ongoing national conversation […] The idea of Buzzcar is the power of many. The company will provide a network of different kinds of cars in different locations to be used by different people in a wide variety of ways. And allowing people to get the car they want, where and when they want it, and to pay only when they use it, will change the way people think about cars.”

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University


Freedom Box: the building block of an architecture of freedom?

The sixth re:publica conference ends today in Berlin, and alternative, decentralized social networks are in the spotlight. They featured prominently in the appeal launched yesterday by Eben Moglen, professor of law and history at Columbia University: now is the time, he said, not to back away from the most pressing issues raised by social networks on today's Internet, namely surveillance, attacks on freedom of thought, and the erosion of individuals' privacy.

According to Moglen, social networks like Facebook, search engines like Google and online shopping malls like Amazon are in the process of "consuming" their users. Governments of all sizes are eager to reuse the profiling and predictions produced by these services in order to target certain activities, and are adopting laws that would allow them to store such data indefinitely. Our mobile devices, in particular, are permanently in "24/7 confession" mode.

"The Stasi would have very little to do this time around; Mr Zuckerberg is doing the work for it," Moglen stressed, adding that Facebook is a huge data store containing the thoughts and behaviours of a billion people, or direct means of reconstructing them. Moglen's speech did not spare Steve Jobs either: the recently deceased Apple "guru" was described as a "moral monster" with a deep aversion to sharing in all its forms.

Eben Moglen declared that there is an increasingly urgent need for unsurveilled media, for free software, and for networks not controlled by network operators. Without these possibilities, he said, freedom of thought will be lost for good. The American professor's appeal was particularly well received at re:publica, which year after year is establishing itself as the most important conference on the "alternative" Internet in Europe.

The re:publica conference is the latest venue where Moglen has presented his current project, Freedom Box, the prototype of a system for disintermediating communications over the Internet, which should allow users to get in touch with one another directly. The Freedom Box Foundation plans to release a beta version soon and, by the end of 2012, to market the product: a small metal box resembling an old modem, connected on one side to the telephone socket and on the other to the computer.

The philosophy underlying the Freedom Box project is fairly simple: Facebook has captured the energy of our social desires and convinced us to accept an excessive compromise, creating a structure that gives free access to the Web and to one's human contacts in exchange for free, continuous "spying". This does not mean that Facebook should be illegal, but that those who know how to create and develop must provide a "technical remedy" to the situation. According to Moglen, we are able to do so, and Facebook will soon be obsolete. "There is no reason why the architecture of a social network should include such an invasion of privacy. In fact, the hardware and software needed to build a network in which people keep direct control of their own information, without intermediaries, already exist. We just have to build a better system."
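To give a very rough sense of what "without intermediaries" means technically, here is a minimal Python sketch, which is in no way FreedomBox's code, of two peers exchanging a message over a direct connection, with no third-party server in the path to store or inspect it.

```python
import socket
import threading
import time

# Minimal, purely illustrative sketch of disintermediated messaging: two machines
# exchange a message directly, with no central server storing or inspecting it.
# Not FreedomBox code; the real project layers encryption, identity and many
# services on top of ideas of this kind.

def listen(port=9999):
    """Accept a single incoming message from a peer and print it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", port))
        server.listen(1)
        connection, peer = server.accept()
        with connection:
            print(peer, connection.recv(4096).decode())

def send(peer_host, message, port=9999):
    """Open a direct connection to the peer and deliver the message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect((peer_host, port))
        client.sendall(message.encode())

# Local demonstration: one thread plays the role of the remote peer.
receiver = threading.Thread(target=listen)
receiver.start()
time.sleep(0.5)                      # give the listener time to bind
send("127.0.0.1", "hello, with no intermediary in between")
receiver.join()
```

The contrast with the centralized model described above is the whole point: the message goes from one endpoint to the other, and there is simply no third party positioned to collect it.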

Francesca Musiani

Postdoctoral researcher, MINES ParisTech; Yahoo! Fellow in Residence, Georgetown University
