A whispered voice, a closed door, a sealed envelope – these are all familiar, everyday techniques and technologies by means of which people routinely seek to maintain their secrets and preserve a measure of personal privacy.
Cryptography, the art of making ciphers and codes, provides an additional array of powerful techniques to accomplish the same purposes under different circumstances. Cryptographic techniques attempt to protect information from unwanted scrutiny by transforming or encrypting it into an otherwise unintelligible form called a cipher-text. This cipher-text ideally cannot be deciphered back into an intelligible plain-text without the use of a key to which only the intended recipients of the information have access. The use of ever more powerful computers to construct and apply encryption algorithms and keys has made the effort to discern the original plain-text from an encrypted cipher-text without recourse to its proper key incomparably more difficult than it has been historically. This kind of code-breaking is called cryptanalysis. Cryptology is a more general term encompassing both cryptography and cryptanalysis.
There are two basic kinds of encryption scheme in contemporary cryptography, symmetric and asymmetric systems. In classic symmetric encryption or secret-key cryptography, messages are enciphered and deciphered by recourse to a secret key available to all (but only) the relevant parties to a transaction. Such systems are called symmetrical simply because the processes of scrambling text into cipher-text and of descrambling cipher-text back into plain-text both require access to exactly the same information. The obvious difficulty with such symmetric systems is their reliance on a secret key that cannot always itself be distributed with ease or comparable security. This dilemma constituted in fact one of the definitive quandaries of cryptography for centuries, but it was overcome in a series of breakthroughs in relatively recent history. The result is called asymmetric or public-key cryptography.
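To make the symmetric idea concrete before turning to the asymmetric case, here is a minimal toy sketch in Python (my own illustration, not any particular historical or deployed cipher): one and the same secret key does both the scrambling and the descrambling.

```python
import secrets

# Toy illustration of symmetric (secret-key) encryption: the SAME key that
# scrambles the plain-text into cipher-text also descrambles it again.
# This bare-bones XOR scheme (in effect a one-time pad) is shown only to make
# the symmetry visible; real systems use vetted ciphers such as AES.

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))                   # the shared secret key

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))   # scramble
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))  # descramble with the same key

assert recovered == plaintext
```

Anyone holding the key can recover the message; anyone without it sees only noise. The whole difficulty, as noted above, lies in getting that key to the right parties securely in the first place.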
Public key encryption, as we know it, had been devised by 1976 by Whitfield Diffie (of whom Simon Singh writes: “In hindsight, he was the first cypherpunk”), Martin Hellman, and Ralph Merkle. Asymmetric encryption schemes require not one but two keys: a public or published key available to everyone, as well as a secret key known, as usual, only by deliberately chosen individuals, and often known only by a single person and never revealed to anyone else at all. These two keys stand in a unique mathematical relation to one another: once a text has been scrambled into cipher-text by means of the public key, it can be descrambled back into intelligible plain-text only by means of the private or secret key associated with it. With this breakthrough the dilemma of insecure key distribution was solved, and it became possible even for parties whose identities remain secrets kept from one another to communicate and conduct transactions in a way that is likewise perfectly secret from everyone but themselves.
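The relation between the two keys can be sketched with deliberately tiny, insecure numbers. What follows is not the Diffie-Hellman-Merkle construction itself but a textbook-scale Python illustration of the kind of public/private key pair described above; the primes and the message are made up solely for demonstration.

```python
# Textbook-scale sketch of a public/private key pair (far too small to be
# secure; all numbers here are chosen only for illustration).

p, q = 61, 53                 # two small secret primes
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent: the public key is (n, e)
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+): the private key is (n, d)

message = 42                  # a message encoded as a number smaller than n

ciphertext = pow(message, e, n)      # anyone may scramble with the public key
recovered  = pow(ciphertext, d, n)   # only the private key descrambles it

assert recovered == message
```

Publishing (n, e) thus lets anyone send a message that only the keeper of d can read, while recovering d from the public key alone would require factoring n, which becomes computationally prohibitive once the primes are realistically large.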
Public-key encryption systems soon also demonstrated useful and unexpected applications that had not hitherto been associated with traditional cryptography at all. New forms of authentication for otherwise anonymous transactions were suddenly possible and soon implemented. Digital time-stamping of documents and the creation of reliable “digital signatures” for otherwise anonymous or pseudonymous participants in various kinds of transactions were among the authenticating applications that especially excited the interest of the Cypherpunks.
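A digital signature of this kind simply runs such a key pair in the other direction: the private key produces a tag over a document’s digest, and anyone holding the public key can check it. The sketch below reuses the toy numbers from the previous illustration and is, again, only a demonstration of the principle.

```python
import hashlib

# Toy digital signature with the tiny key pair from the previous sketch
# (illustrative only): the private exponent d signs, the public exponent e verifies.

n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)                 # only the private-key holder can produce this

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest    # anyone with the public key can check

sig = sign(b"Statement published under a pseudonym.")
print(verify(b"Statement published under a pseudonym.", sig))   # True: the signature holds
print(verify(b"A tampered statement.", sig))                    # almost surely False in this toy
```

A pseudonym that consistently signs its statements in this way can build a verifiable reputation without ever disclosing the person behind it, which is precisely the sort of possibility that excited the Cypherpunks.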
These applications can facilitate the ongoing protection of anonymous sources of information and whistleblowers, for example, or the otherwise difficult authentication of censored texts or politically dangerous reportage, or provide for the secure, ongoing pseudonymous disputation of experts on controversial subjects over online networks. They also make it possible to re-create the kind of anonymity that roughly prevails when one makes a purchase with cash – a kind of anonymity that more or less evaporates once one shifts to purchasing primarily by means of conventional credit or debit cards, or to shopping online or over the phone. Needless to say, regaining this kind of purchasing privacy isn’t appealing only to those who want to engage in illegal economic activity. It is easy enough to understand the desire for anonymity in making a mildly or even only potentially embarrassing purchase, for example, or the desire to elude a torrent of unsolicited targeted advertising.
It remains mysterious just why the arrival of even these powerful new cryptographic applications would inspire in anybody the sense of an impending transformation of society in the image of crypto anarchy, however. And it is interesting to note that, according to Simon Singh, there is an alternative to the conventional history of the development of public-key cryptography itself, one which suggests lessons that may cut somewhat against the grain of the Cypherpunk imaginary. “Over the past twenty years,” writes Singh in The Code Book, “Diffie, Hellman, and Merkle have become world famous as the cryptographers who invented public-key cryptography, while [Ronald] Rivest, [Adi] Shamir, and [Leonard] Adleman have been credited with developing RSA [– the acronym derives from their initials –], the most beautiful [and, as it happens, influential] implementation of public-key cryptography.” Contrary to this canonical history, however, Singh points out that “[b]y 1975, James Ellis, Clifford Cocks, and Malcolm Williamson had discovered all the fundamental aspects of public-key cryptography, yet they had to remain silent [since their work was undertaken under the auspices of the British Government and was classified top-secret].”
Singh insists, exactly rightly, that “[a]lthough G[overnment] C[ommunications] H[ead]Q[uarters] were the first to discover public-key cryptography, this should not diminish the achievements of the academics who rediscovered it.” But his next point is the more provocative one, to my mind. “It was the academics who were the first to realize the potential of public-key encryption, and it was they who drove its implementation. Furthermore, it is quite possible that GCHQ would never have revealed their work” at all. For me, then, this appendix to the story of the invention of public-key encryption provides an example of how an open research culture grasped the significance of a discovery and implemented it incomparably more effectively than a closed and controlled, secretive culture managed to do – even when the subject of that research and of its practical implementation was a matter of the technical facilitation of secrecy itself.
“For a long time,” writes James Boyle in his essay “Foucault in Cyberspace: Surveillance, Sovereignty, and Hard-Wired Censors,” “the internet’s enthusiasts… believed that it would be largely immune from state regulation.... forestalled by the technology of the medium, the geographical distribution of its users and the nature of its content.” Boyle characterizes “[t]his tripartite immunity” as “a kind of Internet Holy Trinity, faith in [which] was a condition of acceptance into the community [of enthusiasts].” Boyle proposes that these “beliefs about the state’s supposed inability to regulate the Internet” stand in an indicative relationship to “a set of political and legal assumptions that [he calls] the jurisprudence of digital libertarianism.” Certainly all three premises of this Internet Trinity exert their force over the imagination of Tim May and the version of digital libertarianism embodied in his crypto anarchy.
But when Boyle alludes to the “technology of the medium” here, he is not referring to the computer-facilitated cryptographic transformation of network communications, but to the more general and ubiquitous protocols and technologies that constitute the internet as such. It is useful to dwell on these more general assumptions about the internet, because they constitute the wider technical and cultural context from which May’s crypto-anarchic case emerges; examining them therefore puts us in a better position to understand how some of the Cypherpunks’ otherwise rather improbably apocalyptic conclusions might acquire a compelling veneer of plausibility, especially for many of the Internet’s early partisans and participants.
“The Internet was originally designed to survive a nuclear war,” notes Boyle in “Foucault in Cyberspace.” “[I]ts distributed architecture and its technique of packet switching were built around the problem of getting messages delivered despite blockages, holes and malfunctions.”
“Imagine the poor censor faced with such a system,” he continues. “There is no central exchange to seize and hold; messages actively ‘seek out’ alternative routes so that even if one path is blocked another may open up.” All this amounts to a “civil libertarian's dream.” This is especially so because “[t]he Net offers obvious advantages to the countries, research communities, cultures and companies that use it, but it is extremely hard to control the amount and type of information available; access is like a tap that only has two settings – ‘off’ and ‘full.’” He concludes the point: “For the Net's devotees, most of whom embrace some variety of libertarianism, the Net's structural resistance to censorship – or any externally imposed selectivity – is ‘not a bug but a feature.’”
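The property Boyle describes can be modeled in miniature: treat the network as a small graph of relays (the names and topology below are invented for the example) and ask whether a message can still find some path once a given relay is blocked.

```python
from collections import deque

# Toy model of "routing around blockages" on a made-up four-node network.
# Delivery succeeds so long as ANY path survives; blocking one relay merely
# pushes traffic onto another route.

network = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def route(source, destination, blocked=frozenset()):
    """Breadth-first search for any surviving path that avoids blocked relays."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in network[path[-1]] - set(blocked) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None                                 # every route is severed

print(route("A", "D"))                      # a path exists, e.g. ['A', 'B', 'D']
print(route("A", "D", blocked={"B"}))       # traffic reroutes: ['A', 'C', 'D']
print(route("A", "D", blocked={"B", "C"}))  # None: only total severance blocks delivery
```

Only when every relay on every route is cut does delivery fail, which is just the “off” and “full” character of access that Boyle’s censor confronts.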
Although it is true that digital networks have managed, to an important and appealing extent, to flummox and resist efforts to regulate their content, it is also true, as Boyle himself is the first to insist, that this capacity for resistance is the consequence of particular decisions by coders, engineers, policy-makers, and many others, any number of which could have been decided otherwise with considerably different consequences, and which remain to this day more susceptible to alteration than the “civil libertarian’s dream” recited above would seem to credit. The specificity and ongoing contingency of these decisions belie any technological determinism that would destine or commend as an “ought” the architecture of a given social order from the “is” of just which tools happen to prevail for a time here or there.
The “Internet” is, in its somewhat enigmatic classical definition, a network of networks. And so, for example, writes Lawrence Lessig, “[t]he Internet is not the telephone network,” though certainly “it,” or at any rate some of it, “sometimes [runs] on the telephone lines.” Similarly, the internet is not a cable network, nor a wireless network, though it just as surely partakes of these. Different regulations and architectural protocols govern the many networks that the Network networks. The ongoing implementation of the internet is at once private, public, corporate, governmental, and academic.
What people talk about when they talk about the “Internet” tends to consist of expectations they have formed for familiar machines, or of experiences such machines have facilitated for them.
And so, the internet is very likely not the “thing” you think it is. For one thing, whatever you take it to be, the internet is, to be sure, already transforming into something else. Consider, for example, how breathtakingly different the experience of the internet is from one generation to the next: from surfing the web on a desktop in 1993, to moblogging via cell in 2003, to immersion in ubicomp environments in 2013 (I am assuming that the omnipresence of these jazz-riffs of kooky jargon will continue to provide one of the few constants among the successive generations of network access and interactivity). What difference does it make that in one generation of its life the internet stood in a literally definitive relation to the Department of Defense, and in another generation more to Amazon.com? What difference does it make that in one generation of its life a majority of those who access the internet are primary speakers of English, but that in another generation only a minority are? What difference will it make when more non-sentient appliances than human beings routinely access the internet (cars constantly reporting the state of their maintenance to their manufacturer, refrigerators reporting the state of their contents to the grocery store, factories reporting emissions to state regulators, etc.)?