The internet is a child with many fathers. It is an extremely complex multi-module technology and each module—from communication protocols to browsers—has a convoluted history. The internet’s earliest roots lie in the rise of cybernetics during the 1950s. Later breakthroughs included the invention of packet switching in the 1960s, a novel way of transmitting data by breaking it into chunks. Various university and government networks began to appear in the early 1970s, and were interlinked in the 1980s. The first browsers came online in the early 1990s—20 years ago this August.
Many seemingly unrelated developments in the computer industry played an important role. The idea of personalised, decentralised and playful computing was being advanced by the likes of Apple and Microsoft in the 1970s. In contrast, IBM’s idea of computing was of an expensive, centralised and institutional activity. If this latter view had prevailed, the internet might never have developed beyond email, which would probably have been limited to academics and investment bankers. That your mobile phone moonlights as a computer is not the result of inevitable technological trends, but the outcome of a deeply ideological and now almost forgotten struggle between two different visions of computing.
Much of the credit for the technical advances of the internet goes to individuals such as Vint Cerf, creator of the first inter-network protocol, which helped to unify the numerous pre-internet networks; David D Clark, who helped to theorise the “end-to-end” principle, the precursor to the modern concept of “net neutrality”; and Tim Berners-Lee, who invented the world wide web.
But studying the history of the internet is impossible without studying the ideas, biases, and desires of its early cheerleaders, a group distinct from the engineers. This group included Stewart Brand, Kevin Kelly, John Perry Barlow, and the crowd that coalesced around Wired magazine after its launch in 1993. They were male, California-based, and had fond memories of the tumultuous hedonism of the 1960s.
These men emphasised the importance of community and shared experiences; they viewed humans as essentially good, influenced by rational deliberation, and tending towards co-operation. Anti-Hobbesian at heart, they viewed the state and its institutions as an obstacle to be overcome—and what better way to transcend them than via cyberspace? Their values had profound effects on the mechanics of the internet, not all of them positive. The proliferation of spam and cybercrime is, in part, the consequence of their failure to predict what might happen as a result of the internet’s open infrastructure. The first spam message dates back to 1978; now, 85 per cent of all email traffic in the world is spam.
Perhaps the cheerleaders’ greatest achievement was in wresting dominance of the internet from the founding engineers, whose mentality was that of the Cold War. These researchers greatly depended on the largesse of the US department of defence and its nervous anticipation of a nuclear exchange with the Soviet Union. The idea of the “virtual community”—the antithesis of Cold War paranoia—was popularised by the writer and thinker Howard Rheingold. The term arose from his experiences with Well.com, an early precursor to Facebook.
But this cyber-boosterism was not without a serious side. Figures such as Nicholas Negroponte, co-founder of the MIT Media Laboratory and the spiritual leader of the “One Laptop per Child” movement, Bill Gates of Microsoft, and Esther Dyson, the commentator and entrepreneur, helped to assure the public that the internet was not just a hangout for Bay Area hippies—it was also a serious place for doing business. And as the cyber-pundits kept promising, it was also a place for “getting empowered,” an attitude that made it a good fit for the broader neoliberal agenda of the 1990s.
This empowerment was supposed to come through the removal of intermediaries. Mainstream media outlets were to be replaced by bulletin boards, e-zines and later by forums and blogs. Elected representatives were to be replaced by “electronic townhalls” and direct online voting. This political aspiration even had its own founding document. Back in 1996, John Perry Barlow, a former Grateful Dead lyricist and one of the founders of the Electronic Frontier Foundation, penned the famous A Declaration of the Independence of Cyberspace. Barlow hoped that the nation state would leave cyberspace alone. (French President Nicolas Sarkozy’s recent pledge to “civilise the internet” suggests that some nation states didn’t get Barlow’s memo.)
Overall, this vision of a world without intermediaries satisfied the communitarian former hippies and the libertarian anti-system cyber-pundits. They both wanted the internet to “flatten” the world, by which they meant level things out—make things fairer. (This was a decade before the author Thomas Friedman stumbled on the same metaphor and wrote his book The World is Flat, on the consequences of globalisation.) That former hippies found themselves dining with venture capitalists only seemed to confirm the great bridging potential of the internet. The ex-hippies genuinely believed that all their utopian blueprints could be executed with the help of private capital.
Why the venture capitalists found the internet so appealing is a mystery: the market for online advertising at the time was tiny and the number of internet users negligible. In 1995, there were only 15m users, according to the website Internet World Stats. Start-ups were everywhere, but most were trading in promises of a bright future, not real services. The investors’ disregard for traditional methods of gauging financial performance—which eventually led to the dotcom bubble—suggests that their judgement was clouded by a toxic combination of rhetoric from the internet’s New Age cheerleaders and neoliberal promises of new ways of doing commerce. Pets.com, which sold pet products to retail customers, is a textbook example. At one point, the website was spending close to $12m on advertising on revenues of $619,000. In 2000, the company collapsed in a heap of debt.
If there was one site that seemed to validate the ethos of the early pioneers—that people are good and, under the right conditions, will co-operate in the name of shared goals—it was Wikipedia. It is also one of the few sites that defied the for-profit model typical of internet start-ups. Wikipedia refuses to show ads or pay contributors. Instead, the site depends on donations from users and grants from foundations. Wikipedia is a painful reminder of what the web could have been had the early vision of the internet as a shared, communal space not been co-opted by big business.
Most internet enterprises had to build their business around advertising, which meant being subject to the trends of that industry—the most important of which is personalisation. Online ads are tailored to the interests of a given user. The more the website knows about a user, the more effective its advertising pitch. A clear picture of a user’s interests will also allow a website to tailor its content. Data from Google News shows that users who see a page with news that was collated on the basis of their previous activity end up clicking on more stories.
The logical end of this ever-increasing personalisation is each user having his or her own online experience. This is a far cry from the early vision of the internet as a communal space. Instead of the internet, we may as well start talking of a billion “internets”—one for each user. Even the browser, the last bastion of shared experience, is on the way out, replaced by a panoply of apps for mobile phones and tablets such as the iPad that each provide a customised experience. This seems a clear deviation from the original plan.
It is not the only deviation. For many internet users, empowerment was an illusion. They may think they enjoy free access to cool services, but in reality, they are paying for that access with their privacy. Much of our information-sharing seems trivial—should we really care that some company knows what music we like? But, once this information is analysed alongside data from other similar services, it can generate insights about individuals and groups that are deeply interesting to most marketers and intelligence agencies. Based on its extensive data-mining across the web, RapLeaf, a San Francisco start-up, concluded that Google’s engineers tend to eat more junk food than Microsoft’s.
If they can find out what you eat, they can find out what you read as well; from there, it’s not so hard to predict your political preferences—and manipulate you. We are careening towards a future where privacy becomes a very expensive commodity. There are already several start-ups providing privacy “at a fee.” Ironically, venture capitalists love these companies, and are busily funding solutions to the very problems they have previously helped to create.
The removal of online material is also a booming industry. For a fee that ranges from $3,000 to $15,000, a company such as Reputation.com can ensure that any sensitive information is buried deep in the last pages of Google’s search results, or disappears from the internet altogether. That company rose to prominence after it removed from the internet hundreds of photos of a Californian teenager who died in a car crash, at the request of the victim’s family. This, too, creates new kinds of inequalities: the maintenance of online reputation is dependent on the ability to pay. At this point, the law can intervene, as in Finland, for example, where employers are banned from Googling the names of prospective employees. In Germany too, companies cannot check a potential employee’s social networking sites; but it is unlikely that such measures would take off in countries with weaker employment protection laws.
While we are being empowered as consumers, we are simultaneously being disempowered as citizens, something that the cyber-libertarian digital prophets didn’t foresee. “Electronic town halls” never took off either. When Barack Obama tried to hold one shortly after being elected president, the most popular question posed to him concerned the legalisation of marijuana. The internet does not and cannot replace politics—it augments and amplifies it. The Tea Party in the US does not limit its activism to social media, but uses it as part of a broader political campaign. Politics is still primary and technology secondary.
However, one set of intermediaries may well be on the decline—print media—which has been quickly jettisoned by the younger generation. Search engines and social networking sites hold as much power today as newspapers and radio stations did three decades ago. The fact that they prefer to disguise their editorial practices in the form of nominally objective algorithms doesn’t make them any less political and influential.
Perhaps the mismatch between digital ideals and reality can be ascribed to the naivety of the technology pundits. But the real problem was that the internet’s early visionaries never translated their aspirations for a shared cyberspace into a set of concrete principles on which online regulation could be constructed. It’s as if they wanted to build an exemplary city on a hill, but never bothered to spell out how to keep it exemplary once it started growing.
Some fundamental questions about the communal aspects of the internet were sidestepped. Who would take out the trash—that is, deal with spamming and scamming? Who would be in charge of preserving historical memorabilia: the ephemeral tweets and blog posts that tend to disappear into the digital void? Who would deal with the problem of pollution—insidious practices such as “search engine optimisation,” or content farms that produce trivial content to earn advertising revenue? Who would protect the dignity of online citizens? Who would secure their privacy and protect them from defamation and libel?
These issues were perhaps not so pressing or evident in a decade when search engines were rudimentary and tweets and blogs didn’t exist. But it’s not so obvious that John Perry Barlow’s call on governments to exit cyberspace was a good one. In the absence of strong public institutions with oversight, corporations felt they could do what they wanted. In most cases, they just pretended these problems didn’t exist.
In the early days of the era of Web 2.0—the second-generation websites which had dynamic, shareable content or were social networks—it seemed that many such problems were imaginary: who needs to preserve tweets and blog posts if they can be easily found online? Now, with well-known services such as Digg, Flickr, and Delicious going through rough times, it’s not a given that your data is safe with them, for they might go under. There is always Google, which keeps a copy of most things—but then, one day it may go under too.
What the internet badly needed in its first two decades of existence, and what it needs still, is a book akin to Jane Jacobs’s 1961 The Death and Life of Great American Cities, which attacked the practices and attitudes of 1950s US urban planners and proved hugely influential. The structure of online space requires a similar critique.
The founding fathers of the internet had laudable instincts: the utopian vision of the internet as a shared space to maximise communal welfare is a good template to work from. But they got co-opted by big money, and became trapped in the self-empowerment discourse that was just an ideological ruse to conceal the interests of big companies and minimise government intervention.
The current state of affairs is not irreversible. We still have some privacy left and internet companies can still be swayed by smart regulation. But we need to stop thinking of the internet as a marketplace first and a public forum second. What is long overdue is a fundamental reconsideration of the primacy of the internet’s civic and aesthetic dimensions. It’s time to decide whether we want the internet to look like a private mall or a public square.