Archive for March, 2011

The Google Switch and +1: Search vs Social, Google vs Facebook

Today, according to the press releases now multiplying like bunnies across 'online news sites' and major newspapers of record around the world, Google added a social layer to its search functions. Its new "+1" function is the online media behemoth's response to Facebook's ubiquitous "Like" function, although Google has been 'going social' for a while (e.g. Orkut, YouTube, Blogger).

Why does this matter anyway? For one, it gives us insight into two key functional characteristics of the Internet — search and social. It also sheds light on Google itself, a subject that has gained increasing attention from academics such as Siva Vaidhyanathan (The Googlization of Everything), the media-savvy journalist Ken Auletta (Googled) and staunch 'new digital media' sage and Google defender Jeff Jarvis (What Would Google Do?). Google is big business, and what it does matters.

Google is increasingly competing with Facebook for advertising revenue and Internet users' time and attention. Indeed, a shrinking number of digital media giants are battling for control of the time that people spend online. This is especially important in Canada because Canadians, according to Comscore's 2010 Canada Digital Year in Review, are the heaviest Internet users in the world, spending, on average, 43.5 hours per month online — greater than the 33 or 36 hours spent online per person in the U.S. and Korea, respectively (see p. 6).

In some ways, +1 is just another addition to Google's ballooning suite of functions: search, gmail, Google Books, Blogger, Docs, browsers (Chrome), video (YouTube), operating systems (Android). The aim is to grab more of users' time and to put more and more of the Internet 'in the cloud'. In this case, it is Google's cloud.

More services provide more reasons for people to stick around with Google, rather than just adopting the typical 'search and run' mode. Moving more services off the desktop and into the cloud also keeps people connected more often, in more places, and for longer periods of time overall. This has largely worked from Google's point of view.

This is why Google dominates online advertising markets and search — accounting for between 75 and 95 percent of all searches — in every country except Russia, China, Taiwan, Japan and Korea. The power of search as a general utility, and Google's role as the leading provider of this utility, is also growing in lockstep with the rapid growth in smartphones and the mobile Internet, as a recent Goldman Sachs presentation shows. Overall, search dominates social in the 'mobile Internet', and Google's grip on search functionality is growing across the board.

There are a host of things that we should rightfully be concerned about as Google dominates Internet search functionality and as things migrate from the devices and desktops in our own homes to someone else's cloud (read my earlier post on 'social media and memory ownership'). Siva Vaidhyanathan has recently provided an extensive argument, in The Googlization of Everything, for why we should care. Some of his arguments are very thought-provoking. They go well beyond just the issue being discussed here. Here is a video clip of him responding to the question of whether Google is a monopoly.

I don't find Vaidhyanathan's account quite as good as he seems to think it is, or quite as deserving of the praise showered upon it by some reviewers. Some have trashed it, as the libertarian technophiles at the Technology Liberation Front did, but I definitely don't think that their dismissal is right either.

Google is shaping the architecture of the Internet and the digitization of the media industries across the board. To think otherwise is to have one’s head buried in the sand. To think through why it’s important, however, is another matter.

Google so far has provided primarily utilitarian functions: search, scan, link, store. However, it's not these functions, but rather the 'social web' and social networking sites — Facebook in particular — that are growing fast. Time spent on social networking sites overtook email in late-2007, according to the Goldman Sachs presentation referred to earlier. Will search be the next 'killer app' to fall?

The drift from search to social means that Google is increasingly competing with entities like Facebook — for users, for capital investment, for advertising revenues. In terms of the number of unique Internet page views per month, the yawning chasm that once stood between Google and Facebook has steadily closed (see here).

It is from within this context that we can understand why Google's vast ambitions to colonize every nook and cranny of cyberspace/network media space now include adding "a thicker social layer" to its offerings. Always count on the tech-heads and marketing mavens to come up with a good line to summarize what's going on. Here's Dave Karnstedt, CEO of Efficient Frontier, in a March 30, 2011 Advertising Age article: "Injecting a social layer into the algorithmic search is key to relevance." Translation: friends and family are key to Google's bottom line.

Google's ambitions have for some time been seen as competing with traditional media and forcing new methods on the music, television, film and news industries. Now, competing with Facebook for the new coin of the realm — user attention — can be added to the list. There are several important dimensions to this.

The shifting balance between search and social needs to be seen in the context of competition for advertising revenues. Internet advertising has grown explosively, from roughly $16b globally in 1998 to $66.2b in 2010. While online advertising has indeed grown very fast, we need to bear in mind several things.

First, recent trends will not continue forever because, yes, like the 'real world', advertising spending online is subject to the 'normal laws' of capitalism.

Second, Internet advertising is tiny in comparison to the Internet access market, the pipes and ISPs that run the infrastructure that gets content from one place to another: worldwide, the Internet access market in 2010 was worth $247.5b, or roughly four times the size of the online advertising market. Online advertising is smaller still next to the global television market, which was worth $351.3 billion in 2010.

Third, like the rest of the media economy, online advertising is highly susceptible to swings in the macroeconomy. Like almost every other sector except movies, Internet advertising fell in 2008 and 2009 amidst the global financial crisis.

Fourth, attention online has become more and more concentrated. According to figures cited by Wired, the top 10 websites in the US accounted for 31% of all page views in 2001. By 2006, the number had grown to 40 percent. In 2010, the top 10 accounted for roughly three quarters of all page views.

Online advertising is also very concentrated. With revenues of roughly $30 billion in 2010, 97 percent of which came from advertising, Google dominates global online advertising, accounting for 44% of the total $66.2 billion in online advertising revenue in 2010.
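As a rough back-of-the-envelope check (a minimal sketch in Python, using only the figures cited above), these numbers hang together:

```python
# Back-of-the-envelope check on the 2010 figures cited above (all $ billions).
online_ads = 66.2        # global online advertising revenue
internet_access = 247.5  # global Internet access market
television = 351.3       # global television market
google_share = 0.44      # Google's cited share of online ad revenue

print(f"Access market vs online ads: {internet_access / online_ads:.1f}x")  # ~3.7x, i.e. 'four times'
print(f"TV market vs online ads: {television / online_ads:.1f}x")           # ~5.3x
print(f"Google's implied ad revenue: ${google_share * online_ads:.1f}b")    # ~$29.1b, in line with 'roughly $30 billion'
```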

In other words, while the galaxy of websites, blogs, information sources, uses, etc. continues to grow, Google and Facebook are becoming bigger constellations within the overall Internet universe. Growing concentration is sharpening the struggle between search and social, or between Google and Facebook — at least for the time being.

Google's attempts to insert itself at the cross-roads of the emerging network media ecology have also upset many interests in the news business, book publishers and authors, as well as those in the television, film and music businesses. Many of these groups are pushing hard to leverage Google's dominant position in search as a major chokepoint for intercepting illicit downloads (see my earlier Rogues, Pirates and Bandits and Goliath vs. Goliath posts).

Cleavages between Google and the traditional media industries also mean that its attempt to launch Google TV has met with mixed results. The fact that it has ruffled so many feathers is not unconnected to this.

The significant tensions between Google and other elements of the traditional media also play into the competition between Google and Facebook. This point was underscored last week when Time Warner chose Facebook to distribute the latest Batman film, The Dark Knight. For $3 a shot, Facebook subscribers can now stream The Dark Knight from the SNS. Does this mean that Facebook and other SNS will become a significant new distribution channel for traditional media?

With its own ambitions for Google TV already having a hard time getting off the ground, Google now faces the prospect of competing with Facebook for a role in the online television and movie distribution business, alongside AppleTV, Netflix and the incumbent media conglomerates' own 'over-the-top' offerings such as Hulu. Add to this that search is a utility, while social networks function as the electronic watercooler of the digital network media, and we can see why Google is scrambling to make the shift from search to social.

Popular entertainment has always relied heavily on massive marketing and word of mouth, with the latter point being given ‘scientific heft’ by classic studies in communication by Lazarsfeld & Katz in the late-1940s and early-1950s. Today, the links in the two-step flow that they identified in ‘small town America’ have been digitized and commoditized writ large.

As Christian Fuchs, Mark Andrejevic, Elizabeth van Couvering, and a few others show, search and social functions are fundamentally intertwined in the production of the audience commodity and the organization of audience attention. They are also crucial to organizing the vast quantities of user created content (UCC) that underpins the digital media economy.

Together, these functions are crucial to the digital media economy and the sky high market capitalization of entities such as Google and Facebook. However, with social already putting the Internet’s first ‘killer app’ — email — in a downward spin, we can ask: is search (and Google) next? Or, will these two functions themselves converge?

The Canadian Pay-Per Internet Model — Update

This is a quick note summarizing a few recent and ongoing developments over Usage Based Billing in Canada, or what I have called the transition from the Open, User-Centric Internet to the Pay-Per Internet Model.

As I've indicated in previous posts, the CRTC triggered a firestorm of protest with its now infamous UBB decision of January 25, 2011. Among other things, in quick order, the decision spawned an online 'stop-the-meter' campaign by OpenMedia.ca that soon garnered nearly half-a-million signatures, mostly from Canadians who seem to have mistakenly believed that the metered Internet was about to be imposed in Canada for the first time. A House of Commons Standing Committee on Industry, Science and Technology was also called in early February to look into the matter.

The CRTC offered some hope that it might turn the tide when it stepped in on February 8 to announce that it would revisit the matter. Within a month, however, any hope for a far-ranging review was dashed. The focus, the CRTC declared, would not be on the steps that it and the telecom and cable companies had implemented steadily, even if stealthily, over the past decade, and with much added momentum in the past five years; steps that have led to the near universal adoption of the pay-per Internet model in Canada.

According to the CRTC, the Internet access market in Canada is competitive and just fine. Its review will be strictly limited to the wholesale markets that small ISPs depend upon for survival and the January 25th UBB decision. For Internet users, this meant that perhaps 5 percent might be affected; for the other 95 percent, this arcane process would be irrelevant.

This past Monday was the deadline for those wanting to participate in the upcoming hearings to file their interventions. Bell seemed to steal the show with its proposal to withdraw the UBB model for wholesale Internet access services that got us to this place to begin with. Instead, it would offer a new model, one that it called the Aggregate Volume Pricing Model, or AVP for short.

Michael Geist and Iain Marlow in the Globe & Mail have already offered good reviews of the new idea. Geist has also published a new comparative international study on Internet user fees and bandwidth that shows, among other things, that Canada is pretty much alone "in the world where virtually all providers utilize some form of UBB" (Geist on UBB). As I've indicated in earlier posts, Bell led the way by adopting bandwidth caps and so-called excess user fees in late-2006, and the rest of the 'big six' — Rogers, Shaw, Telus, Videotron and Cogeco — quickly followed suit. As Geist shows, some providers in some other countries have adopted similar measures, but nowhere else are such practices the norm.

Here are the key things to note about Bell's new proposal to replace the wholesale UBB model with what it calls the AVP model:

First, it does nothing to change the pricing or use of the pay-per model and bandwidth caps for 95 percent of Internet users in Canada. In other words, so-called excess usage charges of between $2 and $5 per GB remain intact for the overwhelming majority of Canadians.

Second, for the small ISPs (Teksavvy, Primus, etc.) that serve the other 5 percent and rely on Bell and the other big cable and telecom companies for 'last mile' access services, it does mark an advance. It reduces the rates for gateway access services to $200 per terabyte (TB), or about $0.20 per GB. It also implements these charges at an 'aggregate level' rather than on a per-user model, allowing smaller ISPs some room to carve out their own business models, i.e. unlimited plans, preset caps plus excess usage charges, etc.

Third, it retains the 'excess usage charge', but at roughly $0.30 per GB this is much lower than the prices indicated above and in previous proposals. The idea that bandwidth is pooled allows the small ISPs to decide for themselves how to deal with 'too much' Internet use.
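To make the arithmetic concrete, here is a minimal sketch comparing the proposed wholesale rate with the retail overage charges cited above (the monthly usage figure is a purely hypothetical example, not a number from Bell's filing):

```python
# Compare Bell's proposed wholesale AVP rate with the retail 'excess usage'
# charges cited above. The usage figure is a hypothetical example only.
avp_per_tb = 200.0                   # proposed gateway access rate, $ per TB
avp_per_gb = avp_per_tb / 1000       # = $0.20 per GB
retail_low, retail_high = 2.0, 5.0   # retail overage charges, $ per GB

usage_gb = 40  # hypothetical: a household 40 GB over its cap in a month
print(f"Wholesale (AVP) cost: ${usage_gb * avp_per_gb:.2f}")  # $8.00
print(f"Retail overage cost: ${usage_gb * retail_low:.0f} to ${usage_gb * retail_high:.0f}")  # $80 to $200
print(f"Retail markup over wholesale: {retail_low / avp_per_gb:.0f}x to {retail_high / avp_per_gb:.0f}x")  # 10x to 25x
```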

Overall, the proposal offers something. It is a reversal of sorts, but one that applies to a very small segment of Internet users. Together with the CRTC's refusal to take a comprehensive look at the issues, Bell's proposal delivers a double blow to those who want a wide-ranging review of the measures and practices that have steadily tilted the open, user-centric model toward the pay-per, provider-controlled model in Canada.

The insistence on tightly focused remedial action by Bell and the regulator should also remind us that there are two basic problems standing in the way of a more open Internet: first, a profound lack of competition and highly concentrated media, telecom and Internet access markets in Canada relative to historical and global standards; and, second, a compliant regulator that has sanctioned this state of affairs almost every step of the way.

4 Phases of Internet Development: From the Open to the Contested Internet

I’ve just come across what looks like a very interesting article by John Palfrey, a Harvard Law School Professor. You can find the article here.

Here’s the basic gist of the article, in his words:

The four phases of Internet regulation are the “open Internet” period, from the network’s formation through about 2000; “access denied,” through about 2005; “access controlled,” through the present day (2010); and “access contested,” the phase into which we are entering.

The paper draws on a decade of interdisciplinary work conducted by members of the Open Net Initiative, a group of researchers who, I have long thought, have been doing some of the best work on the topic: the Citizen Lab at the Munk Centre, University of Toronto (Prof. Ron Deibert, principal investigator), the SecDev Group (Rafal Rohozinski), and the Berkman Center (Palfrey and Jonathan Zittrain).

The Political Economies of Media: New Book by Winseck and Jin

Well, here's a little bit of shameless self-promotion: the front cover of a new co-edited collection that I've put together with Dal Yong Jin, an assistant professor at the School of Communication, Simon Fraser University in Vancouver, Canada, as well as the College of Culture and Technology, Korea Advanced Institute of Science and Technology (KAIST).

The book is called The Political Economies of Media and will be published in June by Bloomsbury Academic — the academic publishing arm of the same company behind the Harry Potter series. I think the cover looks great. The authors who have contributed to this volume are exceptional as well: Bernard Miege, Susan Christopherson, Terry Flew, Amelia Arsenault, Guillermo Mastrini, Martín Becerra, Dwayne Winseck, Elizabeth van Couvering, Dal Yong Jin, Christian Fuchs, Aeron Davis, Peter Thompson, Marc-Andre Pigeon.

You can read sample chapters by myself, Aeron Davis and Christian Fuchs here.

Secret Surveillance and Hereditary Kings: Putting a Check on Unlimited Network Surveillance

On Monday (March 21), the Second Circuit Court of Appeals in New York reinstated a lawsuit by civil liberties and human rights groups, journalists, media organizations, labour unions and others who argue that Internet, telephone and other electronic communication surveillance in the U.S. violates constitutionally protected rights to privacy and freedom of expression. The gist of the case is that the groups do have standing even though they are unable to prove whether or not their communications are actually under surveillance.

The case is a continuation of running attempts over the past five years to rein in claims that the President has unchecked powers to authorize the National Security Agency (NSA) to spy on the electronic communications of Americans. The process was first brought into the light of day in December 2005 by New York Times reporters James Risen and Eric Lichtblau. Even then, however, Risen and Lichtblau's coverage had been held back for a year because of the NYT's deference to Bush Administration assertions that publication threatened national security (see the mea culpa by NYT public editor Byron Calame, Jan. 1, 2006).

Despite being found to run afoul of existing law and the Constitution (see below), nobody ever put a stake through the heart of the Bush Administration's illegal warrantless surveillance program. Instead, it has been continued by the Obama Administration and given a retroactive legal footing with the 2008 Foreign Intelligence Surveillance Amendments Act. Consequently, the electronic surveillance of the communications of Americans making international phone calls and using the Internet to correspond with others outside the country is likely still alive and well, complete with secret data rooms and dedicated network connections linking all of the major U.S. telecom companies' main switching centres to the NSA.

For those interested in a fuller treatment of the issues involved up until late 2007, I published an article in the International Communication Gazette in 2008.  You can find it here.

In its original form, the NSA's warrantless electronic surveillance programme was authorized by President Bush on the claim that wartime presidents have virtually unlimited powers to do whatever it takes to prosecute a war. And we must remember that the Bush Administration used 9/11 to unleash a global war on terror that knows no set limits, either in terms of how long it will last or where it will take place. Putting the two together — unbound powers of wartime presidents and war without end — the Bush Administration claimed that it could do as it pleased, including authorizing electronic surveillance outside the normal process of judicial review, established by law, before the Foreign Intelligence Surveillance Court.

Sometime shortly after 9/11, the NSA began tapping into the telecom networks and switching hubs of AT&T, Verizon and most other big US telecoms firms (except, to its credit, Qwest) to eavesdrop on telephone, email and Internet communications between people in the US and elsewhere in the world. The program targeted up to 500 people at any one time, and thousands overall, in a bid to monitor the electronic communications of people suspected of having ties to Al-Qaeda and other terrorist groups, and thus to pre-empt terrorist plots.

The two major cases dealing with these issues — Hepting v. AT&T and ACLU v. NSA — are replete with sections of the government's case 'blacked out' on account of unspecified claims of national security. The cases also take on a Kafkaesque tone with the Government's claims that it was impossible to proceed with the cases at all because doing so would reveal the existence of 'state secrets'. And without being able to discuss the matters, well, the people involved couldn't prove anything.

Over and against the administration stood those representing journalists, academics, writers and lawyers who argued that they had been illegally caught up in the electronic drag-net because of their work involving Muslims living abroad. The president lacked authority, they stated, under the Authorization for Use of Military Force (AUMF), the Constitution or any law to create the secret programme. Carolyn Jewel, a writer of futuristic action and romance novels, claimed that the surveillance programme made it impossible for her to talk 'openly about Islam or US foreign policy' in emails to a Muslim individual in Indonesia and that she could no longer use the Internet as part of her research.

In the ACLU v. NSA case, Judge Anna Diggs Taylor was blunt in her decision: the surveillance program was illegal and unconstitutional. She further argued that the claims before the court were not speculative and general, but ‘distinct, palpable, and substantial’ (ACLU et al. v. NSA et al., 2006: 22). The activities, she stated, crippled plaintiffs’ ‘ability to report the news and … to effectively represent their clients’ (ACLU et al. v. NSA et al., 2006: 20).

In exceptionally strong language, she disparaged Bush's claims that his authority stemmed from the 'inherent powers' clause of the Constitution or the Authorization for Use of Military Force — a law hastily passed within days of 9/11 (ACLU et al. v. NSA et al., 2006: 33–41). To these claims of unfettered authority, Taylor sharply retorted: 'There are no hereditary Kings in America' (ACLU et al. v. NSA et al., 2006: 40).

The administration withdrew for the next six months, but in January 2007 it announced that the surveillance project would continue, though only after warrants were obtained according to the rules of the Foreign Intelligence Surveillance Act and the Foreign Intelligence Surveillance Court. In other words, the Bush Administration would follow the law.

Even that, however, was not enough. On July 10, 2008, the Foreign Intelligence Surveillance Act was changed to, essentially, make legal what was previously illegal. Just as importantly, the new law granted telecoms companies such as AT&T, Verizon and Sprint immunity from prosecution, whether for their activities in the past or in the future. In other words, U.S. telecoms companies got a free pass despite the fact that they were, by court decision, acting in concert with the government in ways that were beyond the pale of either the Constitution or the law.

The decision on March 21, 2011 by the New York Second Circuit Court of Appeals is the next phase in this process. In many ways it is a rehash of issues that have already played out in the past, but with the crucial distinction that the ACLU and the others involved now have the new Foreign Intelligence Surveillance Act in their sights. If successful, the sections of the Act granting the Executive extensive powers to authorize surveillance, and allowing such activities to be conducted outside of formal processes of judicial review, could fall on the grounds that they are unconstitutional.

One of the travesties of the current case is that the Obama Administration has simply carried through with the precedents set by Bush. This is another major blemish on the Obama Administration's original claims to establish some clear blue water between itself and its predecessor.

Thus, in the current case, many of the same players are involved, with the Executive, NSA and telecoms companies lined up on one side against journalists, media organizations, minority (read: Muslim) groups, and civil rights groups on the other. And again, the former claim that to even discuss the matter would be to reveal 'State Secrets' — a catch-all manoeuvre that seeks to stop things dead in their tracks before they even get started by ruling that any kind of discussion of the matter is, simply, off-limits because of the wide-ranging powers of the President that are in dispute.

And similar, too, are comments by journalists such as Naomi Klein and media organizations such as The Nation — the leftish magazine that has been around since 1865 and is the oldest weekly magazine in the U.S. — that the spectre of unbound surveillance has a 'chilling effect' on free speech and freedom of the press.

As Naomi Klein stated in the Globe & Mail piece today, “The issue is that we think that the activities that we do could fall under these broad definitions”. When asked whether she herself was the target of such surveillance, Klein responded, “I have no idea whether they are or they aren’t”.

And that's the point: the extraordinary powers and secrecy granted to 'wartime presidents' make it impossible to penetrate the veil of 'State Secrets' and to know just where one stands. As a result, speech is chilled, the free press is trumped by the unchecked powers of the State, and privacy is turned into a poor shadow of itself.

The decision on Monday by the New York Appeals Court is to be applauded. As the decision to go ahead with this legal challenge states, those pressing the case do not have to show that they are actually under surveillance, because given the broad claims of the national security agencies and the President this would be impossible to prove. It is enough, as the court states, that "allowing the executive branch sweeping and virtually unregulated authority to monitor the international communications . . . of law-abiding U.S. citizens and residents" appears, at least on the surface, to be an affront to the constitutional protections of free speech, the free press and privacy, as well as to the restraints that aim to prevent presidents, whether Bush or Obama, from acting like, to use Judge Anna Diggs Taylor's words, "hereditary kings".

This is a topic that Canadians also need to examine, because our own Prime Minister Harper often appears to have torn a page from the Bush Administration's playbook, setting himself up as an authoritative leader. As a wartime Prime Minister, just what kind of electronic network surveillance has he authorized in Canada? And to what extent have the telecoms companies gone along with it?

From WWI onwards, the fact that the trans-Atlantic cables linking not just Canada but also the U.S. to Europe and the rest of the world have run to and from Nova Scotia and Newfoundland has made them an integral part of the Euro-American surveillance system. It is unlikely that this has ceased to be the case today, although someone needs to take up the challenge of doing the digging to find out.

CRTC Just Says No

Last week I asked a few contacts at the CRTC if they could get me invited to the Industry-Regulator Conflab this coming Thursday (March 24). You know, the behind-closed-doors, by-invitation-only meeting organized by the CRTC to talk with the industry about how it should deal with evolving media markets and the major players with a stake in the game. The digital media universe is all topsy-turvy and the regulator is on a search for what to do.

Sounds interesting. I know some people, experts, are coming in from NY to talk to the hand-picked few allowed to attend. Luckily, OpenMedia.ca was able to wrangle an invite too. Certain officials, notably Industry Minister Clement, appear to have been particularly solicitous towards the organization of late, with its 500,000-signature 'stop the meter' petition opening the Minister's eyes — even if opportunistically and for the briefest of moments.

For me, no such luck. Can't go, no room, said my contacts.

The industry folk will be there too, mostly from the telecoms and broadcasting sectors, but there'll no doubt be some 'new kids on the block' there as well: Netflix, the Google guys, Apple TV. They've been playing a more significant role in telecom and media affairs lately (see, for example, interventions by Google and Apple TV). They've also been picking off a few people from regulatory bodies for their own teams. The CRTC, for its part, is juicing its own ranks with a few people pilfered from private consultancies.

I’ll speak more on this in a day or two, preferably before the meeting itself is held.  But here’s a copy of the ‘invitational document’ sent by the CRTC to participants.  CRTCRegForum_2011

The point that I want to close with now, however, is that up until 1976 in Canada, the primary criterion for inclusion in a regulatory proceeding was that one had to have a business or economic stake in the game. That criterion, however, was thrown out as an affront to democracy and the public interest, just as it had been ten years earlier in the United States. Today it is back.

Last week the CRTC made it clear that on key issues such as Usage Based Billing, it is not willing to open up the long path that got us to bandwidth caps and the pay-per Internet model. Nope, it’ll be a narrow review of the January 25th UBB decision. Note, too, the conspicuous silence of Tony Clement on this issue. After grandstanding during the initial swell of outrage and throwing a few bones to the crowd and burnishing some egos, he’s gone silent.  After all, letting the market rip leads to just such an outcome.

This Thursday’s conflab will not change the current direction of events. That is unfortunate.

Cassandras and Copyright: Creative Destruction and Digital Media Industries

A new study released yesterday on peer-to-peer content sharing and copyright in the United Kingdom, Creative Destruction and Copyright Protection, provides a further challenge to those who claim that strong new measures are needed to make sure that swapping digital content online does not damage the bottom line of the media and entertainment industries. The study was co-authored by London School of Economics and Political Science Professors Bart Cammaerts and Bingchun Meng.

It is part of several steps being taken in the U.K. that challenge last year's hastily passed Digital Economy Act. The bill became law after only two hours of debate in the House of Commons and is a real gift to the media and entertainment industries and the various lobby groups that represent them: e.g. the International Federation of the Phonographic Industry (IFPI), its British counterpart the British Phonographic Industry (BPI), the Recording Industry Association of America (RIAA), the Motion Picture Association (MPA), and so on.

Among other things, the Act turns Internet Service Providers into agents of the media and entertainment industries. Upon notification, ISPs must send a warning notice to suspected copyright infringers and if that does not work they can be directed by the Secretary of State to disconnect the offending user.

As the IFPI noted in its latest Digital Music Report, it has been pushing for such measures around the world for the past couple of years. Indeed, this push supersedes the earlier emphasis in the decade on DRM (digital rights management) technologies. The IFPI has chalked up several 'wins' for this approach in the UK, France, Sweden, South Korea, Taiwan, and a few other countries (see pp. 25-27).

Two of the biggest ISPs in the UK — BT and TalkTalk — have not taken these requirements lying down. They have launched a legal challenge, to be heard this week by the UK High Court of Justice, on the grounds that the Digital Economy Act's requirements amount to overkill.

Cammaerts and Meng are clear that P2P technologies should be encouraged rather than discouraged. The Digital Economy Act, in contrast, stifles innovation and attempts to shore up faltering traditional business models. The message of this report, in other words, is that governments should not be in the 'business model' protection racket. However, as I have written in earlier posts, that they are in just such a business is also evident in Canada, where Usage Based Billing is clearly linked with attempts to protect the cable and telephone companies' forays into the online video business by hamstringing would-be rivals such as Netflix, Apple TV, even YouTube.

In contrast to the current approach, the authors and various people interviewed for the study suggest a significantly different path. As one of the report's authors, Bart Cammaerts, states:

“The music industry and artists should innovate and actively reconnect with their sharing fans rather than treat them as criminals. They should acknowledge that there are also other reasons for its relative decline beyond the sharing of copyright protected content, not least the rising costs of live performances and other leisure services to the detriment of leisure goods. Alternative sources of income generation for artists should be considered instead of actively monitoring the online behaviour of UK citizens.”

Early in the report, they also quote Ed O'Brien of the band Radiohead, who had the following to say:

"We disagree with the industry on what should be done with the persistent file-sharers. The industry has said we will suspend their internet accounts. But you can't just do that, it isn't possible and neither feasible. The kind of technical measures that are required to implement this get you into dodgy areas such as civil liberties, tracker software and the second thing is that it costs a lot of money to do this, and even if you do it, you are going to drive a lot of people underground into darknets. Our problem is how do you differentiate between a serial infringer and someone who does it in the spirit of discovery" (Ed O'Brien from Radiohead on BBC, 22/09/2009).

My only real criticism of this report is that the authors take the IFPI's data on the drastic decline in the sale of recorded music at face value, but attempt to offset it by pointing to changing patterns of music consumption, falling disposable household income and the rise of online digital platforms. Their points are well-taken.

Indeed, income levels in western capitalist democracies, including Canada, have largely stagnated for the past 30 years, while wealth has concentrated at the top. To this, we can also add the decline in 'leisure time' over the same period, as the historical tendency for the workday to shorten was reversed, resulting in people spending greater and greater amounts of time at work. It doesn't take a genius to understand that less time and money erodes media consumption.

Such trends run exactly counter to the massive rise in both income and 'leisure time' that gave rise to the media and entertainment industries between 1870 and 1945, as Gerben Bakker exhaustively illustrates in his 2009 book Entertainment Industrialized.

These points are indeed important, but I would add another that I think is even more important: namely, that taking into account all sources of income, the music industry has not contracted, but expanded greatly since the late-1990s, precisely alongside the massive popularization of the Internet. In order to understand that, we need to focus not just on the sale of 'recorded music' and 'online revenues', but also on publishing royalties and, crucially, live entertainment. When we do that, as I showed in another post last week, the music industries have expanded greatly.

Here's the data showing, first, the drastic decline in the sale of recorded music, followed by the full picture:

Figure 1: Worldwide 'Recorded Music Industry' Revenues, 1998 – 2010 (US$ Mill.)

Source: PWC (2010; 2009; 2003), Global Entertainment and Media Outlook.

Clearly, just on the basis of recorded music sales, the music industry is in dire shape indeed. However, things look decidedly different once we take a look at the full picture, as the following figure does.

Figure 2: Worldwide 'Total Music Industry' Revenues, 1998 – 2010 (US$ Mill.)

Sources: PWC (2010; 2009; 2003), Global Entertainment and Media Outlook; IDATE (2009), DigiWorld Yearbook.

The top line shows the picture: a sharp increase in total revenues. Against declining revenues for recorded music, each of the other segments (Internet/mobile, publishing and concerts) has risen considerably. Cammaerts and Meng do an excellent job showing the rise of digital revenues.

Rogues, Pirates and Bandwidth Bandits

Yesterday was yet another day in which the struggle over copyright seemed to be going on at a feverish pitch.

In the U.S., hearings before the House of Representatives Committee on the Judiciary's Subcommittee on Intellectual Property, Competition, and the Internet provocatively pitted Internet investment and commerce against pirates and parasites. Daniel Castro from the supposedly 'non-partisan' Information Technology and Innovation Foundation (ITIF) tried to set the tone by describing "the impact of parasitic websites" as "an economic leech on the Internet economy".

Castro set out the costs to various industries, and they were, if he’s correct, staggering:

  • the U.S. motion picture, sound recording, business software, and entertainment software/video game industries lost an estimated $20 billion in 2005 due to piracy;
  • the U.S. recording industry and related industries lost over $5 billion altogether, with 12,000 jobs lost in the sound recording industry alone, according to estimates by the music industry trade group, the International Federation of the Phonographic Industry (IFPI);
  • the U.S. motion picture industry, by one estimate, lost $6.1 billion to piracy, which resulted in either the elimination or prevention of 46,597 jobs in the film industry.

This is indeed dire stuff (if true). Dire stuff also requires drastic measures. Here are some of the drastic measures Castro put on his wish-list:

  • cooperation between the federal government and business to identify “rogue” sites around the world;
  • require ISPs to combat piracy by blocking websites that offer pirated content;
  • encourage bandwidth and usage caps that discourage online piracy;
  • require search engines to remove links to websites that facilitate piracy;
  • require advertisers and financial intermediaries (e.g. PayPal, Visa, Mastercard, etc.) to stop doing business with 'illegal websites';
  • further private/government cooperation around development, promotion and adoption of anti-piracy technology, including ‘deep packet inspection’ (DPI) by ISPs.

This is essentially a recipe to impose a lockdown on digitally networked media. It makes a mockery of the separation between state and media demanded by 'free press' traditions. But rather than government nefariously interfering with the media, in this scenario the state is called on to act as the tool of the media industries. Proposals to seize the domain names of rogue sites, cut them off from ISPs and payments, and so on threaten to balkanize the Internet further as nation-states assert their 'sovereign authority' over whatever slice of cyberspace they deem necessary to pursue 'rogue pirates' (in the US and elsewhere) or to suppress dissident voices and free access to information elsewhere (Egypt, China, Iran, etc.).

For Canadians, the emphasis on putting ISPs in the role of gate-keepers and promoting the use of UBB and bandwidth caps to thwart would-be bandwidth bandits adds another layer to the ongoing debate over these issues in Canada.

But what about these claims of dire losses? They are mostly a product of cherry-picking data to support foregone conclusions. As my post earlier today showed, worldwide box office revenues for the movie industry are up, not down, from roughly $25 billion to $32 billion over the past five years. And that's just the half of it, with total worldwide film revenues from all sources up from about $46.5 billion in 1998 to $87.4 billion last year.

Rather than signalling an industry under assault, as Castro and others would like us to believe, the vast expansion of the film industry is not surprising given the massive growth in global media markets generally, particularly in China, Brazil, Russia, and India.

This is also not surprising given the vast number of new media channels and distribution platforms. Note the huge difference between total revenues and box office revenues alone, i.e. $87.4 billion versus $32 billion. That $55 billion gap between the two is the space occupied by new media technologies. These are basically new media markets.

DVDs and the corner video shop may be going the way of the Dodo bird, but cable and satellite channels have doubled, according to the OECD, from 600 to 1,200 channels worldwide over the past decade. Add to this pay-per-view, video-on-demand, streaming Internet video (Hulu, Daily Motion, YouTube, etc.) as well as digital download and subscription services (Apple iTunes, Netflix, the BBC's iPlayer, mobile smartphones, etc.), and the vast expansion of the global media economy comes clearly into view.

Let's look at the music industry. Sure, if we take a tiny slice, say just 'recorded music industry revenues', and let it stand for the whole, then things look bad indeed. Just how bad is shown in the figure below:

Figure 1: Worldwide ‘Recorded Music Industry’ Revenues, 1998 – 2010 (US$ Mill.)

Source: PWC (2010; 2009; 2003), Global Entertainment and Media Outlook

Seen from just this angle, things are bad. The sale of "recorded music" (i.e. CDs, vinyl, cassettes, etc.) has plunged by nearly half since 2004. These are the figures that Castro and his preferred source, the International Federation of the Phonographic Industry (IFPI), point to in order to paint their 'sky is falling' scenario. It is also the backbone of their efforts to push through an egregious revamping of digital media in the service of the 'traditional media', a set of efforts that would likely never see the light of day were it not for the superficial persuasiveness of the case made.

The problem with the case, however, is that it takes the worst-performing part of the entire music business and lets it stand for the whole. A decidedly different view emerges once we take off the blinkers that Castro, the IFPI, the MPAA, etc. would like us to wear and look at the whole picture. The whole picture includes not just 'recorded music', but concerts, publishing and copyright revenues, and the Internet and mobile phones.

When we do that, here’s what things look like:

Figure 2: Worldwide ‘Total Music Industry’ Revenues, 1998 – 2010 (US$ Mill.)

Sources: PWC (2010; 2009; 2003), Global Entertainment and Media Outlook; IDATE (2009), DigiWorld Yearbook.

The fact of the matter is that these trends are similar across almost all of the media industries: television, film, music, radio, magazines, book publishing, Internet access and Internet advertising, with a partial and heavily qualified exception for newspapers.

The media, as I have said repeatedly before, are not in crisis.  Thinking otherwise only gives the likes of Castro and the lobbying groups of the traditional media a blank cheque to push an agenda that ought to be stopped dead in its tracks.

Thankfully, there are other sources who see things from a broader point of view. Thus, over and against Castro, take a look at the much more interesting presentation given yesterday before the same committee in Washington by David Sohn of the Center for Democracy and Technology. Or take a look at the paper released last month by the Research Institute of Economy, Trade and Industry in Japan. Looking at the impact of file-sharing and YouTube on the sale and rental of Japanese animated television programs, the author concluded that:

  • YouTube "does not negatively affect DVD rentals" and appears to "help raise DVD sales";
  • "file sharing negatively affects DVD rentals, [but] it does not affect DVD sales";
  • YouTube's effect of boosting DVD sales can be seen after the TV broadcast of the series has concluded (the 'electronic water-cooler' effect);
  • YouTube can be interpreted as a promotion tool for DVD sales.

Repeat after me: the sky is not falling; new media are not bad media; and we must be careful, because the doomsayers, more often than not, would like nothing more than to throttle the hell out of digital media. That would be dangerous not just for the media economy and technology, but for democracy, for how we socialize and communicate with one another, and for an open and creative culture overall.

Movies and Money

Here’s a little something to think about in terms of the ‘old’ vs. ‘new’ media: the movie business is putting more bums in seats than ever.

If there was ever a case where an old medium was going to be decimated by the new, you might think that one that emerged in the 1890s would be a good candidate for extinction. However, as one of my mentors and teachers, Janet Wasko, once said, each new audio-visual medium has typically opened up a new market for the major Hollywood studios and other film distributors.

This was a lesson she’d drawn from her research in the 1970s and 1980s and which she told me and my classmates about in the early 1990s.  Hmm, maybe everything has changed since then because of digitization and the rise of the Internet?

Not really.  A couple of things illustrate the point.

First, take a look at the numbers from the Motion Picture Association of America (MPAA), the trade group representing the 'big six' Hollywood majors and their affiliates: Time Warner, Disney, Universal, Paramount, Fox and Sony. In its most recent report on the subject, the MPAA shows that worldwide box office revenues were at an all-time high in 2010, at $31.8 billion (USD).

Source: MPAA (2011). Theatrical Market Statistics.

The fact that box office revenues climbed from $26.3 billion to just under $32 billion between 2007 and 2010 amidst the global financial crisis and ensuing economic downturn is also impressive, and shows the resilience of the movie business in the face of economic hard times.

And this is only half the matter (actually a little less than half) once we open our eyes wider to look at all revenue sources for the film industry, including pay-per-view TV, cable and satellite channels, video rentals, rapidly declining DVD sales, and fast-rising new areas such as online subscriptions and digital downloads. Doing that, it is clear that the movie business is doing even better than the box office numbers suggest, with revenues rising sharply on a worldwide basis from $46.5 billion in 1998 to $87.4 billion in 2010. Table 1 below shows the trend.

Table 1: Worldwide Film Industry Revenues, 1998 – 2010 (US$ Millions)

          1998     2000     2004     2008     2009    2010*   Change %
Film    46,484   52,803   82,834   82,619   85,137   87,385       +88%

Sources: PriceWaterhouseCoopers (2010; 2009; 2003), Global Entertainment and Media Outlook.
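As a quick sanity check on the table's 'Change %' column, here is a minimal sketch using only the two endpoints shown above:

```python
# Verify Table 1's growth figure from its 1998 and 2010 endpoints (US$ millions).
film_1998 = 46_484
film_2010 = 87_385

growth = (film_2010 - film_1998) / film_1998
print(f"Worldwide film revenue growth, 1998-2010: {growth:.0%}")  # ~88%, matching the table
```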

The only thing declining is the number of films produced per year by the majors, as they go for big-budget blockbusters backed by massive marketing campaigns to keep two of the scarce resources of the media economy — time and attention — concentrated on their wares. Table 2 below shows the following trends: a declining number of blockbusters produced by MPAA members, a rising number of independently produced films, and a greater number of films overall.

Source: MPAA (2011). Theatrical Market Statistics.

So, the next time you hear about the movie industry (or any other media sector, for that matter) falling on hard times because of digitization, the Internet, piracy, and so on and so forth, think about these trends. They are important because the same interests that would like us to think that the sky is falling are using these mistaken impressions to push for changes to copyright laws and a clamp-down on Internet Service Providers that wouldn't otherwise have a hope in hell of succeeding.


Review Essay: Network Nation — Inventing American Telecommunications

Here is a forthcoming review essay that will appear shortly in the journal Business History. The book that inspired the review is: John, Richard R. (2010). Network Nation: Inventing American Telecommunications. Cambridge, MA: Belknap Press of Harvard University Press, i-viii, 520pp.

Network Nation is an important book by one of the most highly-regarded communication and media historians in the U.S., Richard R. John. It is probably the most substantive and innovative book to come out on the telegraph and telephone in all their business, political and cultural aspects in years. In the following essay, I review the book and place it within the scholarly literature on the topic, while critically examining some of its key arguments.

Situating the development of the telegraph and telephone into a longer history than usual, this outstanding book takes its point of departure not from the advent of the telegraph in the 1830s or 1840s, but decades earlier with the Post Office Act in 1792, which John dubs “one of the most far-reaching pieces of legislation enacted in the early republic” (p. 18). The book finishes with the consolidation of the ‘regulated natural monopoly’ regime for telecommunications and market segmentation between the telegraph, telephone and radio in the 1920s, a situation that stayed remarkably stable for most of the rest of the century.

John uses the Post Office Act of 1792 to set the scene because its capacious and open-ended mandate gave the government-owned Post Office a huge role in facilitating the flow of 'intelligence' and cultivating republican democracy. In its wake, the Post Office became the first national 'medium' to bring correspondence and news to "every man's door" (p. 20). Its exchange system allowed publishers to swap newspapers and magazines across the country free of charge. It also established the notion that an integrated network operated under unified administrative control – i.e. a network monopoly – and guided by enlightened civic ideals could be a useful support for a multiplicity of commercial uses as well as a diversity of voices; a sturdy pillar, in other words, of a dynamic market economy, a democratic society and a free press. Most observers, including the early developer of the electric telegraph, Samuel Morse, simply took it for granted that the telegraph would become an arm of the Post Office.

Together, these factors set down several durable principles: first, that networks tended toward monopoly, and that whether that was a good or bad thing depended on how they were managed and regulated; second, that single networks supported a multiplicity of uses, and thus whether a monopoly was managed and regulated poorly or well was crucial to the commercial and social life of the nation; third, and perhaps most surprising in light of prevailing opinion, that the U.S. government can take whatever steps its citizens approve of to improve the media environment. Indeed, the First Amendment, as John stresses more than once, poses few obstacles to doing just that. Network Nation crystallizes these vitally important principles between the late-18th and early-20th centuries, while their relevance in the 21st century is underscored by ongoing debates today over 'network neutrality' and broadband Internet development (Benkler, 2010) and the so-called 'crisis of journalism' (McChesney & Nichols, 2010). Indeed, the latter authors in particular draw very heavily on John's work to make their case for the steps needed to shore up what they see as the deteriorating state of the U.S. press in the age of the Internet.

Despite the wide scope for government intervention established by the history of the Post Office, government ownership never did extend to the telegraph or telephone in the U.S., other than for a brief and deeply unsatisfactory period at the end of the First World War. Why?

According to John, several factors conspired against the idea of the 'postal telegraph'. Initially, lingering memories of failed public works projects in the late-1830s as well as the high expense of the US-Mexico War (1846) were two such factors. A more important and enduring consideration was the popular view that the telegraph was the luxury of a wealthy few, for whom private enterprise would serve just fine, rather than a necessity of the masses requiring public intervention. The same view hung over the development of the telephone until the push for its popularization began to yield fruit after the turn of the 20th century. Last, and most importantly, the broadly held belief that competition and regulation were better than any monopoly, government or private, consistently stood in the way of efforts to extend the Post Office's mandate to include the telegraph or any other means of electrical communication.

John calls this latter set of beliefs the ‘antimonopoly view’ and places a great deal of emphasis on it. He argues that the ‘antimonopoly creed’ should not be seen as either synonymous with, or as being led by populist, agrarian movements in the 1860s or 1870s, or as belonging to the progressive era. Instead, the creed appealed to a disparate cross-section of interests; it was also a mainstay of liberal political economy. According to John, antimonopoly movements had already flourished in cities as well as in state and federal politics for decades before rural populist and labour movements made telegraph and telephone monopolies a target of their criticisms and reform efforts during the 1860s and 1870s (rural populism) or the progressive era (1880-1920), respectively, as most historians claim (e.g. Fischer, 1992; Gabel, 1969; Schiller, 1996).

By the early-1840s, as John observes, "the telegraph business [had] congealed around . . . the [federal] patents that Morse obtained in 1840 and 1846" (p. 43). Indeed, these patents propelled the efforts of two of Morse's business partners, the former Postmaster General Amos Kendall and Maine Congressman Francis O.J. ('Fog') Smith, who set up companies along the U.S. Atlantic coast and into Canada, while licensing individual franchisees to extend the lines in every other direction. Kendall and Smith attempted to cobble these separate lines into a single, albeit loosely federated system in the early-1850s, but failed. Nonetheless, their attempt to create a 'single system' borrowed the vision of a universal integrated network from the Post Office, but also raised the spectre of a private telegraph monopoly, and one buttressed by the Federal Government's strong defense of the Morse patents. Antimonopolists reacted to such a prospect by appealing to state governments to charter general corporations to foster greater competition, with the New York Telegraph Act (1848) emblematic of the trend. A flood of rivals entered the field, but most quickly failed and/or were acquired by Western Union (est. 1855) and the American Telegraph Company (est. 1859), before being folded into a six-member cartel created in 1857 and led by Western Union: the North American Telegraph Association.

Early 'network builders' essentially created a "rich man's mail service" as well as vastly over-capitalized firms that paid handsome dividends to themselves. However, by the late-1860s a new managerial type began to emerge, as typified best by William Orton, who became President of Western Union after his predecessor Hiram Sibley's ouster in 1867. Far more the professional manager than financial speculator, Orton and others like him play a star role in Network Nation. This is because, according to John, they built the early prototypes of the modern corporation, created complex technological systems, embraced modest civic goals, and steadily committed to more research and development – decades before these phenomena usually appear in the economic and business history literature (Berle & Means, 1932/1968; Chandler, 1977).

The rise of the professional manager also marked the potential triumph of expertise over speculative finance, although the pendulum continued to swing back and forth between the two until the 1910s, when the 'regulated natural monopoly' and 'market segmentation' (i.e. the lines drawn between telegraph, telephone and radio) regime was finally locked into place for the next seventy years, with expert managers in charge. Professional managers such as William Orton and, later, Theodore N. Vail also served to fragment the antimonopolist voices because, while these two icons of the modern manager shared the latter's disdain for financial speculators and greedy private monopolies, they believed that well-run monopolies would inevitably be a fixture of late-19th and 20th-century capitalism, and a good thing to boot. As time passed, the earlier experience of the 1840s and 1850s only seemed to confirm such views by demonstrating that the various attempts by individual states to create competition were contrived, and that monopolies in telecommunications were, as it were, natural, and thus in need of competent, uniform regulation, which meant regulation by the federal government. In sum, the initial question of which political economy would prevail — individual states and contrived competition, or a strong federal state and regulated 'natural monopoly'? — gave way by 1910 to a firm answer fully in favour of the latter option. John's sustained focus on how these two distinct political economies played out during the development of the telegraph and telephone is another major contribution of this volume.

Despite the rising class of professional managers, however, Western Union’s ongoing penchant for meddling in elections, political interference in its operations by Congress (e.g. the ‘dragnet subpoenas’ case), and its tight ties to the New York Associated Press continued to stir the antimonopoly forces. Indeed, the only thing worse than the telegraph monopoly, many contemporaries felt, was the ‘double-headed news monopoly’ formed by Western Union and the NYAP (p. 147; also Blondheim, 2004). This situation broadened the range of antimonopoly voices by raising the ire of newspapers and journalists outside the NYAP umbrella, alongside the already existing, and intensifying, calls for reform by other critics. The situation also, as Menahem Blondheim illustrates, captured the attention of Congress between 1866 and 1900, a period that he and John generally see as crucial in the development of the commercial media and media regulation in the US.

Ironically, however, the mounting opposition in the 1870s arguably played into the hands of the era’s most reviled robber baron, culminating in Jay Gould’s hostile take-over of Western Union in 1881. John does not denounce or demonize Gould, but methodically picks apart his seven-year campaign (1874–1881) to gain control of Western Union. Over the course of this time, John stresses, Gould manipulated antimonopoly sentiment, Wall Street, the press, and the prevailing state-centred political economy to compete with Western Union before finally taking it over. Gould already owned New York newspapers of his own (the Tribune, the World, and the Mail and Express) and used them as vehicles to advance his ends; his acquisition of Western Union tightened the company’s ties to the press even further. Crucially, however, as John emphasizes, the long-term rivalry preceding the take-over played out on the frontiers of technological innovation, as Gould and Orton at Western Union threw vast sums of money at the leading technological geniuses of their time – Thomas Edison, Elisha Gray, and Alexander Graham Bell. The results produced several cutting-edge innovations that shaped the communication and entertainment industries into the 20th century: (1) the quadruplex, which multiplied the message-carrying capacity of a single telegraph line; (2) the telephone; and (3) the phonograph.

Historians often point to Orton’s rejection of an offer to buy all of Bell’s telephone patents for $100,000 in 1876 as proof of Western Union’s status as a stodgy monopolist averse to technological progress. John’s remarkable account, however, turns the prevailing wisdom on its head by explaining that Orton initially spurned Bell only because he was backing Elisha Gray and Thomas Edison in a far bigger struggle over the telegraph and telephone industries. Far from shying away from the telephone business, Orton rushed headlong into it. In fact, by late 1879, Western Union’s municipal telephone exchanges were competing head-to-head with, and even growing faster than, those of the National Bell Telephone Company in several major U.S. cities (p. 169).

Competition in telephony, in other words, did not emerge from small, rural outfits after the expiry of Bell’s telephone patents in 1893, as most scholarly accounts would have it (e.g. Fischer, 1992; Gabel, 1969). Instead, it had emerged nearly fifteen years earlier in major U.S. cities – New York, Boston, Chicago (and Montreal in Canada, one might add) – and between the two biggest corporations of their day: Western Union and Bell. Rather than continuing to compete, however, the two companies struck a deal in November 1879 intended to keep Gould at bay. Under its terms, all of Western Union’s telephone patents and municipal exchanges were handed to Bell in return for annual royalties and mutual pledges to stay out of one another’s turf. While John does not say so, the agreement’s reach led to a similar outcome in some of Canada’s larger cities, notably Montreal. By this point, John’s account has completely upset the settled view of Western Union and technological innovation. Equally impressive, it has drawn the telegraph and telephone into a far vaster sweep of history than usual, one that stretches back to the origins of the postal system in 1792, on the one side, and forward to the modern entertainment industries of the late-19th and early-20th centuries, on the other. It is an expansive canvas, indeed, and the author fills it in with masterful, colourful and detailed strokes.

John supports these radical departures from conventional knowledge with a wealth of archival evidence, some of it previously unavailable. He also invites us to see many familiar things from striking new angles: the semi-autonomous Bell operating companies scattered in cities across the country and loosely affiliated with National Bell; the grubby role of crooked politicians who used their power to grant municipal telephone franchises to line their own pockets (most notoriously in Chicago); and the politics of rate caps, which began in the cities in the 1880s (not the countryside, contra Fischer, 1992; Gabel, 1969) but paved the way everywhere for the popularization of telephone service after 1900. By the 1920s, the early principles of ‘government ownership’ and ‘regulated competition’ had been replaced by regulated monopolies and market segmentation between the telegraph, telephone and radio, divisions maintained until the passage of the Telecommunications Act (1996) seventy years later.

While John cuts many new paths, there are several points where he should have given credit where it is due. The concept of market segmentation is a good case in point. Scholars have examined the factors separating different media along technological lines for decades, but none of this work is referenced. Erik Barnouw (1975), for example, long ago discussed the division of radio broadcasting from the telephone, telegraph and electronics industries in the 1920s. Ithiel de Sola Pool (1983) likewise examined the segmentation of the telecommunications, broadcasting, computing and publishing industries over the course of the 20th century before turning to the potential for media (re-)convergence today, as have Robert Babe (1990) and others, including this author. The common point behind these sources and John’s Network Nation is that the separation of media along technological and functional lines is primarily an artifact of corporate strategy, government policies and popular pressures, rather than underlying technological conditions. The point is reasonably well known, but not so well known that at least some of the relevant scholarly literature does not warrant citing.

The concepts of ‘methodless enthusiasm’ and ‘ruinous competition’ that also play a significant role in John’s account were first used in a systematic way, to the best of my knowledge, in Robert L. Thompson’s (1947) classic, Wiring a Continent, which is likewise not cited. Add in that author’s third concept, ‘strategic consolidation’, and the trilogy nicely captures the trajectory and tenor of developments in the telegraph industry and, as John shows in detail and with due regard to the specific twists and turns, the telephone and radio industries as well. John also spars with some phantom foes regarding agrarian populism and the progressive era, but seems too polite to name them. A few small errors also crept in: on page 184, for instance, John states that Gould owned three newspapers, but on page 187 he claims it was two.

Finally, John fails to seriously consider how the “network nation” was intertwined with global, or at least trans-Atlantic, trends, cutting his analysis short in important ways and betraying an implicit methodological nationalism as a result. The ‘cheap telegraph rates’ movement of the 1880s, for example, did not just reflect intensified angst over Western Union after its take-over by Gould, but a broader global, or at least trans-Atlantic, trend. Henniker Heaton, the Australian-born British Member of Parliament, for example, was influential in U.S. circles and often quoted on the topic in the New York Times and New York Herald (see Winseck & Pike, 2007, ch. 5).

John’s examination of the post-WWI re-organization of the electric communication industries in chapter 10 betrays the same limitation. Here, John problematically uses the assertion that “every cable linking the United States and Europe in the First World War was under British control” (p. 398) to explain why Theodore Vail stood solidly behind the Wilson Administration’s take-over of the telegraph, telephone, radio and international cables. The problem, however, is that Vail had announced Western Union’s take-over of the British trans-Atlantic cables nearly ten years earlier, when he was President of that company. According to the New York Times, the acquisition gave Western Union “a real system of competitive cables, free of the stigma . . . of foreign domination” (“Letters by cable is . . . .”, 1911, p. 6). The pre-eminent British cable expert, Charles Bright, agreed, but bemoaned the fact (Bright, 1911, p. xvii; also Winseck & Pike, 2007, pp. 187-190).

As a matter of fact, Western Union and another predominantly U.S.-based company, the Commercial Cable Company, had been staking out ever-stronger positions in the trans-Atlantic cable market since the 1880s. Adding this element to John’s otherwise brilliant account would not diminish it one iota, but would enrich it by showing how characters already significant in his story – e.g. Jay Gould, James Gordon Bennett, the New York Herald, the Postal Telegraph Company – fit into the global picture. By WWI, the British did leverage their control over cable landing rights to implement a comprehensive system of cable surveillance, but this was hardly unusual, and both companies begrudgingly went along.

These shortcomings aside, Network Nation is a ground-breaking work. Its account of the importance of bringing ‘intelligence’ and communications innovations to “every man’s door” from the late-18th century onwards has a great deal of relevance to current debates about broadband Internet access and the ‘crisis of journalism’, among many others. It is also part and parcel of, and on the leading edge of, a welcome renewal taking place in media history. Wonderfully written, brilliantly told, and beautifully illustrated, Network Nation will quickly assume its place alongside other seminal works, such as Paul Starr’s (2004) The Creation of the Media.

References:

Babe, Robert E. (1990). Telecommunications in Canada. Toronto: University of Toronto Press.

Barnouw, Erik (1975). Tube of Plenty. New York: Oxford University Press.

Berle, Adolf A. & Means, Gardiner C. (1932/1968). The Modern Corporation and Private Property. New York: Harcourt, Brace & World.

Benkler, Yochai, Faris, Rob, Gasser, Urs, Miyakawa, Laura, & Schultze, Stephen (2010). Next Generation Connectivity: A Review of Broadband Internet Transitions and Policy from Around the World. Cambridge, MA: Berkman Center for Internet & Society. URL: http://cyber.law.harvard.edu/publications/2010/NextGenerationConnectivity [Last visited March 14, 2011].

Blondheim, Menahem (2004). “Rehearsal for Media Regulation: Congress Versus the Telegraph-News Monopoly, 1866-1900.” Federal Communications Law Journal, 56, 299-328.

Bright, Charles (1911). Imperial Telegraphic Communications. London: P.S. King & Son.

Chandler, Alfred, Jr. (1977). The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: Harvard University Press.

Fischer, Claude (1992). America Calling: A Social History of the Telephone to 1940. Berkeley, CA: University of California Press.

Gabel, Richard (1969). The Early Competitive Era in Telephone Communications, 1893-1920. Law and Contemporary Problems, 34(Spring), 340-359.

“Letters by cable is the plan now” (1911, September 15). New York Times, p. 6.

McChesney, Robert & Nichols, John. (2010). The Death and Life of American Journalism. Philadelphia, PA: Nation Books.

Pool, Ithiel de Sola (1983). Technologies of Freedom. Cambridge, MA: MIT Press.

Schiller, Dan (1996). Theorizing Communication. New York: Oxford University Press.

Starr, Paul (2004). The Creation of the Media. New York: Basic Books.

Thompson, Robert L. (1947). Wiring a Continent. Princeton, N.J.: Princeton University Press.

Winseck, Dwayne & Pike, Robert M. (2007). Communication and Empire: Media, Markets and Globalization, 1860-1930. Durham, NC: Duke University Press.

Carleton J and Comm School Grad Conference – March 10-11th

My nose has recovered from surgery faster than I anticipated, so it looks like I can attend the Carleton University J and Comm School Grad Student Conference tomorrow and Friday after all.

It looks like a great conference, with people from far and wide, and a very promising keynote talk, the Paul Attallah Lecture, to be given tomorrow at 5pm by Dr. Lisa Nakamura.

CRTC Approves More Media Consolidation: BCE’s Acquisition of CTV / CHUM (again)

This is a first take on today’s decision by the CRTC to approve BCE’s return to the broadcasting business (full decision here). For those with what counts as an elephantine memory in these fast and harried times, BCE had taken over CTV once before, in 2000, and failed, leaving the television business six years later. Today, it returned with the CRTC’s blessing and the typical sop thrown to the Canadian ‘broadcasting system’, albeit at perhaps an even more meagre and self-serving level than usual.

The decision allows Bell Canada Enterprises a second run at making vertical integration and so-called synergies work between its telephone, satellite and ISP (i.e. network infrastructure) businesses and the largest media group in the country, with its CTV and A-Channel networks, 31 satellite and cable television channels, 28 local television stations and 33 radio stations. The only things really different from ten years ago are that BCE has dramatically scaled back its ownership stake in the Globe & Mail (the Thomson family holds the rest) and that sprawling media conglomerates have, by and large, gone out of fashion since the turn of the 21st century.

Another important thing that should catch our eye is that CTV is now worth less than it was a decade ago — not because the TV business has shrunk (overall it has expanded from a $5 billion industry to one worth $7 billion, adjusted for inflation), but because the first six years of BCE’s tenure were pitifully poor. CTV was worth less than half its original value when BCE left in 2006. Today, after all the growth in the industry plus the acquisition of CHUM, the combined value is about the same as CTV alone was worth ten years ago: $2.45 billion.

That number is important because it is the one the CRTC uses to peg the value of the contributions that BCE must pay into the ‘broadcast system’ in order to gain the CRTC’s blessing. At ten percent, BCE’s contribution is $245 million. Even worse, $65 million of that amount will go directly into the pockets of Bell TV, BCE’s own direct-to-home satellite provider.
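To make the arithmetic explicit, here is the calculation as a minimal sketch (Python used purely for illustration; the figures are only the ones cited above):

# Back-of-the-envelope arithmetic using the figures cited in this post.
transaction_value = 2_450_000_000  # CRTC's $2.45 billion valuation of CTV/CHUM
benefits_rate = 0.10               # the standard ten percent benefits levy

total_benefits = transaction_value * benefits_rate  # $245,000,000
bell_tv_share = 65_000_000         # the slice flowing back to BCE's own Bell TV
left_for_others = total_benefits - bell_tv_share    # $180,000,000

print(f"Total benefits:  ${total_benefits:,.0f}")
print(f"Back to Bell TV: ${bell_tv_share:,.0f}")
print(f"Left for others: ${left_for_others:,.0f}")

In other words, more than a quarter of the ‘public benefits’ never leaves the BCE family at all.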

The rest is for the usual content, news, drama, culture, music, etc. — the ‘cultural industries’ sop that the CRTC requires and that companies on the prowl exploit to line up support for their take-overs from media workers, directors of Journalism and Communication schools across the country, and so forth. The result is greater media concentration blessed by the state, with a few crumbs off the table for others with a stake in the game.

Others, with broader interests, can go packing. The CRTC fudges the language to conceal the fact that while vertical integration and media conglomerates are on the wane elsewhere, they’re on a tear here in Canada, despite the regulator’s supposed new rules limiting media concentration set in place in 2008.

Elsewhere, the crash in the value of the turn-of-the-21st-century star of colossal media conglomerates, Time Warner, wiped out more than a quarter of a trillion dollars in market capitalization, as the company fell from an estimated worth of $350 billion in 2000 to $78 billion in 2009. AT&T also went belly-up in its aggressive move from the wires into all things media, only to be resurrected in 2005 when the moribund company was bought out by SBC. Vivendi Universal in France is another poster child of media conglomeration gone bad. Other examples are as easy to pile up as leaves in autumn.

But here in Canada, in a manner akin to what takes place in oligarchic capitalist societies — think Russia and South America — giant media enterprises are again on the rise. Today’s blessing of BCE’s acquisition of CTV/CHUM (A-Channel) follows last October’s approval of Shaw’s take-over of the financial wreckage that was Canwest television, and at fire-sale prices to boot!

Of course, the trend is not all in one direction. Swimming against the tide in the U.S., Comcast — that country’s largest cable provider — recently had its acquisition of NBC-Universal approved by the Dept. of Justice and the FCC (but see Commissioner Michael Copps’s scathing dissent). Exceptions to the rule though they may be, it is instructive to compare the US decision approving Comcast’s take-over of NBC with the CRTC’s decision to sanction BCE’s acquisition of CTV/CHUM.

In the US, the Dept. of Justice and FCC put fairly tough demands on Comcast to make its television and film content available to Internet competitors and ‘online video providers’ (OVPs), to adhere to open Internet requirements and to “offer broadband services to low-income Americans at reduced monthly prices; and provide high-speed broadband to schools, libraries and underserved communities, among other benefits” (FCC Press Release).

The CRTC, in contrast, will look at issues of vertical integration in a future set of hearings that it intends to hold in June. None of the other issues is even on the table, or so it appears.

Well, another sad day in Canada. A great opportunity to articulate a vision, and to implement ideas and practices that could have built one of the most open media systems in the world, has been squandered. Instead, at the CRTC and in Canada’s media industries, it’s business as usual.

Social Media and Memory Ownership

I was reading Digg today to figure out whether or not I should join, and even just how to do it. It is not easy if you don’t have Facebook, Twitter or Google accounts to link to. Digg wants you, but more importantly, it wants your network of personal relationships and associations.

As some of you will know, I like reading things like corporate annual reports, ‘terms of use’ statements, privacy pledges, and other techno-corporate-legal-bureaucratic mumbo jumbo. Just as your eyes are ready to glaze over, something hugely important will often jump out of these limp and lazy sources. If you’re really lucky, the insight might be valuable for a long time.

So today I thought about Digg. After the frustrating discovery that for Digg to be useful you have to join Facebook, Google and company, I decided I was up for even more torture. So I read Digg’s ‘terms of use’ and ‘privacy‘ statements. As I said, these things can be useful, interesting, occasionally even fun to read. This time did not disappoint.

Case in point: Rule 3 of Digg’s Terms of Use raises interesting questions. What happens to our ‘memories’, or ‘digital personae’, when they are unplugged from the networked social media environment? And who owns our digital data image?

Here’s a quote from Rule 3 that got me thinking:

“Digg may change, suspend or discontinue the Services including any content (including, but not limited to text, user comments, messages, data, information, graphics, news articles, photographs, images, illustrations, software, audio clips, and video clips) for any reason, at any time . . . .”

The reference to time, of course, was a dead giveaway that we should think about memory. But seriously, if we build up a digital persona of ourselves using text, data, comments, graphics, images, audio clips, and so on, should someone else be able to “change, suspend, or discontinue” our digital replicant as they please? No forewarning. No requirement to send a ‘zip.file’ containing all your data — just bam, unplugged. The company’s been taken over, gone bankrupt, whatever: you’ve just lost a lot. This is what happened to many music aficionados in early 2010, when Google’s Blogspot pulled the plug on thousands of music-related blogs.

What would a ‘good digital persona storage’ practice or policy look like?
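One answer, sketched here purely hypothetically to make the idea concrete (Digg offers no such facility, and the helper below is invented for illustration), would be a standing right to pull down a dated archive of one’s own contributions before anything is discontinued:

import json
import zipfile
from datetime import date

def export_persona(comments, submissions, filename=None):
    # Hypothetical helper: bundle a user's contributions into a dated
    # zip of JSON files -- the 'zip.file' a service could hand over
    # before pulling the plug. Not a real Digg (or anyone's) API.
    filename = filename or "persona-{}.zip".format(date.today().isoformat())
    with zipfile.ZipFile(filename, "w") as archive:
        archive.writestr("comments.json", json.dumps(comments, indent=2))
        archive.writestr("submissions.json", json.dumps(submissions, indent=2))
    return filename

# A user (or the service, at shutdown) would run something like:
export_persona(
    [{"text": "great story", "posted": "2011-03-28"}],
    [{"url": "http://example.com", "dugg": True}],
)

A practice like this would not settle the ownership question, but it would at least decouple the survival of our ‘memories’ from the survival of the company holding them.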

These are not just questions about memory (or ‘digital personae’), but about property as well. When it comes to property, Digg is pretty clear that “[U]ser information is typically one of the business assets that . . . we may choose to buy or sell”. Personal information, in short, is private property. That would seem to let the company do whatever it wants with the personal information that gathers around its activities. Therein, however, lies the tension between property and memory.

Personal information can never be Digg’s exclusive ‘business asset’, of course, because it is so cheap and easy to reproduce, so memory and ‘digital replicants’ are relatively safe. The possibility that our digital personae, or replicants, could simply be yanked from cyberspace is still cause for concern. That Digg has adopted creative commons principles, however, at least makes repeating ourselves easier.

The Search for ‘New Media Models’

Here’s an interesting link to a talk given by Alan Rusbridger, the editor of the Guardian in the UK. It offers a glimpse of how he, and the Guardian, see the emerging media environment. The tone he strikes is interesting insofar as it is not saturated with the ‘journalism in crisis’ trope, but concerned instead with the opportunities that exist as established models are forced to adapt to fast-paced and relentless changes.

Rusbridger demonstrates an openness to, and understanding of, media trends that appears well ahead of his counterparts in North American media. He also fleshes out some of the potential of a ‘collaborative model’ of journalism between the traditional media and new developments in social media, from WikiLeaks to Twitter. In this regard, he points to how the Guardian now casts itself not just as a newspaper publisher, but also as a platform for OPC (other people’s content).

Finally, Rusbridger offers his thoughts on the way ahead not by throwing history and serious thinkers from the past overboard, but by embracing them. He discusses the continued threats of media concentration; highlights the undeniable dependence of most media on some form of subsidy — whether advertising, allocations for public broadcasters, or wealthy patrons — by drawing on Walter Lippmann’s 1922 classic, Public Opinion; and reflects on the relationship between Raymond Williams’s observations from 1958 on communication and culture and conditions today.

In other words, media ownership and concentration, how stuff is paid for, and the relation of each to bigger questions about the kind of societies and cultures we currently live in, and those that we might imagine, are all given a serious nod.

Wikileaks and the Digitally Networked Free Press

Yochai Benkler has just published a fascinating study of WikiLeaks and what he calls the free and irresponsible press. The study’s focus is WikiLeaks, the Internet-based ‘whistle-blower’ site, but it is more than that; it is, as Benkler states, a battle for the soul of the networked fourth estate.

Benkler makes several key arguments. The most important, in my view, is that WikiLeaks is part and parcel of a broader set of changes that, once the dust settles, will likely stabilize around a networked media ecology consisting of (1) a core group of strong traditional media companies; (2) numerous small commercial media (e.g. the Huffington Post, the Tyee, the Drudge Report, Global Journalist); (3) non-profit media (e.g. WikiLeaks, Wikipedia); (4) partisan media outlets (e.g. the Independent News Network, Rabble.ca, Daily Kos, TalkingPointsMemo); and (5) hybrids that mix features of all the others.

Second, he argues that far from aggravating the ‘crisis in journalism’, the Internet may in fact be improving the quality of the news media and journalism overall. According to Benkler, the current turmoil among traditional news outlets is the result of self-inflicted wounds that have festered for decades. The rise of the Internet and the changing technological and economic basis of the media magnify these problems, but they are not responsible for them.

Instead of bemoaning the impending ‘death of journalism’, Benkler strikes a cautiously optimistic note. The blogosphere and Internet are undoubtedly bastions of vanity and of personal opinion masquerading as fact, places where bellicose politics trumps civility. Crucially, however, they are also sites where new forms of journalism, new approaches to knowledge production and new kinds of creative expression are emerging, with the potential to make a mighty contribution to journalism and democracy.

WikiLeaks is the poster child for some of the potential of non-profit, ‘crowd-sourced’, investigative journalism. More broadly, the poster child of ‘crowd-sourced’ knowledge is Wikipedia, a socially produced online encyclopedia that now ranks among the top seven or eight most visited websites in the world — except in countries such as China, where it is hard to access. Wikipedia’s credibility ranks on par with venerable entities such as the Encyclopedia Britannica.

Benkler is keen to show that unless we recognize that relatively new actors are making valuable contributions to the networked media environment, we will end up with impoverished journalism and weakened democracies. A key step in this process of recognition is to understand that outlets such as WikiLeaks are fundamentally ‘journalistic’ in function.

Third, to the question of whether WikiLeaks is a ‘news organization’ and whether its key players, most notably Julian Assange, are journalists, Benkler offers an emphatic yes.

The first proof of this is that, since it began in 2006, WikiLeaks has received several awards recognizing it as such, from Amnesty International and the British magazine Index on Censorship. More recently, it has been nominated for a Nobel Peace Prize.

Second, it is a global news agenda setter. In 2010, it did this not just once, but four times:

(1) the release of the ‘collateral murder’ video in April;

(2) the release of the Afghan and (3) Iraq war logs in July and October, respectively, and

(4) the release of 1,900 diplomatic cables beginning November 26th.

This is no small feat. It would seem to indicate that Wikileaks is not marginal to journalism, but central to it.

Third, WikiLeaks has worked hand-in-glove with some of the most prestigious news outlets in the world: the Guardian, the New York Times, Der Spiegel, Le Monde and El Pais. Rather than simply dumping all of the 250,000 ‘embassy cables’ it claims to hold into the public domain, it has released only 1,900 or so since late November. Its materials, at least after the problematic “collateral murder” video, have been selected, edited and presented according to professional news values.

Fourth, working with these news organizations has maximized attention for these stories. It has also allowed the news organizations to bolster their already strong positions in local and ‘global news markets’. The cables were released pretty much simultaneously by WikiLeaks, the Guardian, the New York Times, Der Spiegel, Le Monde and El Pais. The benefits of cooperation cut both ways.

Fifth, this is better than the ‘old days’. In the WikiLeaks case, for example, the NYT consulted with the Obama Administration before releasing the ‘war logs’ and the ‘diplomatic cables’. Such consultation might seem unduly deferential, and it is. But it was better than sitting on the material for a year, as the NYT did in 2005 at the behest of the Bush Administration in the NSA/AT&T unauthorized wiretap case (see the mea culpa by NYT public editor Byron Calame, Jan. 1, 2006). As a non-profit source, without the need to stay in good standing within the circles of political, military and corporate power, WikiLeaks does not have to assume such a deferential posture.

Awards, agenda-setting, cooperation with prestigious news organizations, mutually beneficial arrangements, no small reliance on long-standing professional practices, and even some deference to state power are still not enough, it seems, to prove WikiLeaks’ journalistic credentials. Despite all this, and the careful, indeed responsible, approach it took (i.e. as a free and responsible press), WikiLeaks’ actions led to paroxysms in some quarters.

Calls for execution, treason charges, and so forth would normally fall beyond the pale of ‘normal democracy’, but in the WikiLeaks case they have heavily framed the discussion. Press coverage has been, at best, poor when it comes to the specifics of the case: two-thirds of news reports have mistakenly implied that WikiLeaks simply dumped everything it had into the public domain. Several members of the U.S. Congress called for Assange to be tried for treason; a common tactic was to label him a terrorist. This is not a political culture in which a free press flourishes.

Two prospective Republican presidential candidates, Sarah Palin and Mike Huckabee, as well as some hard-line conservatives in Canada (e.g. Tom Flanagan and Ezra Levant), called for Assange’s execution. These actions were not just over-the-top; they were a threat to a free press and to democracy. Just how over the top they were is indicated by the measured response of U.S. Defense Secretary Robert Gates: “Is this embarrassing? Yes. Is it awkward? Yes. Consequences for U.S. foreign policy? I think fairly modest” (quoted in Benkler, p. 16).

Highlighting WikiLeaks’ status as a journalistic organization reminds us that, rather than being beyond the pale, it should be situated firmly within the parameters of the free press tradition. The collaborative venture exercised an editorial hand with a keen eye to minimizing threats to humanitarian workers and to military operational security. The WikiLeaks case offers a glimpse of a template for a ‘new cooperative model’ between established news outlets and newcomers. It can help move us beyond the snooty idea that journalism is whatever the traditional media tell us it is.

‘Networked journalism’ and ‘crowd-sourcing’ are being rapidly integrated into the operations of well-established news outlets. Such activities do not just free-load on the ‘content’ of the mainstream press; they sometimes function as rivals, at other times as partners. As the uprisings across the Arab world indicate, the networked public sphere and crowd-sourcing are fast becoming standard operating procedure in the global news system.

The history of cooperation between WikiLeaks and the above news outlets has been far from smooth. It has been rife with tensions and personal animosities, especially, it appears, between at least one senior New York Times editor and Julian Assange. Beyond individual personalities, constant claims about ‘journalism in crisis’ have made it easy to cast the Internet in the role of villain. Yet the bottom line in all this jostling between the ‘new’ and the ‘old’ is that members of the networked fourth estate deserve the full rights and protections of the ‘free press’ no less than ‘pamphleteers’ and well-established news outlets such as the Globe & Mail, the New York Times or the Nation do.

WikiLeaks’ sturdy journalistic credentials, Benkler argues, make it all but impossible that any direct attempt by the U.S. Government to put WikiLeaks out of business could pass legal and Constitutional muster. The New York Times’ Pentagon Papers case of 1971 is, in fact, very instructive for the present situation, despite constant denials to the contrary.

The key figure in the Pentagon Papers case, Daniel Ellsberg, has already argued that Assange and WikiLeaks are no more treasonous, nor any further outside the scope of the free press protections of the U.S. Constitution, than he and the New York Times were in the era of the Vietnam War. Benkler concurs, and walks us through the legal steps as to why this is so:

  • unless the government can show that publication will result in direct, immediate and irreparable harm to the U.S., or its people, any attempts to prevent publication will run foul of the First Amendment;
  • journalists cannot instruct their sources to steal documents, but they are not obligated to determine or reveal how the source obtained them;
  • in times of war, there is no better counter to ‘strong presidents’ than a free press.

The parallels between these two events have been obscured by denial and by the tendency in journalistic and other circles to belittle Internet-centric forms of journalism and commentary in the blogosphere. Yet investigative journalism and commentary are not the sole province of the traditional press. They are a signature feature of Internet-based news and commentary outlets, and those qualities are more important than ever in light of the constant erosion of such capabilities within the mainstream media over the past two decades or so. Hotheads and conservatives may not like dissent, but that’s why freedoms of speech, press and association exist to begin with.

The fact that WikiLeaks is so solidly at one with journalistic and free press traditions helps to explain why neither it nor any of the five major newspaper organizations it is working with — the Guardian, the New York Times, Der Spiegel, Le Monde and El Pais — have faced direct efforts by the U.S. Government to suppress the publication of WikiLeaks’ documents. Although, as the Twitter case indicates, this was not for lack of trying (see here for an earlier post).

The problem, however, is that what the state has not been able to obtain by legal and constitutional measures, it has been able to gain with remarkable ease from private corporations and ‘market forces’. Thus, buckling under the slightest of pressure, Amazon removed all of WikiLeaks’ content from its servers on the same day (December 1, 2010) that independent Senator and Senate Committee on Homeland Security and Governmental Affairs chair Joe Lieberman called on “any . . . company or organization that is hosting Wikileaks to immediately terminate its relationship with them”.

Two days later, the company everyDNS delisted WikiLeaks from its domain name registry. As a result, Internet users who typed wikileaks.org into their browsers, or clicked on links pointing to that domain, got a page indicating that the site was no longer available (in addition to Benkler, see the Guardian’s timeline of the sequence of events).
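The mechanics here are worth pausing over, because delisting touches nothing on the server itself — only the name that points to it. A rough illustrative sketch of the generic lookup any browser performs (not WikiLeaks’ actual infrastructure):

import socket

def resolve(domain):
    # Ask the DNS for the address behind a name; a delisted domain
    # raises an error even though the web server may still be running.
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None

# With the registry records pulled, the name fails to resolve and the
# site becomes unreachable for anyone who does not know its raw IP
# address or an alternate name (which is how wikileaks.ch stepped in).
if resolve("wikileaks.org") is None:
    print("Name not found; the host itself may still be up.")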

WikiLeaks quickly found a new home with the webserver firm OVH in France. This connection, however, was also severed after the French Industry Minister warned Internet companies on December 4 that there would be “consequences” for helping to keep WikiLeaks online. Switch, the Swiss DNS registry, faced similar pressure but refused to buckle, and it continues to maintain the wikileaks.ch address that Internet users still use to access the site — a site that also labours under a constant barrage of Distributed Denial of Service (DDoS) attacks. The Swedish Pirate Party also stepped in, on December 5, to host the ‘cablegate’ directory after it was taken offline in France and the US. Twitter, too, has resisted strong-arm tactics from the U.S. government (see Twitter does the Right thing).

While Amazon and everyDNS took out part of WikiLeaks’ technical infrastructure, several other companies moved in to disable its financial underpinnings. Over the course of four days, PayPal (owned by eBay) on December 4, MasterCard and the Swiss Postal Office’s PostFinance on December 6, and Visa on December 7 suspended the payment services that donors used to fund the site.

The lessons here are three-fold. First, private companies are all too often all too eager to comply with political directives from the state. Cutting WikiLeaks off from key technical and financial resources after coming under the slightest bit of pressure means that several major private businesses willingly served as proxies for the U.S. and other governments, doing what those governments would otherwise be prevented from doing by constitutional protections for the free press. This is a real threat to the networked free press. It is also one of the reasons that WikiLeaks exists in the first place.

Second, efforts to suppress unwanted speech are never complete. The distributed nature of the Internet, and the dispersed actors committed to open media and a free press, mean that sites can and will be relocated elsewhere. That should not detract from the fact, however, that fundamental open media principles have been seriously compromised in the meantime.

Third, the reluctance to recognize new forms of journalism, and the tendency to lash ourselves to the mast of the ‘old’ media, are compromising the cultural foundations of the ‘networked free press’. A hostile political and cultural environment is not conducive to a free press. The response of traditional media organizations, particularly in the U.S., and of the New York Times especially, has been ambivalent on this point. By collaborating with WikiLeaks, they have polished the latter’s journalistic credentials. Just as importantly, they have once again demonstrated that gaining attention in a cluttered media environment still requires ‘big media’.

As Benkler emphasizes, news organizations differ in how they see WikiLeaks. In contrast to the New York Times’ reluctance to treat it as anything more than a source, and a mangy one at that, the Guardian sees its experience with WikiLeaks as a template for a ‘new model of cooperative journalism’. Indeed, the Guardian and the BBC are way ahead of their North American brethren when it comes to using ‘crowd-sourcing’ and ‘user-created content’ in news coverage.

The trend was kick-started in the UK by the London bombings of July 2005 and has played a strong role since. If the current uprisings spreading through the Arab world are any indication, this ‘hybrid’ genre of news is now moving quickly from the margins to the mainstream.

The constant hand-wringing about the ‘crisis of journalism’ in the U.S. (and to a degree in Canada), and the tendency to lay it at the doorstep of the Internet, blogs, and readers unwilling to pay or incapable of discerning good journalism from bad, has undermined the status of the networked free press in the culture at large. This ambivalence, along with the hard right’s readiness to reach for the ‘terrorist’ trope and unleash a vitriol hard to imagine in ‘normal times’, further compromises the ‘cultural protections’ needed for a networked free press.

Ultimately, Benkler does a great job, as he so often does, of drawing our attention not just to how the technology and economics of networked media are decisive, but also to how constitutions and culture play a pivotal role in determining whether the contribution of networked media will, on balance, be a boon or a bust for democratic societies.
