
Big New Global Threat to the Internet or Paper Tiger?, Part II: Is the Internet Telecoms?

This is the second in a series of posts that takes a critical look at claims that proposed changes to the international telecommunications regulations (ITRs) at the WCIT meeting later this year could see the ITU establish “international control over the internet”.

My previous post described some of the background to the issues, and three key claims that are being made: (1) that the ITU currently has no role with respect to the Internet but is hell-bent on changing this at WCIT; (2) that the ITU is a state-run telecom club; and (3) that it is a Trojan Horse for a plot by authoritarian states and legacy telcos to impose a new Web 3.0 Model – Controlled National Internet–Media Spaces – over the open global internet.

I think the claims are overblown. I do not believe that the ITU is intending to, or capable of, taking over the internet. I mostly agree with Milton Mueller that the changes being discussed are mainly about economics and interconnection rather than internet censorship and control. An article in the New York Times today expressed a similar view.

In contrast to Mueller, though, I think that the ITU already has a legitimate claim to having a say with respect to the internet, and more to the point, it has already been playing such a role through the last dozen years of active participation in the multi-stakeholder model of internet governance.

Mueller argues that the ITU’s most important efforts to stake a claim to the internet terrain — the domain name system (1996), the two phases of WSIS (2002-2005), IP address management (2009-2010), suggestions for a UN Committee on Internet Related Policies (2011) — have all been mostly failures, not least because they have all been staunchly resisted by the U.S. government. As he says, the U.S. Government “squashed” an early campaign by the ITU and ISOC to wrest control of the international domain name system from the U.S. “like a bug”.

Two years later, ICANN – a California-based non-profit still dependent on the US government today and increasingly embroiled in high-stakes battles over copyright worldwide (e.g. MegaUpload, Rojadirecta) – was created. Mueller is happy about this state of affairs. I am less so, but am under no illusions that the best path to choose is obvious.

If Professor Mueller is right, however, we might not have to choose. The ITU has no jurisdiction over the internet, he argues, just telecommunications. According to him, this is because, beginning fifty years ago during the FCC’s Computer I, II and III inquiries (c. 1965-2002), the U.S. drew a clear, bright line between telecom-based services (pipes and carriage) and computer-based information services (content and the internet).

The Computer II rules formalized the distinction between “basic” telecoms and “enhanced” information services after protracted struggles over key questions about market concentration in the telecom and information industries as well as the range of services to be delivered by the market versus those considered public goods. Many argue that the new rules were wildly successful, not least in terms of fueling the growth of the Internet. I am inclined to agree but would ratchet down the superlatives, without losing focus on issues of market concentration and the public goods nature of telecom, media and internet goods.

The rules were never straightforward, and they have been mired in political and legal mud ever since their adoption. The Supreme Court’s Brand X ruling in 2005 reaffirmed the rule, but in doing so basically set the enhanced service designation up as a near-insurmountable barrier to formal net neutrality rules that can be applied to all carriers and ISPs.

The problem that I see with this argument is three-fold. First, it takes U.S. law as the world’s law. U.S. telecom policy, however, is not global internet policy, nor should it be. Moreover, if the basic/enhanced dichotomy has been mired in controversy in the U.S. for a half-century, just imagine its fate at the global level.

Second, the U.S. can slice and dice the definition of telecoms any way it sees fit, but other countries do things differently, and the ITU defines telecommunication very broadly as: “Any transmission, emission or reception of signs, signals, writing, images and sounds or intelligence of any nature by wire, radio, optical or other electromagnetic systems” (Constitution, Annex; ITR, Article 2.1). A plain reading of the definitions suggests that it includes the internet, which in fact is the view that the ITU and many of its member-states take.

Clearly, though, there is a debate over the scope of that definition, and things will not be settled by recourse to formal definitions but by the politics of language. Those opposed will stand firm against any formal references to the internet in the text of the ITRs, while those on the opposite side will pepper the rules with as many explicit references to the internet as possible. The fact that various members have proposed modifications or additions to at least a half-dozen sections of the ITRs that explicitly refer to the internet has brought these issues to a head.

The most important change to the ITRs is probably the proposal to add a reference to “internet traffic termination” to the existing definition of “International Telecommunications Services” in Article 2.2. Other proposed modifications refer to “VoIP” (Article 3.1, International Network), “Internet traffic and data transmission” as well as to “Internet and Internet Protocol” (Articles 4.2 and 4.3a, International Telecommunications Services, respectively).

Some proposals would also add new references to “international Internet connections” (Art. 3.7), the “internet” in a proposed new section 6.7 related to competition and interconnection issues, and “measures to insure Internet stability and security” in 8.4.A (see Mueller on this point as well). References to “cybercrime”, “data preservation, retention, protection”, “spam”, “identification” and “personal data protection” in new sections of Article 8 also have the internet clearly in their sights. I will examine some of the potential implications of these proposed changes and additions in more detail in the next two posts.

For now, however, my third argument is that things will not turn on the politics of language alone but on the historical and contemporary practices of the ITU as well. In this regard, one thing stands out that I think is determinative: the ITU has taken a broad, evolutionary view of its mandate and morphed with the times since its inception in 1865 (after the merger of two predecessor organizations — the Austro-German Telegraph Union (est. 1850) and the West European Telegraph Union (est. 1855), a point that will become important in the fourth post in this series) (see Drake, Introduction).

Originally called the International Telegraph Union, the ITU added telephones to its remit in the 1880s, radio in the early-1900s, and other new telecom technologies as they evolved. Its name was changed to the International Telecommunication Union in 1932 to reflect its broad and evolutionary view of the terrain. Its Constitution, Decisions, Resolutions and Recommendations (DRRs) and the ITRs make a virtue out of the development and use of new telecom technologies, so it would be a real mystery to find a line drawn in the sand between telecoms before the internet and after, with the ITU confined strictly to the stuff that came before.

More recently, the ITU has been keen to carve out a distinct role for itself in regard to the internet since at least 1996, and arguably earlier if we look back to the 1970s and 80s infatuation with ‘super-pipe’ models of integrated broadband media, even if the internet had not yet become a household name. Its guts were nonetheless being put into place. And it is important to note that even the technical guts of the internet were not all made in America, as the paper by Google’s lawyers Patrick Ryan and Jacob Glick states. The UK, France and other parts of Europe were also involved, and the ITU was part of those efforts (Abbate, 1999; Mansell, 1993).

Yet, let’s take 1996 as the starting point because that is when the ITU and ISOC worked hand-in-glove in a bid to shift control over the domain name system from the U.S. to the ITU.  “The U.S. squashed that effort like a bug”, as Mueller states. Two years later, in 1998, the U.S. government created ICANN, where things have rested ever since.

Whereas Mueller sees just a long line of losses confirming that the ITU has no business in the internets of the world, I look past whether or not it has ‘won’ or ‘lost’ vis-a-vis the U.S. to see a long track record of practices that have evolved with the times. Thus, in the case of the internet, two years after the dispute over DNS, the ITU reaffirmed its commitment to cooperating with ISOC and the IETF on global internet policy issues (DRR, Res. 102). It stated matter-of-factly that it has a role to play “with regard to international public policy issues pertaining to the Internet and the management of Internet resources, including domain names and addresses” (DRR, Res. 102).

The two phases of WSIS between 2002 and 2005 also saw unprecedented participation by academics and civil society groups with the ITU in trying to imagine and map the frontiers of global internet policy. At the end of the three-year process a new entity was born, the Internet Governance Forum (IGF), loosely under the direction of the United Nations, and with the ITU firmly within it alongside the rest of the ‘multi-stakeholder internet governance’ interests (ISOC, IETF, ICANN).

The IGF’s initial five-year experimental period was renewed for another five years in 2010. All of this is important, too, because even if the ITRs do not currently refer to the internet, the ITU’s record of Decisions, Resolutions and Recommendations is chock-a-block full of explicit and expansive references to the internet (see, for example, Resolutions 101, 133 & 179). Looking beyond the ITRs, therefore, we find a track record of language on the internet that maps onto the ITU’s historical involvement with this domain since the late-1990s.

If the ITU has been such a loser with respect to global internet policy, and really has no place in it, as so many have argued (or just assumed) (Ryan & Glick; all but ISOC panellist Sally Wentworth at U.S. congressional hearings on the so-called “International Proposals to Regulate the Internet” last month, etc.), it has been hiding in plain sight. I think a better view of the matter is that, by dint of definition, a long history of evolution and contemporary practices, the ITU has a legitimate role to play in global internet policy.

Whether it exercises this role wisely or badly, however, is a different matter altogether, and one to which we will turn in the next post.

Next Post: The ITU has been a business and market-dominated institution, not State-controlled, since the 1980s, maybe forever.

Big New Global Threat to the Internet or Paper Tiger?: the ITU and Global Internet Regulation, Part I

Over the past few weeks, a mounting number of commentators in the U.S. have pushed a supposed new threat to an open internet into the spotlight: the International Telecommunication Union (ITU).

According to those raising the alarms, preparations to revise the ITU’s international telecommunications regulations (ITRs) at a meeting this December are being hijacked by a motley assortment of authoritarian countries, legacy telecoms operators, the BRIC countries (Brazil, Russia, India and China) and other developing countries. Their goal? To establish “international control over the internet”. Indeed, the issue is deemed so serious that congressional hearings on “International Proposals to Regulate the Internet” were held in the U.S. at the end of last month.

There seem to be three main claims behind the charge.

The first is that the ITU currently has jurisdiction over telecommunications, but not the Internet. As a paper by Patrick Ryan and Jacob Glick, two lawyers at Google, asserts, “modifications to the . . . ITRs are required before the ITU can become active in the Internet space”. Vint Cerf, Google’s “chief internet evangelist”, similarly chastised the ITU’s “aims to expand its regulatory authority to the Internet” in an op-ed piece for the New York Times, and before the just-mentioned congressional hearings a week later.

Indeed, according to FCC Commissioner Robert McDowell, the idea that the ITU already has any role with respect to the internet is just nuts. Only pariah governments such as Iran “argue[] that the current definition already includes the Internet”, he asserts.

Milton Mueller more sensibly argues that the line between basic telecom and enhanced information services like the internet, developed in the U.S. over the past half-century and subsequently trampolined onto the global stage during the 1990s, leads to the same conclusion: so far as the ITU’s authority is concerned, basic telecoms are within its orbit; enhanced information services like the internet are out.

Indeed, as Eli Dourado, one of the critics leading the charge, told me from his perch at the Mercatus Center/Technology Liberation Front/Cato Institute in a Twitter exchange the other day, nobody was thinking about the internet back in 1988 when the ITRs were last revised and updated. As a result, he says, “no internet traffic is governed under the original treaty. Right now, 90% plus of global communications are not governed by the ITRs. This would change that”.

In sum, if the critics are right, the ITU’s gambit to draw the internet into its orbit would be a huge change from the status quo. But are they right? I do not think so and will come back to why further below after laying out the two other main criticisms.

The second key focus of critics is that the ITU is a “closed organization” beholden to “state-run telecom monopolies”, as Ryan and Glick say. McDowell calls the proposed changes to the ITRs an attempt to impose “a top-down, centralized, international regulatory overlay [that] is antithetical to the architecture of the Net, which is a global network of networks without borders”.

According to this view, the ITU is a government-dominated, telegraph-era dinosaur that is ill-suited for global internet policy, where markets, private actors and contracts and a variety of multi-stakeholder interests, including ISOC, ICANN, IETF, W3C, and other civil society groups, work in ways that are open, consensus-oriented, and inclusive. The same point was made by David Gross, former Coordinator for International Communications and Information Policy at the U.S. Department of State, and now head of the WCIT Ad Hoc Working Group made up of a who’s who of telecom, media and internet titans: AT&T, Cisco, Comcast, Google, Intel, Microsoft, News Corp., Oracle, Telefonica, Time Warner Cable, Verisign and Verizon.

The secrecy and lack of transparency and civil society participation is the main concern of open internet advocacy stalwarts such as Public Knowledge, EFF, the Center for Democracy and Technology (CDT), and ISOC. A letter from CDT and thirty-two other internet advocacy groups calls on the ITU to “Remove restrictions on the sharing of WCIT documents and release all preparatory materials. . . . Open the preparatory process to meaningful participation by civil society . . .; and for Member States, open public processes at the national level to solicit input on proposed amendments to the International Telecommunications Regulations . . .”.

To help speed along this process, a new Wikileaks-style site, WCITLeaks.org, has also been set up to collect and make available documents leaked by ITU insiders, with some good results already in just the first few days.

The third argument is the “Trojan Horse” argument. From this angle, an ‘axis of evil’ of authoritarian states – Russia, China, Iran, Saudi Arabia, Syria – is using the ITU as a vehicle to turn their closed models of national internet spaces into a global standard. One paper after another points to a smoking gun that supposedly reveals the ITU’s end-game: a transcript of a conversation between the head of the ITU and Russian President Vladimir Putin in which the latter waxes on about the need to establish “international control of the internet through the ITU”.

The model supposedly being ushered onto the world stage through the ITU is not the well-known Chinese model of internet filtering and website blocking, but a new “Trusted Web 3.0”. In the Web 3.0 model, authoritarian states use filtering and blocking techniques to deny access and (1) establish national laws that put such methods on a firm legal footing, (2) carve out a distinctive national internet-media space dominated by national champions (Baidu, Tencent, Yandex, Vkontakte) instead of Google, Facebook and Apple, within which (3) the state actively uses ‘internet-media-communication’ campaigns (propaganda) to shape the total information environment (see Deibert & Rohozinski, ch. 2).

Obviously, if the critics are right, there’s a lot more at stake in the WCIT than just bringing up to date rules last revised in 1988, before the internet was even well-known. There is, indeed, much at stake with the proposed revisions and much that is quite nasty within the rules themselves and in how the ITU itself approaches global telecom and internet policy. Yet, as Mueller notes, while the critics’ focus on internet control and censorship by nasty governments might play well to their base, their claims are overblown and misrepresent the nature of the problems at hand. I agree with Mueller on this point, but also disagree with him on a few significant points, as we will see.

Over the next few posts I will offer, first, a post that lays out my criticism of the critics and, second, another that homes in on proposed changes to the ITRs, as well as elements that look likely to be retained and perhaps expanded, that I think are deeply problematic and a genuine threat, if not to the global internet, then to the people living within countries whose practices would obtain the imprimatur of legitimacy from the ITU if they are accepted at the WCIT in December. Finally, I’ll offer an argument as to why the ITU should be reformed and retained rather than scrapped.

The crux of my criticisms is as follows: (1) that the ITU has always had a role with respect to the internet by dint of the expansive definition of telecommunications governing its operations; (2) that the battle over whether the ITU’s approach to global telecom and internet policy would be driven by the state or the market was settled decisively in favour of “the market” in the 1980s and 90s; (3) that while the ITU has a role in telecom and internet policy, its role has been increasingly neutered by the shift to the WTO and the ‘multi-stakeholder internet governance model’ since the 1990s; and finally (4) that the non-binding nature of ITU rules and the principle of national sovereignty underpinning them mean that the ‘axis of internet evil’ cannot use the ITU as a Trojan Horse to impose their Web 3.0 model on the rest of the world.

After I lay out these criticisms, in the next post I intend to dig deeper into the details of the ITU’s Constitution, Decisions, Resolutions and Recommendations as well as the ITRs and proposed changes to them. I will do so in order to reveal that, in fact, there are aspects of the ITU’s global telecom and internet policy regime that are deeply problematic and, indeed, wholly unworthy of whatever legitimacy might be brought their way by being associated with the ITU and, by extension, the UN.

In this respect, I will home in on: (1) how people’s right to communicate (Art. 33) clashes with rules that allow the state to cut off and/or intercept communication in cases that “appear dangerous to the security of the State or . . . to public order or to decency” (Arts. 34 & 37); (2) proposed changes to the ITRs by the European Telecommunications Network Operators (ETNO) that legitimize the pay-per model of the internet and thus threaten network neutrality (Art. 3); (3) existing aspects of Article 8 of the ITRs and proposed changes relating to cybercrime, national security, whistle-blowing, user identities and anonymity that are at odds with privacy norms outlined elsewhere in the ITU framework (e.g. Article 37 on the Secrecy of Telecommunications) and which put the interests of the state well above those of the individual.

Finally, I will make an argument as to why the ITU, after thoroughgoing reforms, is still a useful and desirable organization, building on the following arguments:

(1) it is already working within the ‘multi-stakeholder internet governance regime’ through the Internet Governance Forum established in 2005, and serious questions exist about U.S. hegemony over, in particular, ICANN (as illustrated by the U.S. government’s targeting of domain name resources to cripple Wikileaks, take down foreign websites accused of violating U.S. copyright laws (see the Rojadirecta case) and recent legislative proposals to formalize such tactics in SOPA);

(2) proposed changes adding elements of consumer protection with respect to mobile roaming charges and contracts as well as with respect to evaluating concentration in telecom and internet markets at the global and national level are worthwhile; and

(3) its broader remit reconciles global markets and technology, on the one hand, with broader norms related to the right to communicate, development and other important human rights and freedoms, on the other, that are entirely absent from the one-sided, market-driven model of globalization represented by the ITU’s closest counterpart, the WTO.

The Twitter – Wikileaks Cases and the Battle for the Network Free Press, Now It’s Personal: an Afternoon with Birgitta Jónsdóttir

A week-and-a-half ago I met up with Birgitta Jónsdóttir, an activist Icelandic MP and central figure in the Twitter – Wikileaks cases (see earlier posts on the topic here, here, here and here). Passing time on Twitter, I saw she was in Ottawa, sent her a tweet, quickly received a reply and presto, we met on a Sunday afternoon with a fellow professor from the University of Ottawa, Patrick McCurdy.

Jónsdóttir came to our attention after becoming a target of the U.S. Justice Department’s ongoing investigation of Wikileaks because of her role as co-producer of the video Collateral Murder. The video documents a U.S. Apache helicopter gunning down two Reuters journalists and several others in Baghdad and was released by whistle-blowing website Wikileaks in April 2010. It marked the beginning of the site’s campaign to release what would be the largest cache of US classified material the world has ever seen.

Over the course of 2010, Wikileaks teamed up with five of the world’s most respected news outlets — the New York Times, The Guardian, Der Spiegel, Le Monde and El Pais – to release material that wreaked havoc with the routine conventions of journalism and to set the global news agenda not once, but three more times: (2) the release of the Afghan and (3) Iraq war logs in July and October, respectively, and (4) a cache of diplomatic cables starting in late November.

The response from the U.S. Government was ferocious. The search to find the source of the leaks quickly led to the arrest of U.S. Army intelligence analyst, Bradley Manning, in May 2010, and his detention in solitary confinement ever since. Simultaneously, it began shaking down popular U.S. search and social media sites such as Twitter, Facebook, Skype and Google in a bid to gain access to information about people of interest in the Wikileaks investigation.

Birgitta is one of those people, along with Wikileaks front man Julian Assange, Tor developer, activist and Wikileaks volunteer Jacob Appelbaum, as well as Dutch hacktivist Rop Gonggrijp. Let’s call them the “Twitter – Wikileaks Four”.

Entering this murky world of state secrets, blacked out documents and unnamed internet companies cooperating with electronic surveillance efforts by the state offers a rude slap to anyone who sees the U.S. as a beacon of democracy, human rights and the free press. In fact, such values seem to have wilted with alarming ease in the face of the national security claims surrounding Wikileaks, and Birgitta Jónsdóttir specifically.

Talking to Jónsdóttir gave us a personal look behind the cool, technical view found in legal briefs and court rulings. And one of the first things she told us is that she no longer sets foot on U.S. soil on the advice of her lawyers and Iceland’s foreign ministry, despite having diplomatic immunity on account of being a Member of Parliament in Iceland. Still embroiled in the Wikileaks cases, she has recently joined a lawsuit launched by Noam Chomsky, Naomi Wolf, Christopher Hedges, Daniel Ellsberg, and others to overturn the National Defense Authorization Act on the basis that its vague definition of terrorists risks sweeping dissidents into its maw, thereby threatening their ability to travel freely in the US and worldwide without fear of being arrested.

That we know much at all about how internet companies have been dragooned into the crackdown on Wikileaks is due to the fact that Birgitta, Appelbaum and Gonggrijp have led the fight against such activities, with legal support from the American Civil Liberties Union and Electronic Frontier Foundation, in the courts of law and public opinion (Assange has kept his focus elsewhere). And it is for this reason that The Guardian newspaper last month put Birgitta, Appelbaum, Twitter’s chief lawyer, Alex MacGillivray, and Assange on its list of twenty “champions of the open internet”.

MacGillivray made the list primarily because only Twitter had the spine to challenge the Justice Department’s “secret orders” (not “court authorized warrants”), whereas all of the other search and social media companies rolled over and shut up. This was not just a one-time stance, either. This week Twitter was at it again, pushing to have a court order forcing it to hand over information about an Occupy Wall Street activist to New York Police overturned.

Twitter won a modest victory in January 2011 in the first Twitter – Wikileaks case when it obtained a court order allowing it to tell Jónsdóttir and the others that the Justice Department was demanding information about their accounts as part of its Wikileaks investigation. The victory also opened a bigger opportunity to discover what other internet companies may have received the Justice Department’s secret orders.

Whatever hope was raised by the first Twitter – Wikileaks ruling was dashed by a District Court ruling in the second case last November, however. The ruling was blunt: users of corporate-owned, social media platforms have no privacy rights.

Using the same logic subsequently used in the “Occupy Wall Street” case, the court argued that Jónsdóttir et al. had no privacy rights because Twitter, Skype, Facebook and Google’s business models are based on maximizing the collection and sale of subscriber information. Under such conditions, users alienate whatever privacy rights they might otherwise claim. As the ruling put it, Jónsdóttir and her co-defendants “voluntarily relinquished any reasonable expectation of privacy” by clicking on Twitter’s terms of service (p. 28).

With privacy reduced to the measuring rod of corporate business models and a perverse interpretation of its terms of service, Twitter was forced to hand over Jónsdóttir, Appelbaum and Gonggrijp’s account information to the Justice Department: registration pages, connection records, length of service, internet device number, and more.

A last-ditch appeal was made by lawyers at the ACLU and EFF last January to reveal which other internet companies had received “secret orders” from the Justice Department. While no one knows for certain who they are, all eyes are on Google, Facebook and Skype (Microsoft). A decision is expected by the end of June, but Jónsdóttir isn’t holding her breath.

Pausing to reflect on the personal effects of the Twitter – Wikileaks cases overall, however, she remains upbeat rather than downtrodden.

“You have to completely alter your lifestyle. It’s not pleasant, but I don’t really care. . . . It just insults my sense of justice . . . . I would not put anything on social media sites that . . . I don’t want on the front pages of the press.”

Rather than dwelling on the costs to her personally, however, Jónsdóttir is quick to tie these events into a larger, more daunting picture. In doing so, she wants to prick the fantasy of Obama as a great liberal president and the illusion that the U.S. turned a corner after he replaced Bush as President.

As she reminds us, the Twitter – Wikileaks cases occurred on Obama’s watch. The Obama Administration has charged more whistle-blowers (six) than all past presidents combined (three), she offers (also here).

To this, we can add that revisions to the Foreign Intelligence Surveillance Act in 2008 (the FISA Amendments Act) gave retroactive immunity to companies and ISPs such as AT&T and Verizon for the illegal network surveillance activities they conducted under the Bush regime, with few barriers now standing in the way of their continuing in that role under Obama (see here and here).

These concerns are crystallized in the latest Reporters Without Borders Press Freedom Index, which shows that press freedom in the U.S. plummeted from 20th to 47th place between 2010 and 2011. In short, the national security state after 9/11 has not been rolled back but kept intact. Jónsdóttir’s experience, she wants us to know, is not a fluke.

Glenn Greenwald has made a similar case that positions Wikileaks as part and parcel of a new kind of journalism that mixes crowd sourcing, the internet and professional journalism. After a recent talk in Ottawa co-hosted by the National Press Club, he also mentioned that Wikileaks had invited journalists to use its material long before all hell broke loose in 2010, but it was the lure of exclusive access in their respective home markets that finally enticed The Guardian, the New York Times, Der Spiegel, Le Monde and El Pais to the table.

In other words, it was the pull of exclusive rights and private profit, not a good story, that brought the press to Wikileaks’ table, and it into the journalistic fold. And seen in that light, Wikileaks serves as a much-needed corrective to lazy and cautious journalism.

Harvard University law professor Yochai Benkler makes a similar case but in a much more systematic and constitutionally grounded way. He also shines a light on how the network free press is being subjected to death by a thousand legal and extra-legal cuts, when what we need is a strong press to counter the power of the strong state if democracy has a hope in hell of surviving, let alone thriving.

Benkler argues that attempts to bring Wikileaks to heel have involved a dangerous end run around Constitutional protections for the networked fourth estate, i.e. the First Amendment. To exemplify the point: pressure from Senator Joe Lieberman, Chair of the Senate Committee on Homeland Security and Governmental Affairs, led webhosting provider Amazon, domain name supplier everyDNS and financial payment providers (Paypal, Visa, Mastercard) in December 2010 to withdraw internet and financial resources that are essential to Wikileaks’ operations.

While government actors are prevented from such actions by First Amendment protections for the press, Lieberman did an end run around such Constitutional restraints by using commercial actors who were, Twitter aside, all too willing to serve the state on bended knee, along with a campaign to denigrate Wikileaks’ journalistic standing. Such actions eliminated the routine financial channels (Paypal, Visa, Mastercard) through which an estimated 90 percent of Wikileaks’ donor funding flowed, and forced the site to scramble to find a new domain name provider and webhosting site.

Now of course, some argue that Wikileaks has nothing to do with journalism and the free press. They are wrong.

Remember, it set the global news agenda repeatedly in 2010, mostly by working hand-in-glove with the world’s leading newspapers to edit and publish stories. It has won oodles of journalism awards before and after these events, as the following partial list shows: Economist – Index on Censorship Freedom of Expression award 2008; Amnesty International human rights reporting award (New Media), UK 2010; Human Rights Film Festival of Barcelona Award for International Journalism & Human Rights, 2010; International Piero Passetti Journalism Prize of the National Union of Italian Journalists, Italy 2011; Voltaire Award of the Victorian Council for Civil Liberties, Australia 2011; Readers’ Choice in Time magazine’s Person of the Year (Julian Assange) 2011. The honorifics bestowed on the “Twitter – Wikileaks Four” by The Guardian, also referred to earlier, add yet another.

Awards are nice, and the recognition helps to bestow legitimacy, Jónsdóttir observes, but the real key is to keep pushing the envelope. To that end, she updated us on the Icelandic Modern Media Initiative (IMMI) that she and others have spearheaded since the initiative’s birth in the Icelandic Parliament in July 2010. IMMI is, in short, a “dream big” project designed to make Iceland a digital free media haven where whistle-blowers are protected by the highest legal standards in the world and the value of net neutrality is formally incorporated into the country’s new Constitution, which now awaits Parliamentary ratification.

Thus, as she rails against powerful forces on the global stage, Jónsdóttir is helping to build in Iceland a model of information rights, privacy and free speech for the world.

These are important things, she says, because they are all about our history and about making democracy fit for our times. In terms of history, and reaching for the right words, she points to the importance of Wikileaks as

“part of the alchemy of what is going on in the world. . . . The Iraq and Afghan war logs changed how people talk about the wars. It has provided us with a very important part of our record, our history”.

As for democracy, “voting every four years is absolutely not democracy, it is just a transfer of power”, Jónsdóttir exclaims as our conversation draws to a close. Of course, the rule of law, an open internet, and fighting against the strong state are essential, but these are abstractions unless they are made personal and concrete.

Hmmm, the battle for the open internet and network free press, now it’s personal. That seems like a great way to think of Birgitta, and our long afternoon together last week.

Should ISPs Enforce Copyright? An Interview with Prof. Robin Mansell on the UK Case

Should Internet Service Providers (ISPs) be legally required to block access to websites that facilitate illegal downloading and file sharing, or to cut off the Internet connections of those who use such sites?

In Canada, the answer is no, and recently proposed legislation expected to be re-introduced soon, Bill C-32, the Copyright Modernization Act, would not change this state of affairs, despite all the other flaws it might have (see here for an earlier post on the proposed new law).

That’s not the case in a growing number of countries, notably the United Kingdom, New Zealand, France, South Korea, Australia, Sweden and Taiwan. Indeed, after pushing hard for the past decade to get stronger, broader and longer copyright laws passed, as well as using digital rights management to lock content to specific devices, in 2008 the IFPI (International Federation of the Phonographic Industry) and the RIAA (Recording Industry Association of America) turned to giving first priority to the idea that ISPs should be legally required to block ‘rogue websites’ and adopt “three strikes you’re out” measures that cut off the accounts of Internet users accused repeatedly of illicitly downloading and sharing copyright-protected content online.

While not formally required by law to do so, Canadian ISPs such as Bell, Rogers, Shaw, Cogeco, Telus, Quebecor, etc. have agreements with the recorded music industries and other “copyright industries” to disable access to illicit sites. Moreover, their Terms of Service/Acceptable Use Policies explicitly state that they reserve the right to do just this.

Exactly what the conditions are, and how often they are used, well, who knows? The arrangements, as I just said, are informal – something of a black hole rather than an open Internet, essentially.

As Rogers’ Acceptable Use Agreement explicitly states, for example:

“Rogers reserves the right to move, remove or refuse to post any content, in whole or in part, that it, in its sole discretion, decides . . . violates the privacy rights or intellectual property rights of others” (“Unlawful and Inappropriate Content” clause; also see Bell’s Acceptable Use Policy, p. 1).

So, it is not that Canada is some kind of “free Internet” zone, but rather one where the terms are set privately by ISPs (our major TMI conglomerates) and the “content industries”. This seems like a really bad idea to me.

The UK adopted an even worse approach, however, by giving such measures the force of law when it passed the Digital Economy Act in 2010, a law that was sped through Parliament in near-record time (i.e. 2 hours of debate) after incredible levels of lobbying from the music, film and broadcasting industries (see here). Two major UK ISPs, BT and TalkTalk, have fought these measures tooth and nail, but have suffered a series of defeats in the courts.

I recently spoke with Professor Robin Mansell, who took part in these proceedings as an expert witness on behalf of BT and TalkTalk. Her experience sheds much light on the potential impact of these measures on the evolution of the Internet and Internet users. I also asked her about the tricky role of academics in such cases, given that being an expert witness essentially bars you from discussing details of the case, a position that obviously clashes with academics’ obligation to make knowledge public.

Professor Mansell is a Canadian who completed her Ph.D. at Simon Fraser University. She is a Professor of New Media and the Internet at the London School of Economics, where she was Head of the Media and Communications Department (2006-2009). She has been a leading contributor to policy debates about the Internet, the Information Society, and new information and communication technologies. She was also President of the International Association for Media and Communication Research (IAMCR) (2004-2008) and has served as a consultant to many UN agencies as well as the World Bank. You can learn more about her here.

Although the Court of Appeal rejected BT and TalkTalk’s challenge to the Digital Economy Act in June, several other developments in the UK since May have kept the issues on a high boil and still unresolved:

  1. The Hargreaves Report published in May was scathing of the lack of evidence underlying the development of copyright policies, and how “lobbynomics” rather than evidence has been driving the policy agenda (for an earlier blog post on the report, see here);
  2. Another High Court decision in July required BT and other ISPs to block access to the site Newzbin;
  3. The Government decided to adopt all of the proposals in the Hargreaves Report in August;
  4. The measures in the Digital Economy Act requiring ISPs to block illegal file-sharing sites were put on hold in August after a report by the British telecom and media regulator, Ofcom, found that the measures would be unworkable (also here).

Dwayne: How did you become an expert witness in the BT/TalkTalk challenge to the Digital Economy Act? And who was backing the adoption of these measures?

Professor Mansell: I was invited by BT’s Legal Division to do so.  They came to me on the recommendation of another academic who was serving as an advisor to the regulator, Ofcom, and so could not do it for conflict of interest reasons.  They also invited Prof. W. Edward Steinmueller, University of Sussex, to work with me, since he is formally trained as an economist and could take on the ‘copyright economist’ from the US who was expected to appear on behalf of the creative industry actors who have pushed so hard for the law.

The key players arrayed against BT and TalkTalk, in addition to the Government, included the following members of the ‘creative industries’: the British Recorded Music Industry Association, the British Video Association, the Broadcasting Entertainment Cinematograph and Theatre Union, the Film Distributors Association, the Football Association Premier League, the Motion Picture Association, the Musicians Association, the Producers Alliance for Cinema and Television, and Unite. The Open Rights Group, somewhat similar to Open Media in Canada, also filed an intervention that, essentially, supported BT and TalkTalk’s position, but from a basis steeped more in open Internet values than mainly business considerations.

I have training in economics, but no formal degree in it, as mine are in Communication (Political Economy) and Social Psychology. As far as we know, we were the only academics hired by BT/TalkTalk to participate in the High Court Judicial Review of the Digital Economy Act (DEA) 2010.

We realised we would be bound by confidentiality once we signed on.  In the UK, our initial report challenging the measures set out in the Act came into the public domain after the judgement, but not the evidence submitted by the creative industry players against the BT/TalkTalk case or our rebuttal to that.

We had both worked and published on issues of copyright before and felt that there was a chance that the Judge might rescind the Act – a small one, but we thought it worth trying. This was the only way we could see that the provisions of the Act might be overturned, since it had got on the books in the last days of the previous Labour Government.

In the event, the Judge decided that the DEA should be implemented for two main reasons: 1) there is no empirical evidence of what its impact will be from anyone’s perspective – just claims and counterclaims; 2) it is for Parliament to decide how copyright legislation balances the interests of the industry and of consumers/citizens, not for the courts. BT/TalkTalk appealed the decision and lost again.

Dwayne: What implications does the most recent court setback have for principles of open networks/network neutrality, copyright, privacy and user-created content (UCC)?

Robin: The central issue in this case was whether the ‘graduated response’, or ‘three strikes you are out’, strategy being lobbied for by the creative industries to curtail online P2P file-sharing that infringes copyright is a disproportionate response to file-sharing practices that are ubiquitous. Another issue was whether the implementation of the measures by ISPs (with a court order) is likely to have a chilling effect on the way people use the Internet.

From the copyright industry point of view, the central issue was whether the government and ISPs would support their argument that this strategy is essential to their ability to stem the losses they are experiencing in the music, film and broadcast programming sectors which they attribute to infringing downloading by individual users – and more importantly to enable them to recover the lost revenues, or at least some of them. The creative industries players argued that it was essential for ISPs to play an active role in stemming the tide of copyright infringement.

The bigger issue of course is whether P2P file sharing is simply indicative of one of many ways in which Internet users are finding creative ways of producing and sharing online content in a ‘remix’ culture where the norms and standards for good behaviour online have changed enormously and with little evident regard amongst some Internet users for existing copyright provisions. In the face of these changes, the incumbent creative industry companies are seeking ways of extending their control over the use of copyrighted digital information in many ways, just one of which is stronger enforcement of copyright legislation which currently makes it illegal to copy even for non-commercial purposes of private use and creates a narrow window for licensing for educational use.

BT/TalkTalk framed the issues mainly in terms of the threat to their own business interests in terms of reputational and financial costs if they are required to divulge the names of their subscribers to the creative industry firms (albeit with a court order) when they are accused of infringing copyright.

We framed the issues in four ways:

  1. the disproportionality of the DEA response in light of changing social norms and behaviours online which means that there is little if any evidence that the threat of punishment will change online behaviour;
  2. the disproportionality of the response because it sets a wide net that is very likely to encompass those who use ISP subscribers’ access to the Internet (family, friends, users at work, in public places, etc.) for purposes of which the subscribers themselves have no knowledge;
  3. the lack of disinterested evidence on industry losses and revenue recovery since all the quantitative evidence is based on creative industry data or on studies which are flawed in terms of methodology; and
  4. the implications for trust and privacy when Internet users are being monitored for this purpose.

In this specific case, the arguments did not tip over into debates about network neutrality, but they easily could have. The techniques that are used to monitor subscriber online activity go in the direction of the same deep packet inspection techniques that also enable ISPs to discriminate among different types of Internet traffic.

However in this case, they were only being asked to provide subscriber information based on the monitoring performed by firms hired by the copyright industry firms themselves to monitor spikes in volume and the sites from which downloading occurs. This doesn’t go directly to what ISPs themselves are doing or not doing with respect to monitoring types of traffic, so technically isn’t about network neutrality. The ultimate effect, however, is not all that dissimilar.

Dwayne: You have mentioned for two years running now during talks at IAMCR that the role of ‘expert witness’ is a double-edged one, on the one hand allowing scholars a seat directly at the table while on the other hedging about the scholar’s role with all kinds of requirements about the nature of the facts and evidence that can be submitted, non-disclosure agreements, etc.

Can you elaborate a bit more on this conundrum? What would be your advice to those torn between the ‘expert witness’ and ‘activist’ scholar role?

Professor Mansell: This issue is always on my mind!  The role of an ‘expert witness’ in a court case can vary a lot depending on the jurisdiction. In the UK you can end up knowing quite a lot more as a result, but you also cannot write about it in an academic way because you cannot cite the sources which remain confidential even after the case is over. After the case is over of course you can argue as you wish retrospectively, but then ‘the horse has left the barn’.

Another issue is the problem of what counts as evidence. The courts look for some kind of irrefutable quantitative evidence. Failing that, they look for persuasive theoretical arguments about how the world ‘might be’, overlooking the unrealistic assumptions about how economic incentives work in the market, or they look for generalisations from fairly flimsy empirical studies about what mainly US college students report about their own copyright-infringing behaviour and future intentions.

The problem for the ‘expert witness’ is that while it is possible to refute the assumptions of theory and poorly conceived methodologies, it is not possible (usually) to present quantitative empirical evidence that is any more robust because it simply doesn’t exist.  It is possible to present good arguments (based on political economy, sociological or cultural analysis of changing norms, market structures and dominant interests, and power relations).  But if you know that the Judge is likely to be persuaded mainly by the economics arguments, one is not going to get very far.

Thus, the question arises as to why enter the fray in the first place? Why not work as an activist or work as an academic to influence the policy makers directly before the legislation gets on the books?

Both routes are needed, but time constraints often mean that they are hard to achieve in a consistent way. And of course interacting continuously with policy makers raises its own challenges. Not the least of these is that they are setting the agenda and are already echoing the prevailing view that the balancing of interests in copyright protection is clear and unproblematic. It is a real uphill battle to depart from this view – and there is a strong likelihood that the door to the room or corridor where policy decisions are made will be shut.

In the case of copyright enforcement and the UK judicial review of the DEA, there are critical scholars in the community who could have been taken on by BT/TalkTalk and who are likely to have promoted the view that the whole of the copyright regime needs to be dismantled in favour of an open commons; they were not invited to participate by those setting the terms of engagement.

The Open Rights Group did participate in the judicial review as an intervener and their argument was quoted by the Judge, but this didn’t alter his view it seems.  In terms of the academic evidence, he basically said that this was a complex issue which should not be put before the courts.

Dwayne: The Court dismissed the challenge to the Digital Economy Act, finding that it was entirely within the purview of the UK Parliament to pass laws of this kind and to strike the balance between the competing interests in the way that it did. You described this as a total loss. Can you explain why and what the implications might be?

Professor Mansell: I think I said this because the Government claimed that the DEA is aimed at balancing legitimate uses of the Internet and freedom of expression against the costs of implementing technical sanctions against Internet users, assuming authorisation by the courts.

The Court accepted our argument about the ambiguity of the results of empirical studies of online user intentions and behaviours with respect to copyright infringement. It also accepted the argument that Internet users may take steps to avoid legal liability resulting in a chilling effect on the development of the Internet. But, it did not accept that such an effect would exceed the benefits of enhanced copyright protection.

Ultimately, it left it to Parliament to decide the appropriate weighing of the interests of the creative industries and Internet users, which the Government claims has already been done in the legislation.  So we go round and round …  the DEA enforcement legislation goes ahead and the copyright legislation it is designed to enforce stays in place – a ‘total loss’ (for now till the next round).

Meanwhile the creative industries, as we know, are experimenting with all sorts of new business models in their bid to change the way they raise revenues through the provision of digital content. Perhaps the sheer pressure of mass Internet user activity and infringing downloading will eventually give rise to fairer models – we can wait for this to happen, but it is a shame that the rights of these users are likely to be infringed and some will be punished for behaviour that one day may be seen as entirely appropriate and even welcomed!

We argued that, in light of uncertainty about the direction of change in social norms and behaviour online, legislation that seeks to suppress P2P file-sharing by bringing legal actions against individual infringers is likely to disrupt, or alter the course of, Internet development in ways that cannot be assumed to be benign. The evidence favours the interests of the rights holders, while the interests of those engaging in infringing file-sharing are downplayed or excluded. This cannot be said to be a proportionate response to the incidence of infringing file-sharing.

Since the judicial review, an independent report commissioned by the Prime Minister (the Hargreaves Report) has emphasised the need for change: better access to orphaned works subject to copyright, copying for private and research purposes, and greater emphasis on the impact of legislation on non-rights holders and consumers. But it still says that the DEA provisions for the ‘graduated response/three strikes you are out’ should go ahead until such time as there may be evidence that it is not working. Again, the harms will already have occurred even if evidence shows that the measures are not working the way the industry claims they will and Internet users continue their infringing downloading activity.

Dwayne: Last question, Robin. Do you think that the recent moves by the UK government to adopt the Hargreaves Report in whole and to put aside ISP blocking requirements change the picture?

Professor Mansell: There is a difference between the provisions in the DEA to go after individual file sharers through the ‘Graduated Response’ tactic, which is going ahead, and the concerns expressed by ministers as to whether they can get ISPs to take down the big enabling sites.  My understanding is that is the issue under discussion.

Some of the other Hargreaves recommendations may well start to go ahead – we will see how quickly, but they do not go to the specific issue of using ISPs to help bring charges against individuals.  


