Posts Tagged ‘Open Internet’

David Wins Against Goliath: CRTC Bolsters “Net Neutrality”, Limits “Zero-Rating” & Strengthens Local TV

Today’s trilogy of CRTC decisions on “network neutrality”, local TV and simultaneous substitution is a huge win for Canadian citizens. The decisions reinforce Canada’s network neutrality regime while backstopping local, over-the-air TV as a viable alternative to cable and satellite and as an important source of news and information.

Of the three decisions, the most important is probably the Mobile TV ruling. It responds to a complaint filed with the Commission by Ben Klass in late 2013 about Bell’s Mobile TV offering, which allows Bell Mobility subscribers to access ten hours of television programs for $5 per month, while watching the same amount of TV on their wireless devices from the CBC, YouTube or Netflix, for example, would cost up to $40 – an 800% difference. The case expanded in early 2014 after the Public Interest Advocacy Centre raised concerns about Rogers’ and Videotron’s Mobile TV services on much the same grounds, and the CRTC then wrapped the complaints into one proceeding. Today’s major decision supports Klass’s and PIAC’s claims.
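To make the pricing gap at the heart of Klass’s complaint concrete, here is a quick sketch of the arithmetic using the dollar figures cited above (the variable names are mine, purely for illustration):

```python
# Illustrative arithmetic behind the Mobile TV pricing gap.
# Figures are the ones cited in the complaint: $5/month for 10 hours of
# Mobile TV vs. up to $40 for the same viewing billed as regular data.
mobile_tv_price = 5.00      # Bell Mobile TV: 10 hours for $5 per month
regular_data_price = 40.00  # same 10 hours of video billed against data caps

markup = regular_data_price / mobile_tv_price   # 8x the zero-rated price
percent_of_zero_rated = markup * 100            # the "800%" figure cited above

print(f"Regular data costs {markup:.0f}x the Mobile TV rate "
      f"({percent_of_zero_rated:.0f}% of the zero-rated price)")
```

In other words, the “800% difference” is the regular data price expressed as a multiple of the zero-rated Mobile TV price.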

In each case, watching television programs delivered over the internet on your mobile device from sources outside one of the carriers’ TV packages counted towards your data caps, while those inside their Mobile TV offerings did not.

Recognizing that they were likely fighting a losing battle, Rogers folded on the case last summer and Videotron began to phase out its preferentially priced Mobile TV service at the end of 2014. Bell soldiered on, however, claiming that despite being delivered over the internet and the same wireless networks as any other data, video, voice or internet services that subscribers might use, its Mobile TV service was not a telecom or internet service at all.

According to Bell, its Mobile TV service is a broadcasting service, and thus outside the reach of the charges that Klass and PIAC raised. Moreover, far from this being a bad thing, its Mobile TV service is making substantial contributions to the policy aims of the Broadcasting Act, Bell argued.

The CRTC’s decision resolutely rejects that claim. While the decision refers to both Bell’s and Videotron’s Mobile TV services, because the latter has been phasing out the version of its service in question since the beginning of the year, the biggest impact of the decision will fall on Bell.

With respect to whether Mobile TV services are telecommunications or broadcast services, the Commission was crystal clear:

Bell Mobility and Videotron are . . . providing telecommunications services in regard to the transport of their mobile TV services to subscribers’ mobile devices, and are therefore subject to the Telecommunications Act (para 35).

In addition, the Commission is clear that far from being a good thing for Canadians and the aims of the Broadcasting Act, the services work a:

 . . . disadvantage to consumers in accessing other Canadian programs on their mobile devices, and . . . could not be said to further these [the Broadcasting Act] objectives (para 60).

Furthermore, Bell and Videotron’s claims about their Mobile TV services being good for Canadians lacked “quantifiable evidence to back the magnitude of those claims” (para 39).

Having found that the key issues revolved around telecommunications, the CRTC then turned to the heart of the matter: were the carriers giving their own Mobile TV services an advantage and, if so, was that advantage unreasonable? Again, the Commission is unequivocal. By charging one rate and exempting their own services from their data caps while charging much higher rates and applying data caps to all others, Bell and Videotron are giving themselves an unfair advantage.

Here’s the centerpiece of the decision in this regard:

Bell Mobility and Videotron, in providing the data connectivity and transport required for consumers to access the mobile TV services at substantially lower costs to those consumers relative to other audiovisual content services, have conferred upon consumers of their services, as well as upon their services, an undue and unreasonable preference, in violation of subsection 27(2) of the Telecommunications Act. In addition, they have subjected their subscribers who consume other audiovisual content services that are subject to data charges, and these other services, to an undue and unreasonable disadvantage, in violation of subsection 27(2) of the Telecommunications Act (para 61).

Crucially, in making this decision, the CRTC saw the issues raised by Klass and PIAC as something of a litmus test, one whose resolution would hold much in store for the evolution of the internet and the converging media ecology. Again, as it says,

the preference given in relation to the transport of Bell Mobility’s and Videotron’s mobile TV services to subscribers’ mobile devices, and the corresponding disadvantage in relation to . . . other audiovisual content services available over the Internet, will grow and will have a material impact on consumers, and other audiovisual content services in particular. . . . [I]t may end up inhibiting the introduction and growth of other mobile TV services accessed over the Internet, which reduces innovation and consumer choice (para 58).

In short, the decision responds to current realities while looking to the future. It took the opportunity delivered up to it by a hard-working and careful student, Klass, and the additional effort by PIAC, to nip a problem in the bud. The fact that the case raised complex issues for today as well as for the years, even decades, ahead also helps explain why this decision was more than a year in the making rather than the usual four months or so.

The Mobile TV decision effectively limits zero-rating in Canada, a practice whereby carriers exempt some internet content services (sometimes in exchange for payment) from their data caps. Doing so reinforces Canada’s strong “network neutrality” rules and places it shoulder-to-shoulder with other countries where zero-rating has been banned (e.g. the Netherlands, Chile) or discouraged and not practiced by wireless companies (e.g. Norway, Finland, Sweden, Estonia, Lithuania, Latvia, Malta and Iceland).

The upshot is an unambiguous win for strong “network neutrality”/open internet rules, including their application to wireless internet access. As CRTC chairman Jean-Pierre Blais put it, “the Mobile TV decision is all about Canadians having fair & equal access to content of their choice on internet. There will be no fast lanes & slow lanes”. It is about keeping control over what people access through the internet in their hands, not under the editorial control of ISPs and telecom companies.

Three other things about the Mobile TV decision stand out.

First, it’s a message that Canada can send, with love, to the United States as the FCC gets set to decide, in the next month, many of the same issues as it replaces its relatively weak ‘Open Internet’ principles that were tossed out by the courts in last year’s Verizon decision. With strong encouragement from Obama, the FCC is widely seen as leaning toward reinstating Title II common carrier classification for all broadband internet access providers – wireline, cable, wireless – and restricting zero-rating practices. That would reverse decisions taken in 2002, 2005 and 2007 under the Republican-controlled FCC that redefined high-speed internet access by cable, DSL and wireless as ‘information services’ and, thus, beyond the reach of the regulator.

Second, the CRTC decision rests entirely on the common carriage principle at the heart of the Telecommunications Act (namely section 27 and, to a lesser extent, section 28) rather than its so-called network neutrality rules. This is a good thing because it returns the politics of the internet to sturdier ground, i.e. the centuries-old and battle-tested grounds of common carriage rather than the woollier notion of network neutrality.

Third, the concurring opinion of CRTC Commissioner Raj Shoan at the end of the decision is a must-read. His ruminations on the ‘cone of silence’ around the issues raised by the Mobile TV proceeding remind us that, in an industry dominated by a handful of massive vertically integrated companies that control access to distribution networks, content and audiences, a pervasive fear seems to have settled in amongst independent TV broadcasters, creators and others, a fear that appears to have kept them from stepping forward. It reminds us that the Canadian media industry is a tight and closed, if not so cozy, community where independent voices step forward at their own peril.

As Shoan observes, when “students, not-for-profits and charities have to contend against the deep pockets of large, national, vertically integrated entities in order to bring to light relevant issues of public interest without the support of affected parties (i.e. Canadian broadcasters)”, we are in trouble. The CRTC looked that reality squarely in the face today and made three bold decisions that go some way toward addressing the issues. We can be thankful to smart and interested citizens such as Ben Klass and public interest groups like PIAC for lighting the spark and for all the hard work that led to today’s decision, and to groups such as OpenMedia for keeping these issues in the public eye. For all those who have stood as defenders of the status quo, indeed, often as the incumbents’ mouthpieces, it should be a message.

A few quick words on the other two decisions regarding keeping local television alive and simultaneous substitution.

First, Blais made it clear that maintaining local, over-the-air television is important to Canadians, as citizens, not just consumers. Why? Because that is where many of us still get a great deal of our news and information. Blais did not mince words: the major TV companies have obtained enormous privileges, and it is time for them to meet their obligations. “An informed citizenry cannot be sacrificed on altar of corporate profits & debt reduction”, he intoned, in an implied reference to the steady flow of cut-backs and journalist lay-offs in an industry that has been allowed to bulk up through mergers and acquisitions on the promise that synergies would deliver benefits, not just to the corporate bottom line but to all Canadians. It’s time to deliver.

Local TV also needs to be kept alive because it provides a realistic alternative to cable, satellite and IPTV providers, who have consistently raised prices far in excess of the rate of inflation. This is especially so because, as Greg Taylor, Steven May and others from Ryerson University have made clear, Canada has recently completed the switch-over to digital over-the-air television. The benefits of this now need to be nurtured rather than stillborn at the hands of those whose loyalties are, at best, split between seeing things through and protecting their cable, satellite and IPTV distribution networks. After all, the entities that own most of the local TV stations and the biggest cable, satellite and IPTV companies in Canada are one and the same: Bell, Shaw, Rogers and Quebecor. Brandishing updated bunny ears as a prop, Blais encouraged Canadians to think of over-the-air TV as a viable option that is both free and of higher quality in terms of picture clarity.

There will be no new revenue stream from fee-for-carriage of local TV stations, a cornerstone of Bell’s submission to the Talk TV hearings. However, neither will one of the cornerstones that has supported the commercial viability of local TV since the 1970s be taken away: simultaneous substitution, which allows Canadian broadcasters to substitute their commercials on US signals airing the same programs and carried by cable and satellite companies in Canada. The policy is a massive gift that delivers about a quarter of a billion dollars a year to the industry. This was a “big ask” for Bell, Shaw, Rogers and other television companies and, for all intents and purposes, they got it today from the CRTC, with the exception of the Super Bowl starting in 2016.

In sum, today’s CRTC decisions are bold. They send a clear message in support of an open internet, broadly interpreted to cover mobile wireless, cable and wireline networks. TV is not dead; in fact, the evolution of TV and the internet are fundamentally intertwined and need to be thought of as such. The CRTC’s decisions go a long way toward doing just that. The decisions, in particular the open internet, Mobile TV and future-of-local-TV parts, underscore the decisive role of independent voices and the importance of listening to them, rather than just to incumbents and to far too many scribes (though certainly not all) who think that relaying the views of rival media giants, and of a financial analyst or two, to the Canadian public constitutes ‘balanced’ reporting.

Big New Global Threat to the Internet or Paper Tiger?, Part II: Is the Internet Telecoms?

This is the second in a series of posts that takes a critical look at claims that proposed changes to the international telecommunications regulations (ITRs) at the WCIT meeting later this year could see the ITU establish “international control over the internet”.

My previous post described some of the background to the issues and the three key claims being made: (1) the ITU currently has no role with respect to the Internet but is hell-bent on changing this at WCIT; (2) the ITU is a state-run telecom club; (3) the ITU is a Trojan Horse for a plot by authoritarian states and legacy telcos to impose a new Web 3.0 model – controlled national internet-media spaces – over the open global internet.

I think the claims are overblown. I do not believe that the ITU is intending to, or capable of, taking over the internet. I largely agree with Milton Mueller that most of the changes being discussed are mainly about economics and interconnection rather than internet censorship and control. An article in the New York Times today expressed a similar view.

In contrast to Mueller, though, I think that the ITU already has a legitimate claim to having a say with respect to the internet, and more to the point, it has already been playing such a role through the last dozen years of active participation in the multi-stakeholder model of internet governance.

Mueller argues that the ITU’s most important efforts to stake a claim to the internet terrain – the domain name system (1996), the two phases of WSIS (2002-2005), IP address management (2009-2010), suggestions for a UN Committee on Internet Related Policies (2011) – have all been mostly failures, not least because they have all been staunchly resisted by the U.S. government. As he says, the U.S. government “squashed” an early campaign by the ITU and ISOC to wrest control of the international domain name system from the U.S. “like a bug”.

Two years later, ICANN – a California-based non-profit still dependent on the US government today and increasingly embroiled in high-stakes battles over copyright worldwide (e.g. MegaUpload, Rojadirecta) – was created. Mueller is happy about this state of affairs. I am less so, but am under no illusions that the best path to choose is obvious.

If Professor Mueller is right, however, we might not have to choose. The ITU has no jurisdiction over the internet, he argues, just telecommunications. According to him, this is because, beginning fifty years ago during the FCC’s Computer I, II and III inquiries (c. 1965-2002), the U.S. drew a clear, bright line between telecom-based services (pipes and carriage) and computer-based information services (content and the internet).

The Computer II rules formalized the distinction between “basic” telecoms and “enhanced” information services after protracted struggles over key questions about market concentration in the telecom and information industries as well as the range of services to be delivered by the market versus those considered public goods. Many argue that the new rules were wildly successful, not least in terms of fueling the growth of the Internet. I am inclined to agree but would ratchet down the superlatives, without losing focus on issues of market concentration and the public goods nature of telecom, media and internet goods.

The rules were never straightforward, and they have been mired in political and legal mud ever since their adoption. The Supreme Court’s Brand X ruling in 2005 reaffirmed the rules but, in doing so, basically set the enhanced service designation up as a near-insurmountable barrier to formal net neutrality rules that can be applied to all carriers and ISPs.

The problem that I see with this argument is three-fold. First, it takes U.S. law as the world’s law. U.S. telecom policy, however, is not global internet policy, nor should it be. Moreover, if the basic/enhanced dichotomy has been mired in controversy in the U.S. for a half-century, just imagine its fate at the global level.

Second, the U.S. can slice and dice the definition of telecoms any way it sees fit, but other countries do things differently, and the ITU defines telecommunication very broadly as: “Any transmission, emission or reception of signs, signals, writing, images and sounds or intelligence of any nature by wire, radio, optical or other electromagnetic systems” (Constitution, Annex; ITR, Article 2.1). A plain reading of the definitions suggests that it includes the internet, which in fact is the view that the ITU and many of its member-states take.

Clearly, though, there is a debate over the scope of that definition, and things will not be solved by recourse to formal definitions but by the politics of language. Those opposed will stand firm against any formal references to the internet in the text of the ITRs, while those on the opposite side will pepper the rules with as many explicit references to the internet as possible. The fact that various members have proposed modifications or additions to at least a half-dozen sections of the ITRs that explicitly refer to the internet has brought these issues to a head.

The most important change to the ITRs is probably the proposal to include a reference to “internet traffic termination” to the existing definition of “International Telecommunications Services” in Article 2.2. Other proposed modifications refer to “VoIP” (Article 3.1, International Network), “Internet traffic and data transmission” as well as to “Internet and Internet Protocol” (Articles 4.2 and 4.3a, International Telecommunications Services, respectively).

Some proposals would also add new references to “international Internet connections” (Art. 3.7), the “internet” in a proposed new section 6.7 related to competition and interconnection issues and “measures to insure Internet stability and security” in 8.4.A (see Mueller on this point as well). References to  “cybercrime”, “data preservation, retention, protection”, “spam”, “identification”, “personal data protection” in new sections of Article 8 also have the internet clearly in their sights. I will examine some of the potential implications of these proposed changes and additions in more detail in the next two posts.

For now, however, my third argument is that things will not turn on the politics of language alone but on the historical and contemporary practices of the ITU as well. In this regard, one thing stands out that I think is determinative: the ITU has taken a broad, evolutionary view of its mandate and morphed with the times since its inception in 1865 after the merger of two predecessor organizations – the Austro-German Telegraph Union (est. 1850) and the West European Telegraph Union (est. 1855) – a point that will become important in the fourth post in this series (see Drake, Introduction).

Originally called the International Telegraph Union, the ITU added telephones to its remit in the 1880s, radio in the early 1900s, and other new telecom technologies as they evolved. Its name was changed to the International Telecommunication Union in 1932 to reflect its broad and evolutionary view of the terrain. Its Constitution, its Decisions, Resolutions and Recommendations (DRRs) and the ITRs make a virtue out of the development and use of new telecom technologies, so it would be a real mystery to find a line drawn in the sand between telecoms before the internet and after, with the ITU confined strictly to the stuff that came before.

More recently, the ITU has been keen to carve out a distinct role for itself in regard to the internet since at least 1996, arguably earlier if we look back to the 1970s and 80s infatuation with ‘super-pipe’ models of integrated broadband media, even if the internet had not yet become a household name. Its guts were nonetheless being put into place. And it is important to note that even the technical guts of the internet were not all made in America, as the paper by Google’s lawyers Patrick Ryan and Jacob Glick states. The UK, France and other parts of Europe were also involved, and the ITU was part of those efforts (Abbate, 1999; Mansell, 1993).

Yet, let’s take 1996 as the starting point because that is when the ITU and ISOC worked hand-in-glove in a bid to shift control over the domain name system from the U.S. to the ITU.  “The U.S. squashed that effort like a bug”, as Mueller states. Two years later, in 1998, the U.S. government created ICANN, where things have rested ever since.

Whereas Mueller sees just a long line of losses confirming that the ITU has no business in the internets of the world, I look past whether or not it has ‘won’ or ‘lost’ vis-a-vis the U.S. to see a long track record of practices that have evolved with the times. Thus, in the case of the internet, two years after the dispute over DNS, the ITU reaffirmed its commitment to cooperating with ISOC and the IETF on global internet policy issues (DRR, Res. 102). It staked out matter-of-factly that it has a role to play “with regard to international public policy issues pertaining to the Internet and the management of Internet resources, including domain names and addresses” (DRR, Res. 102).

The two phases of WSIS between 2002 and 2005 also saw unprecedented participation by academics and civil society groups with the ITU in trying to imagine and map the frontiers of global internet policy. At the end of the three-year process a new entity was born, the Internet Governance Forum (IGF), loosely under the direction of the United Nations, and with the ITU firmly within it alongside the rest of the ‘multi-stakeholder internet governance’ interests (ISOC, IETF, ICANN).

The IGF’s initial five-year experimental period was renewed for another five years in 2010. All of this is important, too, because even if the ITRs do not currently refer to the internet, the ITU’s record of Decisions, Resolutions and Recommendations is chock-a-block full of explicit and expansive references to the internet (see, for example, Resolutions 101, 133 & 179). Looking beyond the ITRs, therefore, we find a track record of language on the internet that maps onto the ITU’s historical involvement with this domain since the late 1990s.

If the ITU has been such a loser with respect to global internet policy, and really has no place in it, as so many have argued (or just assumed) (Ryan & Glick; all but ISOC panellist Sally Wentworth at U.S. congressional hearings on the so-called “International Proposals to Regulate the Internet” last month; etc.), it has been hiding in plain sight. I think a better view of the matter is that, by dint of definition, a long history of evolution and contemporary practices, the ITU has a legitimate role to play in global internet policy.

Whether it exercises this role wisely or badly, however, is a different matter altogether, one we will turn to in the next post.

Next Post: The ITU has been a business and market-dominated institution, not State-controlled, since the 1980s, maybe forever.

Big New Global Threat to the Internet or Paper Tiger?: the ITU and Global Internet Regulation, Part I

Over the past few weeks, a mounting number of commentators in the U.S. have pushed a supposed new threat to an open internet into the spotlight: the International Telecommunication Union (ITU).

According to those raising the alarms, preparations to revise the ITU’s international telecommunications regulations (ITRs) at a meeting this December are being hijacked by a motley assortment of authoritarian countries, legacy telecoms operators, as well as the BRIC (Brazil, Russia, India and China) and other developing countries. Their goal? To establish “international control over the internet”. Indeed, the issue is deemed so serious that congressional hearings on “International Proposals to Regulate the Internet” were held in the U.S. at the end of last month.

There seem to be three main claims behind the charge.

The first is that the ITU currently has jurisdiction over telecommunications, but not the Internet. As a paper by Patrick Ryan and Jacob Glick, two lawyers at Google, asserts, “modifications to the . . . ITRs are required before the ITU can become active in the Internet space”.  Vint Cerf, Google’s “chief internet evangelist”, similarly chastised the ITU’s “aims to expand its regulatory authority to the Internet” in an op-ed piece for the New York Times, and in front of the just-mentioned congressional hearings a week later.

Indeed, according to FCC Commissioner Robert McDowell, the idea that the ITU already has any role with respect to the internet is just nuts. Only pariah governments such as “Iran argue[] that the current definition already includes the Internet”, he asserts.

Milton Mueller more sensibly argues that the line between basic telecom and enhanced information services like the internet, developed in the U.S. over the past half-century and subsequently trampolined onto the global stage during the 1990s, leads to the same conclusion: so far as the ITU’s authority is concerned, basic telecoms are within its orbit; enhanced information services like the internet are out.

Indeed, as Eli Dourado, one of the critics leading the charge, told me from his perch at the Mercatus Centre/Technoliberation Front/Cato Institute in a Twitter exchange the other day, nobody was thinking about the internet back in 1988 when the ITRs were last revised and updated. As a result, he says, “no internet traffic is governed under the original treaty. Right now, 90% plus of global communications are not governed by the ITRs. This would change that”.

In sum, if the critics are right, the ITU’s gambit to draw the internet into its orbit would be a huge change from the status quo. But are they right? I do not think so and will come back to why further below after laying out the two other main criticisms.

The second key focus of critics is that the ITU is a “closed organization” beholden to “state-run telecom monopolies”, as Ryan and Glick say. Fowler calls the proposed changes to the ITRs an attempt to impose “a top-down, centralized, international regulatory overlay [that] is antithetical to the architecture of the Net, which is a global network of networks without borders”.

According to this view, the ITU is a government-dominated, telegraph-era dinosaur that is ill-suited for global internet policy, where markets, private actors and contracts, and a variety of multi-stakeholder interests, including ISOC, ICANN, the IETF, the W3C and other civil society groups, work in ways that are open, consensus-oriented and inclusive. The same point was made by David Gross, former Coordinator for International Communications and Information Policy at the U.S. Department of State and now head of the WCIT Ad Hoc Working Group, which is made up of a who’s who of telecom, media and internet titans: AT&T, Cisco, Comcast, Google, Intel, Microsoft, News Corp., Oracle, Telefonica, Time Warner Cable, Verisign and Verizon.

The secrecy and lack of transparency and civil society participation are the main concerns of open internet advocacy stalwarts such as Public Knowledge, the EFF, the Centre for Democracy and Technology, and ISOC. A letter from CDT and thirty-two other internet advocacy groups calls on the ITU to “Remove restrictions on the sharing of WCIT documents and release all preparatory materials. . . . Open the preparatory process to meaningful participation by civil society . . .; and for Member States, open public processes at the national level to solicit input on proposed amendments to the International Telecommunications Regulations . . .”.

To help speed this process along, a new Wikileaks-style site has also been set up to collect and make available documents leaked by ITU insiders, with some good results already in just the first few days.

The third argument is the “Trojan Horse” argument. From this angle, an ‘axis of evil’ of authoritarian states – Russia, China, Iran, Saudi Arabia, Syria – is using the ITU as a vehicle to turn their closed models of national internet spaces into a global standard. One paper after another points to a smoking gun that supposedly reveals the ITU’s end-game: a transcript of a conversation between the head of the ITU and Russian President Vladimir Putin in which the latter waxes on about the need to establish “international control of the internet through the ITU”.

The model supposedly being ushered onto the world stage through the ITU is not the well-known Chinese model of internet filtering and website blocking, but a new “Trusted Web 3.0”. In the Web 3.0 model, authoritarian states use filtering and blocking techniques to deny access and (1) establish national laws that put such methods on a firm legal footing, (2) carve out a distinctive national internet-media space dominated by national champions (Baidu, Tencent, Yandex, Vkontakte) instead of Google, Facebook and Apple, within which (3) the state actively uses ‘internet-media-communication’ campaigns (propaganda) to shape the total information environment (see Deibert & Rohozinski, ch. 2).

Obviously, if the critics are right, there’s a lot more at stake in the WCIT than just bringing up to date rules last revised in 1988, before the internet was even well known. There is, indeed, much at stake in the proposed revisions, and much that is quite nasty within the rules themselves and in how the ITU itself approaches global telecom and internet policy. Yet, as Mueller notes, while the critics’ focus on internet control and censorship by nasty governments might play well to their base, their claims are overblown and misrepresent the nature of the problems at hand. I agree with Mueller on this point, but also disagree with him on a few significant points, as we will see.

Over the next few posts I will offer, first, a post that lays out my criticism of the critics and, second, another that homes in on proposed changes to the ITRs, as well as elements that look likely to be retained and perhaps expanded, that I think are deeply problematic and a genuine threat, if not to the global internet, then to the people living in countries whose practices would obtain the imprimatur of legitimacy from the ITU if they are accepted at the WCIT in December. Finally, I’ll offer an argument as to why the ITU should be reformed and retained rather than scrapped.

The crux of my criticisms is as follows: (1) the ITU has always had a role with respect to the internet by dint of the expansive definition of telecommunications governing its operations; (2) the battle over whether the ITU’s approach to global telecom and internet policy would be driven by the state or the market was settled decisively in favour of “the market” in the 1980s and 90s; (3) while the ITU has a role in telecom and internet policy, its role has been increasingly neutered by the shift to the WTO and the ‘multi-stakeholder internet governance model’ since the 1990s; and finally (4) the non-binding nature of ITU rules and the principle of national sovereignty underpinning them mean that the ‘axis of internet evil’ cannot use the ITU as a Trojan Horse to impose their Web 3.0 model on the rest of the world.

After I lay out these criticisms, in the next post I intend to dig deeper into the details of the ITU’s Constitution, Decisions, Resolutions and Recommendations, as well as the ITRs and proposed changes to them. I will do so in order to reveal that, in fact, there are aspects of the ITU’s global telecom and internet policy regime that are deeply problematic and, indeed, wholly unworthy of whatever legitimacy might be brought their way by being associated with the ITU and, by extension, the UN.

In this respect, I will home in on: (1) how people’s right to communicate (Art. 33) clashes with rules that allow the state to cut off and/or intercept communication in cases that “appear dangerous to the security of the State or . . . to public order or to decency” (Arts. 34 & 37); (2) proposed changes to the ITRs by the European Telecommunications Network Operators (ETNO) that would legitimize the pay-per model of the internet and thus threaten network neutrality (Art. 3); (3) existing aspects of Article 8 of the ITRs and proposed changes relating to cybercrime, national security, whistle-blowing, user identities and anonymity that are at odds with privacy norms outlined elsewhere in the ITU framework (e.g. Article 37 on the Secrecy of Telecommunications) and that put the interests of the state well above those of the individual.

Finally, I will make an argument as to why the ITU, after thorough-going reforms, is still a useful and desirable organization, building on the following arguments:

(1) it is already working within the ‘multi-stakeholder internet governance regime’ through the Internet Governance Forum established in 2005 and serious questions exist about U.S. hegemony over, in particular, ICANN (as illustrated by the U.S. government’s targeting of domain name resources to cripple Wikileaks, take-down foreign websites accused of violating U.S. copyright laws (see Rojadirecta case) and recent legislative proposals to formalize such tactics in SOPA);

(2) proposed changes adding elements of consumer protection with respect to mobile roaming charges and contracts as well as with respect to evaluating concentration in telecom and internet markets at the global and national level are worthwhile; and

(3) its broader remit reconciles global markets and technology, on the one hand, with broader norms related to the right to communicate, development and other important human rights and freedoms, on the other, norms that are entirely absent from the one-sided, market-driven model of globalization represented by the ITU’s closest counterpart, the WTO.

The Twitter–Wikileaks Cases and the Battle for the Network Free Press, Now It’s Personal: An Afternoon with Birgitta Jónsdóttir

A week-and-a-half ago I met up with Birgitta Jónsdóttir, an activist Icelandic MP and central figure in the Twitter–Wikileaks cases (see earlier posts on the topic here, here, here and here). Passing time on Twitter, I saw she was in Ottawa, sent her a tweet, quickly received a reply and, presto, we met on a Sunday afternoon with a fellow professor from the University of Ottawa, Patrick McCurdy.

Jónsdóttir came to our attention after becoming a target of the U.S. Justice Department’s ongoing investigation of Wikileaks because of her role as co-producer of the video Collateral Murder. The video documents a U.S. Apache helicopter gunning down two Reuters journalists and several others in Baghdad and was released by whistle-blowing website Wikileaks in April 2010. It marked the beginning of the site’s campaign to release what would be the largest cache of US classified material the world has ever seen.

Over the course of 2010, Wikileaks teamed up with five of the world’s most respected news outlets — the New York Times, The Guardian, Der Spiegel, Le Monde and El Pais — to release material that wreaked havoc with the routine conventions of journalism and to set the global news agenda not once but three more times: (2) the release of the Afghan and (3) Iraq war logs in July and October, respectively, and (4) a cache of diplomatic cables starting in late November.

The response from the U.S. Government was ferocious. The search to find the source of the leaks quickly led to the arrest of U.S. Army intelligence analyst, Bradley Manning, in May 2010, and his detention in solitary confinement ever since. Simultaneously, it began shaking down popular U.S. search and social media sites such as Twitter, Facebook, Skype and Google in a bid to gain access to information about people of interest in the Wikileaks investigation.

Birgitta is one of those people, along with Wikileaks front man Julian Assange; Tor developer, activist and Wikileaks volunteer Jacob Appelbaum; and Dutch hacktivist Rop Gonggrijp. Let’s call them the “Twitter–Wikileaks Four”.

Entering this murky world of state secrets, blacked out documents and unnamed internet companies cooperating with electronic surveillance efforts by the state offers a rude slap to anyone who sees the U.S. as a beacon of democracy, human rights and the free press. In fact, such values seem to have wilted with alarming ease in the face of the national security claims surrounding Wikileaks, and Birgitta Jónsdóttir specifically.

Talking to Jónsdóttir gave us a personal look behind the cool, technical view found in legal briefs and court rulings. One of the first things she told us is that she no longer sets foot on U.S. soil on the advice of her lawyers and Iceland’s foreign ministry, despite having diplomatic immunity as a Member of Parliament in Iceland. Still embroiled in the Wikileaks cases, she has recently joined a lawsuit launched by Noam Chomsky, Naomi Wolf, Chris Hedges, Daniel Ellsberg and others to overturn the National Defense Authorization Act on the basis that its vague definition of terrorists threatens to sweep dissidents into its maw, threatening their ability to travel freely in the U.S. and worldwide without fear of being arrested.

That we know much at all about how internet companies have been dragooned into the crackdown on Wikileaks is due to the fact that Birgitta, Appelbaum and Gonggrijp have led the fight, with legal support from the American Civil Liberties Union and Electronic Frontier Foundation, against such activities in the courts of law and public opinion (Assange has kept his focus elsewhere). And it is for this reason that The Guardian newspaper last month put Birgitta, Appelbaum, Twitter’s chief lawyer, Alex MacGillivray, and Assange on its list of twenty “champions of the open internet”.

MacGillivray made the list primarily because only Twitter had the spine to challenge the Justice Department’s “secret orders” (not “court authorized warrants”), whereas all of the other search and social media companies rolled over and shut up. Nor was this a one-time stance. This week Twitter was at it again, pushing to overturn a court order forcing it to hand over information about an Occupy Wall Street activist to the New York Police.

Twitter won a modest victory in January 2011 in the first Twitter–Wikileaks case when it obtained a court order allowing it to tell Jónsdóttir and the others that the Justice Department was demanding information about their accounts as part of its Wikileaks investigation. The victory also opened a bigger opportunity: to discover which other internet companies may have received the Justice Department’s secret orders.

Whatever hope was raised by the first Twitter – Wikileaks ruling was dashed by a District Court ruling in the second case last November, however. The ruling was blunt: users of corporate-owned, social media platforms have no privacy rights.

Using the same logic subsequently applied in the “Occupy Wall Street” case, the court argued that Jónsdóttir et al. had no privacy rights because Twitter, Skype, Facebook and Google’s business models are based on maximizing the collection and sale of subscriber information. Under such conditions, users alienate whatever privacy rights they might otherwise claim. As the ruling put it, Jónsdóttir and her co-defendants “voluntarily relinquished any reasonable expectation of privacy” by clicking on Twitter’s terms of service (p. 28).

With privacy reduced to the measuring rod of corporate business models and a perverse interpretation of its terms of service, Twitter was forced to hand over Jónsdóttir, Appelbaum and Gonggrijp’s account information to the Justice Department: registration pages, connection records, length of service, internet device number, and more.

A last-ditch appeal was made by lawyers at the ACLU and EFF last January to reveal which other internet companies had received “secret orders” from the Justice Department. While no one knows for certain who they are, all eyes are on Google, Facebook and Skype (Microsoft). A decision is expected by the end of June, but Jónsdóttir isn’t holding her breath.

Pausing to reflect on the personal effects of the Twitter–Wikileaks cases overall, however, she remains upbeat rather than downtrodden.

“You have to completely alter your lifestyle. It’s not pleasant, but I don’t really care. . . . It just insults my sense of justice . . . . I would not put anything on social media sites that . . . I don’t want on the front pages of the press.”

Rather than dwelling on the costs to her personally, however, Jónsdóttir is quick to tie these events into a larger, more daunting picture. In doing so, she wants to prick the fantasy of Obama as a great liberal president and the illusion that the U.S. turned a corner after he replaced Bush as President.

As she reminds us, the Twitter – Wikileaks cases occurred on Obama’s watch. The Obama Administration has charged more whistle-blowers (six) than all past presidents combined (three), she offers (also here).

To this, we can add that the 2008 amendments to the Foreign Intelligence Surveillance Act gave retroactive immunity to companies and ISPs such as AT&T and Verizon for the illegal network surveillance activities they conducted under the Bush regime, with few barriers now standing in the way of their continuing in that role under Obama (see here and here).

These concerns are crystallized in the latest Reporters Without Borders Press Freedom Index, which shows that press freedom in the U.S. plummeted from 20th to 47th place between 2010 and 2011. In short, the national security state built after 9/11 has not been rolled back but kept intact. Jónsdóttir’s experience, she wants us to know, is not a fluke.

Glenn Greenwald has made a similar case, positioning Wikileaks as part and parcel of a new kind of journalism that mixes crowdsourcing, the internet and professional journalism. After a recent talk in Ottawa co-hosted by the National Press Club, he also mentioned that Wikileaks had invited journalists to use its material long before all hell broke loose in 2010, but it was the lure of exclusive access in their respective home markets that finally enticed The Guardian, New York Times, Der Spiegel, Le Monde and El Pais to the table.

In other words, it was the pull of exclusive rights and private profit, not a good story, that brought the press to Wikileaks’ table, and it into the journalistic fold. And seen in that light, Wikileaks serves as a much-needed corrective to lazy and cautious journalism.

Harvard University law professor Yochai Benkler makes a similar case but in a much more systematic and constitutionally grounded way. He also shines a light on how the network free press is being subject to death by a thousand legal and extra-legal cuts when what we need is a strong press to counter the power of the strong state if democracy has a hope in hell of surviving, let alone thriving.

Benkler argues that attempts to bring Wikileaks to heel have involved a dangerous end run around Constitutional protections for the networked fourth estate, i.e. the First Amendment. A case in point: pressure from Senator Joe Lieberman, Chair of the Senate Committee on Homeland Security and Governmental Affairs, led webhosting provider Amazon, domain name supplier everyDNS and financial payment providers (Paypal, Visa, Mastercard) in December 2010 to withdraw internet and financial resources essential to Wikileaks’ operations.

While government actors are barred from such actions by First Amendment protections for the press, Lieberman did an end run around those Constitutional restraints by leaning on commercial actors who were, Twitter aside, all too willing to serve the state on bended knee, and by a campaign to denigrate Wikileaks’ journalistic standing. Such actions eliminated the routine financial channels (Paypal, Visa, Mastercard) through which an estimated 90 percent of Wikileaks donor funding flowed, and forced the site to scramble to find a new domain name provider and webhosting site.

Now of course, some argue that Wikileaks has nothing to do with journalism and the free press. They are wrong.

Remember, it set the global news agenda repeatedly in 2010, mostly by working hand-in-glove with the world’s leading newspapers to edit and publish stories. It has won oodles of journalism awards before and after these events, as the following partial list shows: Economist – Index on Censorship Freedom of Expression award, 2008; Amnesty International human rights reporting award (New Media), UK, 2010; Human Rights Film Festival of Barcelona Award for International Journalism & Human Rights, 2010; International Piero Passetti Journalism Prize of the National Union of Italian Journalists, Italy, 2011; Voltaire Award of the Victorian Council for Civil Liberties, Australia, 2011; Readers’ Choice in Time magazine’s Person of the Year (Julian Assange), 2011. The honorifics bestowed on the “Twitter–Wikileaks Four” by The Guardian, referred to earlier, add yet another.

Awards are nice, and the recognition helps to bestow legitimacy, Jónsdóttir observes, but the real key is to keep pushing the envelope. To that end, she updated us on the Icelandic Modern Media Initiative (IMMI) that she and others have spearheaded since the initiative’s birth in the Icelandic Parliament in July 2010. IMMI is, in short, a “dream big” project designed to make Iceland a digital free media haven where whistle-blowers are protected by the highest legal standards in the world and the value of network neutrality is formally incorporated into the country’s new Constitution, which now awaits Parliamentary ratification.

Thus, as she rails against powerful forces on the global stage, Jónsdóttir is helping to build in Iceland a model of information rights, privacy and free speech for the world.

These are important things, she says, because they are all about our history and about making democracy fit for our times. In terms of history, and reaching for the right words, she points to the importance of Wikileaks as

“part of the alchemy of what is going on in the world. . . . The Iraq and Afghan war logs changed how people talk about the wars. It has provided us with a very important part of our record, our history”.

As for democracy, “voting every four years is absolutely not democracy, it is just a transfer of power”, Jónsdóttir exclaims as our conversation draws to a close. Of course, the rule of law, an open internet, and fighting against the strong state are essential, but these are abstractions unless they are made personal and concrete.

Hmmm, the battle for the open internet and the network free press, now it’s personal. That seems like a great way to think of Birgitta and our long afternoon together last week.

Should ISPs Enforce Copyright? An Interview with Prof. Robin Mansell on the UK Case

Should Internet Service Providers (ISPs) be legally required to block access to websites that facilitate illegal downloading and file sharing, or to cut off the Internet connections of those who use such sites?

In Canada, the answer is no, and recently proposed legislation expected to be re-introduced soon, Bill C-32, the Copyright Modernization Act, would not change this state of affairs, despite whatever other flaws it might have (see here for an earlier post on the proposed new law).

That’s not the case in a growing number of countries, notably the United Kingdom, New Zealand, France, South Korea, Australia, Sweden and Taiwan. Indeed, after a decade of pushing for stronger, broader and longer copyright laws, and of using digital rights management to lock content to specific devices, in 2008 the IFPI (International Federation of the Phonographic Industry) and the RIAA (Recording Industry Association of America) turned to giving first priority to the idea that ISPs should be legally required to block ‘rogue websites’ and adopt “three strikes you’re out” measures that cut off the accounts of Internet users repeatedly accused of illicitly downloading and sharing copyright-protected content online.

While not formally required by law to do so, Canadian ISPs such as Bell, Rogers, Shaw, Cogeco, Telus and Quebecor have agreements with the recorded music industry and other “copyright industries” to disable access to illicit sites. Moreover, their Terms of Service/Acceptable Use Policies explicitly state that they reserve the right to do just this.

Exactly what the conditions are, and how often they are used, well, who knows? The arrangements, as I just said, are informal — something of a black hole rather than an open Internet, essentially.

As Rogers’ Acceptable Use Agreement explicitly states, for example:

“Rogers reserves the right to move, remove or refuse to post any content, in whole or in part, that it, in its sole discretion, decides . . . violates the privacy rights or intellectual property rights of others” (“Unlawful and Inappropriate Content” clause; also see Bell’s Acceptable Use Policy, p. 1).

So, it is not that Canada is some kind of “free Internet” zone, but rather one where the terms are set privately by ISPs (our major TMI conglomerates) and the “content industries”. This seems like a really bad idea to me.

The UK adopted an even worse approach, however, giving such measures the force of law when it passed the Digital Economy Act in 2010, a law that was sped through Parliament in near-record time (i.e. two hours of debate) after incredible levels of lobbying from the music, film and broadcasting industries (see here). Two major ISPs in the UK, BT and TalkTalk, have fought these measures tooth and nail, but have suffered a series of defeats in the courts.

I recently spoke with Professor Robin Mansell, who took part in these proceedings as an expert witness on behalf of BT and TalkTalk. Her experience sheds much light on the potential impact of these measures on the evolution of the Internet and Internet users. I also asked her about the tricky role of academics in such cases, given that being an expert witness essentially bars you from discussing details of the case, a position that obviously clashes with academics’ obligation to make knowledge public.

Professor Mansell is a Canadian who completed her Ph.D. at Simon Fraser University. She is a Professor of New Media and the Internet at the London School of Economics, where she was Head of the Media and Communications Department (2006-2009). She has been a leading contributor to policy debates about the Internet, the Information Society, and new information and communication technologies. She was also President of the International Association for Media and Communication Research (IAMCR) (2004-2008) and has served as a consultant to many UN agencies as well as the World Bank. You can learn more about her here.

Although the Court of Appeal rejected BT and TalkTalk’s challenge to the Digital Economy Act in June, several other developments in the UK since May have kept the issues on a high boil and still unresolved:

  1. The Hargreaves Report, published in May, was scathing about the lack of evidence underlying the development of copyright policy and about how “lobbynomics” rather than evidence has been driving the policy agenda (for an earlier blog post on the report, see here);
  2. Another High Court decision in July required BT and other ISPs to block access to the site Newzbin;
  3. The Government decided to adopt all of the proposals in the Hargreaves Report in August;
  4. The measures in the Digital Economy Act requiring ISPs to block illegal file-sharing sites were put on hold in August after a report by the British telecom and media regulator, Ofcom, found that the measures would be unworkable (also here).

Dwayne: How did you become an expert witness in the BT/TalkTalk challenge to the Digital Economy Act? And who was backing the adoption of these measures?

Professor Mansell: I was invited by BT’s Legal Division to do so.  They came to me on the recommendation of another academic who was serving as an advisor to the regulator, Ofcom, and so could not do it for conflict of interest reasons.  They also invited Prof. W. Edward Steinmueller, University of Sussex, to work with me, since he is formally trained as an economist and could take on the ‘copyright economist’ from the US who was expected to appear on behalf of the creative industry actors who have pushed so hard for the law.

The key players arrayed against BT and TalkTalk, in addition to the Government, included the following members of the ‘creative industries’: the British Recorded Music Industry Association, the British Video Association, the Broadcasting Entertainment Cinematograph and Theatre Union, Film Distributors Association, Football Association Premier League, Motion Picture Association, the Musicians Association, Producers Alliance for Cinema and Television and Unite. The Open Rights Group, somewhat similar to Open Media in Canada, also filed an intervention that, essentially, supported BT and TalkTalk’s position, but from a basis steeped more in open Internet values rather than mainly business considerations.

I have training in economics, but no formal degree as mine are in Communication (Political Economy) and Social Psychology.  As far as we know we were the only academics hired by BT/TalkTalk to participate in the High Court Judicial Review of the Digital Economy Act (DEA) 2010.

We realised we would be bound by confidentiality once we signed on.  In the UK, our initial report challenging the measures set out in the Act came into the public domain after the judgement, but not the evidence submitted by the creative industry players against the BT/TalkTalk case or our rebuttal to that.

We had both worked and published on issues of copyright before and felt that there was a chance that the Judge might rescind the Act – a small one, but we thought it worth trying. This was the only way we could see that the provisions of the Act might be overturned since it had got on the books in the last days of previous Labour Government.

In the event, the Judge decided that the DEA should be implemented for two main reasons: (1) there is no empirical evidence of what its impact will be from anyone’s perspective – just claims and counterclaims; and (2) it is for Parliament, not the courts, to decide how copyright legislation balances the interests of the industry and of consumers/citizens.  BT/TalkTalk appealed the decision and lost again.

Dwayne: What implications does the most recent court set-back have for principles of open networks/network neutrality, copyright, privacy and user created content (UCC)?

Robin: The central issue in this case was whether the ‘graduated response’, or ‘three strikes you are out’, strategy being lobbied for by the creative industries to curtail online P2P file-sharing that infringes copyright is a disproportionate response to file-sharing practices that are ubiquitous.  Another issue was also whether the implementation of the measures by ISPs (with a court order) is likely to have a chilling effect on the way people use the Internet.

From the copyright industry point of view, the central issue was whether the government and ISPs would support their argument that this strategy is essential to their ability to stem the losses they are experiencing in the music, film and broadcast programming sectors which they attribute to infringing downloading by individual users – and more importantly to enable them to recover the lost revenues, or at least some of them. The creative industries players argued that it was essential for ISPs to play an active role in stemming the tide of copyright infringement.

The bigger issue of course is whether P2P file sharing is simply indicative of one of many ways in which Internet users are finding creative ways of producing and sharing online content in a ‘remix’ culture, where the norms and standards for good behaviour online have changed enormously and with little evident regard among some Internet users for existing copyright provisions. In the face of these changes, the incumbent creative industry companies are seeking ways of extending their control over the use of copyrighted digital information, just one of which is stronger enforcement of copyright legislation, which currently makes it illegal to copy even for non-commercial private use and creates only a narrow window for licensing for educational use.

BT/TalkTalk framed the issues mainly in terms of the threat to their own business interests in terms of reputational and financial costs if they are required to divulge the names of their subscribers to the creative industry firms (albeit with a court order) when they are accused of infringing copyright.

We framed the issues in four ways:

  1. the disproportionality of the DEA response in light of changing social norms and behaviours online which means that there is little if any evidence that the threat of punishment will change online behaviour;
  2. the disproportionality of the response because it sets a wide net that is very likely to encompass those who use ISP subscribers’ access to the Internet (family, friends, users at work, in public places, etc.) for purposes of which the subscribers themselves have no knowledge;
  3. the lack of disinterested evidence on industry losses and revenue recovery since all the quantitative evidence is based on creative industry data or on studies which are flawed in terms of methodology; and
  4. the implications for trust and privacy when Internet users are being monitored for this purpose.

In this specific case, the arguments did not tip over into debates about network neutrality, but they easily could have. The techniques that are used to monitor subscriber online activity go in the direction of the same deep packet inspection techniques that also enable ISPs to discriminate among different types of Internet traffic.

However in this case, they were only being asked to provide subscriber information based on the monitoring performed by firms hired by the copyright industry firms themselves to monitor spikes in volume and the sites from which downloading occurs. This doesn’t go directly to what ISPs themselves are doing or not doing with respect to monitoring types of traffic, so technically isn’t about network neutrality. The ultimate effect, however, is not all that dissimilar.

Dwayne: You have mentioned for two years running now during talks at IAMCR that the role of ‘expert witness’ is a double-edged one, on the one hand allowing scholars a seat directly at the table while on the other hedging about the scholar’s role with all kinds of requirements about the nature of the facts and evidence that can be submitted, non-disclosure agreements, etc.

Can you elaborate a bit more on this conundrum? What would be your advice to those torn between the ‘expert witness’ and ‘activist’ scholar role?

Professor Mansell: This issue is always on my mind!  The role of an ‘expert witness’ in a court case can vary a lot depending on the jurisdiction. In the UK you can end up knowing quite a lot more as a result, but you also cannot write about it in an academic way because you cannot cite the sources which remain confidential even after the case is over. After the case is over of course you can argue as you wish retrospectively, but then ‘the horse has left the barn’.

Another issue is the problem of what counts as evidence.  The courts look for some kind of irrefutable quantitative evidence. Failing that, they look for persuasive theoretical arguments about how the world ‘might be’, overlooking the unrealistic assumptions about how economic incentives work in the market, or they look for generalisations from fairly flimsy empirical studies of what mainly US college students report about their own copyright-infringing behaviour and future intentions.

The problem for the ‘expert witness’ is that while it is possible to refute the assumptions of theory and poorly conceived methodologies, it is not possible (usually) to present quantitative empirical evidence that is any more robust because it simply doesn’t exist.  It is possible to present good arguments (based on political economy, sociological or cultural analysis of changing norms, market structures and dominant interests, and power relations).  But if you know that the Judge is likely to be persuaded mainly by the economics arguments, one is not going to get very far.

Thus, the question arises as to why enter the fray in the first place? Why not work as an activist or work as an academic to influence the policy makers directly before the legislation gets on the books?

Both routes are needed, but time constraints often mean that they are hard to achieve in a consistent way.  And of course interacting continuously with policy makers raises its own challenges.  Not the least of these is that they set the agenda and already echo the prevailing view that the balancing of interests in copyright protection is clear and unproblematic. It is a real uphill battle to depart from this view – and there is a strong likelihood that the door to the room or corridor where policy decisions are made will be shut.

In the case of copyright enforcement and the UK judicial review of the DEA, there are critical scholars in the community who could have been taken on by BT/TalkTalk and who are likely to have promoted the view that the whole of the copyright regime needs to be dismantled in favour of an open commons; they were not invited to participate by those setting the terms of engagement.

The Open Rights Group did participate in the judicial review as an intervener and their argument was quoted by the Judge, but this didn’t alter his view it seems.  In terms of the academic evidence, he basically said that this was a complex issue which should not be put before the courts.

Dwayne: The Court dismissed the challenge to the Digital Economy Act, finding that it was entirely within the purview of the UK Parliament to pass laws of this kind and to strike the balance between the competing interests in the way that it did. You described this as a total loss. Can you explain why and what the implications might be?

Professor Mansell: I think I said this because the Government claimed that the DEA is aimed at balancing legitimate uses of the Internet and freedom of expression against the costs of implementing technical sanctions against Internet users, assuming authorisation by the courts.

The Court accepted our argument about the ambiguity of the results of empirical studies of online user intentions and behaviours with respect to copyright infringement. It also accepted the argument that Internet users may take steps to avoid legal liability resulting in a chilling effect on the development of the Internet. But, it did not accept that such an effect would exceed the benefits of enhanced copyright protection.

Ultimately, it left it to Parliament to decide the appropriate weighing of the interests of the creative industries and Internet users, which the Government claims has already been done in the legislation.  So we go round and round …  the DEA enforcement legislation goes ahead and the copyright legislation it is designed to enforce stays in place – a ‘total loss’ (for now till the next round).

Meanwhile the creative industries, as we know, are experimenting with all sorts of new business models in their bid to change the way they raise revenues through the provision of digital content. Perhaps the sheer pressure of mass Internet user activity and infringing downloading will eventually give rise to fairer models – we can wait for this to happen, but it is a shame that the rights of these users are likely to be infringed and some will be punished for behaviour that one day may be seen as entirely appropriate and even welcomed!

We argued that, in light of uncertainty about the direction of change in social norms and behaviour online, legislation that seeks to suppress P2P file-sharing by bringing legal actions against individual infringers is likely to disrupt, or alter the course of, Internet development in ways that cannot be assumed to be benign. The evidence marshalled favours the interests of rights holders, while the interests of those engaging in infringing file-sharing are downplayed or excluded. This cannot be said to be a proportionate response to the incidence of infringing file-sharing.

Since the judicial review, an independent report commissioned by the Prime Minister (the Hargreaves Report) has emphasised the need for change favouring better access to orphaned works subject to copyright, copying for private and research purposes, and greater emphasis on the impact of legislation on non-rights holders and consumers. But it still says that the DEA's provisions for the ‘graduated response/three strikes you are out’ approach should go ahead until such time as there may be evidence that it is not working. Again, the harms will already have occurred even if the evidence shows that the measures are not working the way the industry claims they will and Internet users continue their infringing downloading activity.

Dwayne: Last question, Robin. Do you think that the recent moves by the UK government to adopt the Hargreaves Report in whole and to put aside ISP blocking requirements change the picture?

Professor Mansell: There is a difference between the provisions in the DEA to go after individual file sharers through the ‘Graduated Response’ tactic, which is going ahead, and the concerns expressed by ministers as to whether they can get ISPs to take down the big enabling sites. My understanding is that this is the issue under discussion.

Some of the other Hargreaves recommendations may well start to go ahead – we will see how quickly, but they do not go to the specific issue of using ISPs to help bring charges against individuals.  
