
CRTC Zero-Rating Rulings are a Significant Win for the Open Internet: Bolstering Common Carriage, Competition and Cultural Policy

Two rulings by the CRTC the other day constitute a significant win for common carriage (aka ‘net neutrality’), competition, Canadians and cultural policy.

The second of the two rulings found that Videotron's Unlimited Music program runs afoul of Canada's telecoms law. It does so by giving an undue preference to subscribers of the company's highest-tier data plans over the rest of its customers, and to the music services included in its offering (Apple Music, Google Play, Spotify and the like) over those left out but still available over the internet, e.g. the CBC and commercial radio stations.

Combining the lessons of that decision with its 2015 Mobile TV decision (upheld by the Federal Court of Appeal last year), the CRTC took the additional step of developing a general framework that in most cases prohibits carriers from acting like publishers or broadcasters, picking and choosing which content-based services do not count towards your data caps while everything else you use the internet or your mobile phone for does. The framework also bans pay-to-play schemes like the one in the US wherein content providers or in-house affiliates like DirecTV 'sponsor data' so that internet traffic from the use of their content does not count, in this case, against AT&T subscribers' monthly data allotments.

That particular example (and a few others) had caught the eye of the FCC's previous chairman, Tom Wheeler, as a potential violation of common carrier rules, but has since been waved away by Trump appointee Ajit Pai – by fiat rather than any formal proceeding. The CRTC's new Differential Pricing Framework takes a hard stance against pay-to-play schemes because they essentially treat the internet like a glorified cable TV system rather than the public internet, where access is governed by common carrier principles.

Lastly, the CRTC's new Differential Pricing Framework leaves some wiggle room for decisions on the margins with respect to services that escape the Commission's general ban on zero-rated plans. It will base its judgement on whether to grant an exemption on whether such services offer exceptional public interest benefits, are open to any content, app or service provider, are 'content agnostic', have minimal to no impact on the interoperability of the internet's interlocking parts, and are not based on payola schemes (paras 126-129).

Broadband Internet Access and Mobile Wireless Providers are Common Carriers not Publishers (or Broadcasters)

The practices at issue are known as "zero-rating" and are the most recent frontier in the battle over "network neutrality", which I prefer to call common carriage, in line with the more formal terms of telecoms law and history. The decisions by the CRTC last week firm up ISPs' and wireless services' status as carriers rather than broadcasters or publishers, meaning that control and choice should rest in subscribers' hands to the greatest extent possible rather than in those of the companies. In this sense, the rulings are all about power and control, and the fact that the CRTC decided that more power and control should rest with subscribers, content providers and would-be rivals has the incumbents and their cheerleaders raising a ruckus.

The decisions mean that ISPs and mobile wireless providers like Bell, Telus, Videotron, Shaw, Teksavvy and Rogers generally cannot pick and choose which services, content and apps won't count toward your monthly data caps and which will. While the rulings do not add much that is new to the landscape, they do clarify the rules of the road and aim to head off a regulatory game of whack-a-mole as ISPs and wireless companies try to skirt the principle of common carriage: that those who control the medium should not control the messages that flow through it. Put all this together with the Telecommunications Act's rules outlawing unreasonable discrimination between both users and services, the CRTC's network neutrality rules, and last year's Federal Court of Appeal ruling that upheld the Commission's 2015 Bell Mobile TV decision, and yesterday's decisions both strengthen and clarify the net neutrality regime in Canada.

No Such Thing as a Free Lunch

All the big incumbent telcos and ISPs except Rogers, along with their hangers-on from the consultancy world, argued that banning zero-rating denies consumers access to 'free stuff' that they like a lot while undercutting a useful tool that can encourage affordability and adoption of the internet. It also, they say, removes a key source of innovation and competitive rivalry (see, for example, Trump's FCC Chair appointee Ajit Pai, Mark Goldberg, Roslyn Layton).

Such advocacy, however, is better seen as an attempt to wrap commercial aims in noble public interest garb. The Commission gave short shrift to their claims, and for good reason: programs like Videotron's Unlimited Music and Bell's Mobile TV are aimed at subscribers on the companies' most expensive data plans rather than promoting affordable internet access and wireless services for those most likely to need the help. Moreover, there are better ways to deal with the issues, including fostering more competition and defining broadband internet access as a basic service so that more forceful regulatory and policy steps can be taken to meet such goals if "the market" fails to do so. The CRTC did just that late last year when it defined broadband access at speeds of 50 Mbps down and 10 Mbps up as a basic service, and this decision needs to be seen in that context (paras 68-70).

The Peculiar Structure of the Communications and Media Industries in Canada Requires Strong Common Carrier Rules

The claim that zero-rated services give people 'free internet' also collides with the fact that ISPs and mobile wireless operators that do not use zero-rating tend to have significantly more affordable subscription prices and data allowances that are, on average, twice as high as those of operators that do use zero-rating (Rewheel, 2016).

Prices also tend to be even higher and data caps even lower where vertical integration and diagonal integration are extensive (i.e. vertical integration is when a firm owns the network as well as content services that rely on it, while diagonal integration is when a firm that owns a wireline network also owns a wireless network). This is of special importance to Canada given the very peculiar structure of its communication markets.

Concentration levels in broadband internet access and mobile wireless markets around the world tend to be “astonishingly high”. This is true in Canada too.

The extraordinarily high levels of vertical and diagonal integration in Canada, however, are what put us in a league of our own and, crucially, beget the need for especially tough common carrier rules. Take, for example, the fact that Bell, Rogers, Shaw and Quebecor's Videotron own all the main commercial TV services in Canada (185 in total). Add in Telus, which is not vertically integrated, and the top five players in Canada account for nearly three-quarters of the broadly drawn network media economy (see here and here).

In addition, the last stand-alone mobile wireless operator in Canada, Wind, was acquired by Shaw last year and made a branch of that vertically- and diagonally-integrated giant. This matters a great deal because where there are stand-alone or wireless-centric operators like T-Mobile and Sprint in the US, or 3, Free, Tele2 and Play in Europe, data plans tend to be more affordable and to have data allowances that can be six to ten times as high as those of their vertically- and/or diagonally-integrated counterparts!

In short, the market in Canada is structurally biased toward carrier control, high subscription prices and low data caps. We need especially tough rules to deal with these exceptional conditions. The CRTC's general ban on content-based zero-rating services addresses these realities head on and pushes back against them, but it stops short of addressing the issue of data caps directly, as some groups like Open Media advocated (paras 40, 56-58).

The CRTC Just Says No to ‘Balkanizing the Net’

As the CRTC's rulings also observe, zero-rated services can impose significant costs on other content, app and service providers, who must meet the technical design specs and other administrative criteria of zero-rating platforms (paras 41-43). Even Facebook experienced long delays in designing its offering for T-Mobile's Binge On service in the US. News media have had similar experiences with Google's AMP and Facebook's Instant Articles platforms. The lesson of those experiences is clear: while such platforms are theoretically open to all, only the biggest players tend to be able to incur the substantial expense needed to design their services for them – and to walk away if things don't work out as hoped.

Facebook's Instant Articles platform illustrates the point. It is chock-a-block full of the biggest news organizations in Canada and the world, such as the CBC, Postmedia, the New York Times, Wall Street Journal, Financial Times, Guardian, etc. New ventures like Canadaland, iPolitics, the Tyee, etc. are conspicuously absent (see here and here). Yet, even after expending much time, money and expertise, the New York Times, Vice, the Los Angeles Times, Forbes, the Chicago Tribune, and several Hearst publications have walked away from Facebook's Instant Articles platform due to lackluster results and the perceived loss of control over their content, audience data and revenue (see here). In short, while advocates tout zero-rated plans as being pro-innovation, pro-competition and pro-consumer – and basically "free" – they are anything but. The fact that they feature the biggest brands suggests that they reinforce the power and control of both blockbuster brands and the platforms that host them.

As the CRTC’s ruling observes, having to negotiate deals with and design services to meet the technical requirements of multiple ISPs’ zero-rated platforms across Canada would impose heavy burdens on content creators. It would also insert a new gatekeeper between them and audiences, using the seduction of “free stuff” to influence people’s selection of content, apps and services in ways that steer them away from the general internet towards the companies’ own offerings.

The lure of free, in other words, would tilt the field in favour of walled gardens built around proprietary standards and against the public internet based on common protocols (e.g. TCP/IP, HTML, etc.). Indeed, this is why some of the cultural groups took a stance against zero-rated plans, including, and unusually, l’Association Québécoise de l’Industrie du Disque, Du spectacle et de la Vidéo (ADISQ), and the Independent Broadcast Group (paras 37, 43, 53-58).

Common Carriage is Good for Culture (Policy)

The ruling is not just good for common carriage and competition but for Canadians and cultural policy. While the Canadian Media Producers Association and the CBC called upon the CRTC to use zero-rating to promote Canadian content, its swift rejection of that idea rests on the principle that communication networks should not be tied to the pursuit of such goals. The ruling's hat-tip to people's privacy, and the concerns it raises about how zero-rated plans could discourage the use of virtual private networks, further point to values and uses of the internet that are rooted in the culture of people's everyday lives versus the 'systems-control' and cable TV-centric model of cultural policy that has prevailed for much of the past century (paras 106-113).

Competition Should Be Based on Substantive Factors vs Marketing Gimmicks

While the strict limits on content-based zero-rated plans apply to all ISPs and wireless carriers, the decision's biggest loser is likely Videotron. It was the complaints filed by Jean-Francois Mezei and PIAC against its Unlimited Music program that kicked off the review to begin with, and to them we can be thankful. Some, however, worry that this outcome could undermine the more competitive wireless market Videotron's presence has fostered in Quebec.

That Videotron has spurred on greater competition in the province, there is no doubt. However, as the CRTC's decision makes clear, rather than relying on marketing gimmicks like zero-rating, Videotron and its competitors should compete on price, speed, data cap size, quality of service, network security and privacy. Such gimmicks obscure the fact that, operating in highly consolidated markets, the big telcos and ISPs don't like to compete on price, preferring instead to maintain fat profit margins in the 20-40% range (with Videotron and Wind, now renamed "Freedom", at the low end and Rogers, Bell and TELUS at the other). Either way, their profits are two to four times the average for Canadian industry – which is itself a proxy for their dominant market power (paras 47-59).

Moreover, the CRTC and government policy have already tilted the regulatory framework in the companies' favour by refusing to mandate wholesale access to their mobile networks for MVNOs (mobile virtual network operators) and, some fear, in the details of the wholesale access regimes that are currently being cobbled together for fibre-to-the-doorstep internet access and mobile wireless. Why should policy makers put their thumbs on the scales even further by sacrificing essential principles like common carriage, freedom of expression, privacy and a new approach to cultural policy to the incumbents' desire to skirmish with one another on the margins using marketing gimmicks like zero-rating rather than in a full-out battle for minds and market share?

Some also worry that the CRTC's decision throws Videotron's customers under the bus. Yet, as the decision makes clear, it is only the high-end subscribers with access to the Unlimited Music program who will feel the pain. Moreover, the bottom line is that unduly discriminating between customers is against the law. The CRTC not only found Videotron to be offside with respect to the long-standing undue preference rules of the Telecommunications Act and the other underpinnings that have come to define network neutrality in Canada, it also held up Videotron's Unlimited Music service as precisely the kind of content-specific zero-rating service that will be a non-starter from here on out. The company has until July 19th of this year to remove the service.

Ultimately, it is Videotron that played chicken with the Commission, and thus with the interests of its subscribers. Well acquainted with the law and having been at the heart of the Mobile TV rulings that have clarified the rules of the road over the past few years, Videotron chose to roll the dice anyway in the hope that its gamble would pay off. It didn't, and the company lost. In short, it has nobody to blame but itself for the consequences that befell its customers.

Yet, while this valuable subset of Videotron's subscribers might indeed suffer, the alternative course of action – blessing zero-rating – would cause more collateral damage: to Canadians in general, to competition across the internet- and wireless-centric communications and media economy (over and above its effects within the communications industry), and to content and cultural creators. The latter would have to shoulder additional costs and other burdens while ceding yet more power to ISPs and 'platforms' at precisely the moment when they need all the good fortune they can muster to chart a course through the turbulent waters that the content media industries now face.

While Videotron and others suggest that the company might yet find a way out of its predicament – perhaps by creatively rejigging its offering yet again (though there does not seem much hope of that), or perhaps by appealing to Cabinet or the courts – a victory via either of the first two of those routes would constitute a serious blow to a good decision that has been a long time in the making. It would also look bad in the broader context, one in which current chairman J. P. Blais has distinguished himself from his predecessors ever since the Commission's first big decision under his leadership in 2012, when BCE's bid to acquire Astral was spiked. Last week's zero-rating ruling could be the last of the Commission's big rulings under Blais's direction, and in this sense the two decisions could ultimately stand as book-ends to his tenure at the Commission. For Cabinet to force the Commission to revisit, revise or repeal the ruling would send a signal that, when the stakes are high, no matter what the independent regulator does, the Government – whether Conservative or Liberal – will swoop in to protect the interests of Canada's incumbent telco and ISP giants while throwing the independent regulator and the Canadian public under the bus.

On this score, the current Liberal Government's track record so far is mixed. Its decisions to reject Bell's appeals to reverse both the wholesale fibre-to-the-doorstep regime and the CRTC's decision to suspend the rules that reserve the Canadian advertising market during the Super Bowl for Canadian broadcasters (Bell's CTV holds the rights at present) have been steps in the right direction. Its lackluster response to the CRTC's and others' entreaties to take an active role in making sure that affordable broadband access is available to all Canadians, and its decision to bless BCE's take-over of one of the last significant independent telco-ISPs, Manitoba's MTS, have been deeply disappointing.

These are political calculations that the government will have to make, but a sober review of the facts on the ground in the zero-rating case suggests there is no reason for rash judgements, the screams of bloody murder by the incumbents' defense league notwithstanding. Dial back the hyperbole, and the reality is that the Commission's zero-rating decision does not establish many new facts on the ground; it clarifies the rules of the road while firmly rebuffing the incumbents' strident efforts to remake the internet in the image of cable TV. The signposts indicating that zero-rated plans are a non-starter have been there all along, but the incumbents have tried to run roughshod over them only to be turned back each time – by the CRTC and by the courts, which have reaffirmed the Commission's authority to take the steps it has. This decision warns them of the consequences once again.

Exceptions to the Rule and Bolting the Barn Door After the Horse is Gone?

While tough, the new framework is also flexible and balanced insofar as the general ban on content-specific zero-rating services like Videotron's Unlimited Music or Bell's Mobile TV will not apply to managed services like IPTV and Internet-of-Things uses such as telemedicine applications or Microsoft's Xbox and Sony's PlayStation (para 9). That adds much upside for the telcos and is in line with the kinds of things the Commission heard during the hearings, both from them and from internet and mobile wireless equipment makers like Sandvine and Cisco. However, managed services will now likely emerge as the next frontier in the battles over common carriage (net neutrality).

This is because managed services are not hardwired into networks with clearly drawn lines between them on one side and 'the public internet' on the other. Instead, they are a function of software-drawn lines in the sand whose precise location only the telco-ISPs really know. There is room for much mischief here, and more than 150 years of telecom history almost certainly guarantees that we will have it.

That this is so was on full display in the Federal Court of Appeal’s ruling that rejected Bell’s appeal of the CRTC’s decision in the Mobile TV case. The case was precisely about drawing lines between telecommunications on one side and broadcasting on the other. Bell sought to exploit such ambiguities to offer its Mobile TV service to subscribers in a way that was clearly off-side according to telecoms rules but just fine if its activities could be shoe-horned into the broadcasting mold. As the court stated,

. . . Technology has evolved to the point where television programs are transmitted using the same network as voice and other data communications (para 22) . . . . [I]t was reasonable for the CRTC to determine that Bell Mobility, when it was transmitting programs as part of a network that simultaneously transmits voice and other data content, was merely providing the mode of transmission thereof – regardless of the type of content – and, in carrying on this function, was not engaging the policy objectives of the Broadcasting Act. The activity in question in this case related to the delivery of the programs – not the content of the programs – and therefore, the policy objectives of the Telecommunications Act (para 53).

That the CRTC has exempted 'managed services' from its zero-rating framework and decided to take an 'ex post', case-by-case approach to reviewing cases as they arise is a potential weakness of the decision, precisely because it greatly increases the chances that battles on these new frontiers will continue apace (paras 122-125). Ex post rules also favour those with the deepest pockets, and this too skews the field in favour of Bell, Rogers, Shaw, Videotron, Telus, et al. In other words, any claim that the Commission has not given due consideration to their concerns is blind to these realities.

The fact that the ruling also permits a small number of content-agnostic zero-rated applications – such as when an ISP does not count internet usage against your data cap during off-peak hours or when you manage your bill and subscriber account – is another example of common sense and flexibility being built into the decision (paras 98-100).

Lastly, the Commission held the door open ever so slightly to the possibility that a new service or application might arise that offers exceptional public interest benefits and deserves to be zero-rated. To this end, it opened a path for anyone considering such an option to consult with Commission staff before launching (paras 126-129). It also adopted a case-by-case approach to ruling on complaints: anyone who thinks a zero-rated offer crosses the bright-line rules restricting such offerings can bring a case to the CRTC.

Some see this as a slippery slope at odds with the general ban on zero-rating, but the Commission's recent track record on the Mobile TV, illicoTV and now Videotron Unlimited Music services stands as a firm marker that it is willing to hold the line. Whether that will continue after the present chair departs is another matter, and one to which the government should be attentive; even then, the Differential Pricing Framework does seem to limit the scope for exceptions to the general rule. Once again, however, there is no denying that the 'after-the-fact' (ex post) approach favours incumbents while putting the burden on complainants – whether deep-pocketed industry rivals or the proverbial David battling the telecom Goliaths for justice and a communications system fit for Canadian citizens in the 'internet age'.

Where Things Stand: Canada and the Rest of the World

The debate over zero-rating has constituted the frontline in the battle over common carriage and the internet for the past three or four years. That terrain has shifted in this time and is shifting fast now. More than forty countries have addressed the issues, in a wide variety of ways, including the US, the EU28, the Netherlands, Slovenia, Chile and India. This raises the question: are the new Canadian zero-rating rules – and common carriage principles more generally – at the strong or weak end of the spectrum?

Relative to conditions in the US under the Obama-era FCC, it is hard to say definitively one way or the other: both the Obama-era FCC's rules and the CRTC's lie toward the strong end of the scale. However, just as President Trump has been ruling by Executive Order, the newly appointed FCC chair, Ajit Pai, has been ruling by fiat to dismantle the strong common carrier rules put into place by his predecessor. Prior to this recent turn, zero-rating practices had been banned in a few specific cases as a condition of ownership change approvals but were mostly still under review, with a proposed regulatory framework put on the table just before the change in administrations.

At first blush, the new CRTC Differential Pricing Framework appears to be tougher than what had held sway in the US even before the Trump Administration arrived in town. Yet, a few things temper that view.

First, the constraints on zero-rating in the US that were put into place after broadband internet access was reclassified as a common carrier service in 2015 were just one part of a still-developing picture that also included bans or limits on the use of data caps as a condition of merger and acquisition approvals (e.g. Charter Communications' acquisition of Time Warner Cable and Brighthouse Cable last year). Without data caps, zero-rated plans are redundant.

Second, they were also the focus of ongoing study by a working group within the FCC dedicated to the task, led by the well-known internet economist Shane Greenstein along with a variety of others from the telecoms industry and across the media and economy more broadly (claims that the FCC has been an economics-free zone are complete bunk in light of these and many other basic facts). The totality of these efforts, and the longer evolution of attention to the issues at hand, are strengths not present in the Canadian context.

The most decisive point, however, is that conditions in the US are different from those in Canada, and those differences arguably justify the tougher rules that, at least on paper, now exist in this country. Unlike Canada, where vertical and diagonal integration is the norm at Bell, Rogers, Quebecor and Shaw, in the US it is the exception, and only one US company comes close to them in terms of size and structure: Comcast. Yet even Comcast does not really have a mobile wireless operation, although it has just recently announced the launch of an MVNO – a very different thing from the large-scale, facilities-based operations of its Canadian counterparts. Its share of the total telecoms-internet and media market is 11%; Bell's share of the Canadian market is two-and-a-half times that amount (28%).

Vertical and diagonal integration are not the pivot upon which these questions about carriers' undue control over content and consumers turn, but the more prominent those phenomena are, the more pronounced the problems (e.g. high rates, low data caps, punishing overage charges, excessive control, privacy concerns, etc.) and the greater the need for strong rules. In this respect, the CRTC is on the mark, though its soft stance toward managed services, potential exceptions and ex post review may turn out to be weak points whose exploitation will need to be aggressively defended against in the time ahead.

Relative to the EU28, developments are too new and fast-evolving to judge with any certainty. There, too, the adoption of new net neutrality guidelines last year put those who would use zero-rating on notice that such efforts would be closely monitored, much as the FCC was doing in the US. The EU rules are weaker than those of the late Obama-era FCC, however, because they stand on their own, without the FCC's working groups and merger and acquisition reviews. They also lack the general applicability of the CRTC's framework.

Lastly, the CRTC's rules are similar in style and strength to those adopted by the Telecommunications Regulatory Authority of India (TRAI), which banned zero-rating across the board and did so on an ex ante basis. TRAI did so in the face of the staunchest opposition from some of the world's biggest digital giants, notably Facebook, which led the charge, flanked by the same ideological warriors who have also led the defense of Bell, Telus and Videotron in Canada – e.g. Jeff Eisenach, who wrote a brief commissioned by Facebook and submitted to TRAI in India, a brief that Telus then wheeled into action largely unchanged in the Canadian context (see comparisons of both documents here and here). So too did Roslyn Layton make the case in both countries for why advertising-supported mobile phone and internet access is a "good thing" (see here).

Avoiding Getting Sucked into Trump’s Vortex

Their contributions are especially important in this context because Eisenach and Layton are two of the three members of President Trump's telecom policy team (the other is Mark Jamison). They have been leading the charge in the US and worldwide to roll back the successes chalked up in recent years for common carriage, competition and people's rights as citizens and consumers to use the phone and internet connections they subscribe to as they damn well please, without the distraction of 'free baubles' getting in the way and threatening that freedom at every turn. Their efforts are backed by a dubious President and conservative, business-beholden think tanks like the American Enterprise Institute, the Technology Liberation Front, the Free State Foundation, the Information Technology and Innovation Foundation, the Mercatus Centre, and other such groups. On the scholar/corporate-lobbyist connection in which Eisenach looms large, see the New York Times piece here.

As indicated above, their ideas have been imported into Canada and put onto the public record of CRTC proceedings by Telus and Bell, and made part of the broader discussion by the same and other industry cheerleading consultants. Their ideas are worth reading but ought to be given short shrift, and generally have been – unless following a Trump-like agenda appeals to you. Yet, as they take their cues from Ayn Rand, it is time we took ours from the likes of Hannah Arendt, who raised questions about how we want to live and how to tailor the institutional arrangements of society so that people's freedom, dignity and capacity to live in a democratic and just society can flourish.

The Battles Ahead

As per my usual, this is once again way too long for a blog post. I’m sorry. But, to paraphrase Mark Twain, I wanted to write you a short letter but I didn’t have enough time so I wrote you a long one instead.

That said, where do things stand? They stand in a good spot, generally speaking. The CRTC has adopted rules that are well in line with the Telecommunications Act's long-standing provisions against undue discrimination between subscribers and services – a cornerstone of common carriage. It is also of a piece with developments over the last three years, in which efforts by the carriers to act like publishers – choosing which content, apps and services their customers get for free and which are discouraged by dint of counting against people's data caps – have been thwarted by the Commission and the courts at each step of the way.

That was the lesson of the Mobile TV case, and it is the lesson from last week’s decision that put the kibosh on Videotron’s Unlimited Music service. There are no surprises here. Things follow a logic and a well-lit path.

There is also reasonable recognition of the incumbent providers' interests and flexibility with respect to managed services and the exceptions for administrative-type services, while the door has been kept ajar to new services that might come along for which zero-rating makes sense. Whether the managed services exception and the ex post approach the Commission has adopted emerge as major battle zones in which the incumbent telcos and ISPs continue their efforts to remake the internet in the image of cable TV, only time will tell. The openings afforded by this aspect of the decision are its weakest links, so we must be very alert to such prospects.

Crucially, the CRTC's bright-line rules on zero-rating also conform to the peculiar realities of the Canadian communications market, characterized as it is by extremely high levels of vertical and diagonal integration in which all of the biggest wireline and wireless networks are owned by Telus, Rogers, Bell, Videotron and Shaw, all of which – except Telus – own all of the main TV services (except the CBC and Netflix) and several of the most important sports teams in the country (e.g. the Montreal Canadiens, the Maple Leafs, the Raptors). This is without parallel, and thus it is entirely appropriate that the CRTC's rules have taken the particularly tough form they have.

Underneath all of this is just common sense: common carriage is essential to ensure that those who own and control the medium, and who have all the incentives and ability in the world to control and influence the content, activities, services and interactions that take place through their networks, don't make good on that potential. In short, that potential needs to be constrained by tough rules, enforced by a regulator with a spine. The CRTC has shown that spine, but it will no doubt experience incredible blowback for doing so. It already is.

The question is, will the current Liberal Government have the spine to back the independent regulator, or will it cave in the face of the immense pressure that it will no doubt face? That pressure will come from the biggest industrial interests in the land, who have been adding ideological winds to their sails from the gusts now blowing North from the Trump Administration, an administration that appears to relish ruling by Executive Order and administrative fiat, with nary a care for the conventions, culture and values of democracy. This is not a model to emulate.

Communications are the lifeblood of a democratic society and culture, and so these things matter. Now is the time for steps to be taken to ensure that competitive realities as well as the needs of citizens, consumers and cultural creators are embedded within the institutions and rules-of-the road that will define the increasingly internet- and mobile wireless-centric communications and media universe of the 21st Century.

The CRTC has taken steps to do just that, for which Canadians can and should be thankful. Now it’s time for the Liberal Government to step up to the plate. Will it? Time will tell.

David Wins Against Goliath: CRTC Bolsters “Net Neutrality”, Limits “Zero-Rating” & Strengthens Local TV

Today's trilogy of CRTC decisions on "network neutrality", local TV and simultaneous substitution is a huge win for Canadian citizens. The decisions reinforce Canada's network neutrality regime while backstopping local, over-the-air TV as a viable alternative to cable and satellite and as an important source of news and information.

Of the three decisions, the most important is probably the Mobile TV ruling. It responds to a complaint filed with the Commission by Ben Klass in late 2013 about Bell's Mobile TV offering, which allows Bell Mobility subscribers to access 10 hours of television programs for $5 per month, while watching the same amount of TV on your wireless device from the CBC, YouTube or Netflix, for example, would cost up to $40 – an 800% difference. Klass's complaint expanded in early 2014 after the Public Interest Advocacy Centre raised concerns about Rogers' and Videotron's Mobile TV services on much the same grounds. The CRTC then wrapped them into one proceeding. Today's major decision supports Klass and PIAC's claims.
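For readers wondering how that 800% figure is arrived at, it is simply the ratio of the two prices quoted above – a back-of-the-envelope comparison, not a calculation drawn from the decision itself:

$$\frac{\$40\ \text{(the same viewing billed against a regular data plan)}}{\$5\ \text{(Bell's Mobile TV add-on)}} = 8$$

In other words, watching the same amount of television from sources outside Bell's Mobile TV package costs roughly eight times as much – the gap the complaint characterizes as an 800% difference.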

In each case, watching television programs delivered over the internet on your mobile device from sources outside one of the carriers’ TV packages counted towards your data caps, while those inside their Mobile TV offerings did not.

Recognizing that they were likely fighting a losing battle, Rogers folded on the case last summer and Videotron began to phase out its preferentially priced Mobile TV service at the end of 2014. Bell soldiered on, however, claiming that, despite being delivered over the internet and the same wireless networks as any other data, video, voice or internet services that subscribers might use, its Mobile TV service was not a telecom or internet service at all.

According to Bell, its Mobile TV service is a broadcasting service, and thus outside the reach of the charges that Klass and PIAC raised. Moreover, far from this being a bad thing, its Mobile TV service is making substantial contributions to the policy aims of the Broadcasting Act, Bell argued.

The CRTC's decision resolutely rejects that claim. While the decision refers to both Bell's and Videotron's Mobile TV services, the latter has been phasing out the version of its service in question since the beginning of the year, so the biggest impact of the decision will fall on Bell.

With respect to whether Mobile TV services are telecommunications or broadcast services, the Commission was crystal clear:

Bell Mobility and Videotron are . . . providing telecommunications services in regard to the transport of their mobile TV services to subscribers’ mobile devices, and are therefore subject to the Telecommunications Act (para 35).

In addition, the Commission is clear that far from being a good thing for Canadians and the aims of the Broadcasting Act, the services work a:

 . . . disadvantage to consumers in accessing other Canadian programs on their mobile devices, and . . . could not be said to further these [the Broadcasting Act] objectives (para 60).

Furthermore, Bell and Videotron’s claims about their Mobile TV services being good for Canadians lacked “quantifiable evidence to back the magnitude of those claims” (para 39).

Having found that the key issues revolved around telecommunications, the CRTC then turned to the heart of the matter: were the carriers giving their own Mobile TV services an advantage and, if so, was that advantage unreasonable? Again, the Commission is unequivocal. By charging one rate and exempting their own services from their data caps while charging much higher rates and applying data caps to all others, Bell and Videotron are giving themselves an unfair advantage.

Here’s the centerpiece of the decision in this regard:

Bell Mobility and Videotron, in providing the data connectivity and transport required for consumers to access the mobile TV services at substantially lower costs to those consumers relative to other audiovisual content services, have conferred upon consumers of their services, as well as upon their services, an undue and unreasonable preference, in violation of subsection 27(2) of the Telecommunications Act. In addition, they have subjected their subscribers who consume other audiovisual content services that are subject to data charges, and these other services, to an undue and unreasonable disadvantage, in violation of subsection 27(2) of the Telecommunications Act (para 61).

Crucially, in making this decision, the CRTC saw the issues raised by Klass and PIAC as something of a litmus test, one whose resolution would hold much in store for the evolution of the internet and the converging media ecology. Again, as it says,

the preference given in relation to the transport of Bell Mobility’s and Videotron’s mobile TV services to subscribers’ mobile devices, and the corresponding disadvantage in relation to . . . other audiovisual content services available over the Internet, will grow and will have a material impact on consumers, and other audiovisual content services in particular. . . . [I]t may end up inhibiting the introduction and growth of other mobile TV services accessed over the Internet, which reduces innovation and consumer choice (para 58).

In short, the decision responds to current realities while looking to the future. It took the opportunity delivered up to it by a hard-working and careful student, Klass, and the additional effort by PIAC, to nip a problem in the bud. The fact that the case raised complex issues for today as well as for the years, even decades, ahead also helps explain why this decision was more than a year in the making rather than the usual four months or so.

The Mobile TV decision effectively limits zero-rating in Canada, a practice in which some internet content services pay to obtain fast lanes and exemption from carriers' data caps. Doing so reinforces Canada's strong "network neutrality" rules and places it shoulder-to-shoulder with other countries where zero-rating has been banned (e.g. the Netherlands, Slovenia, Chile) or discouraged and not practiced by wireless companies (e.g. Norway, Finland, Sweden, Estonia, Lithuania, Latvia, Malta and Iceland).

The upshot is an unambiguous win for strong “Network Neutrality/open internet” rules, including their unambiguous application to wireless internet access. As Blais put it, “the Mobile TV decision is all about Canadians having fair & equal access to content of their choice on internet. There will be no fast lanes & slow lanes”. It is about keeping control over what people access through the internet in their hands, not under the editorial control of ISPs and telecoms companies.

Three other things about the Mobile TV decision stand out.

First, it's a message that Canada can send, with love, to the United States as the FCC gets set to decide in the next month on many of the same issues, replacing its relatively weak 'Open Internet' principles that were tossed out by the courts in last year's Verizon decision. With strong encouragement from Obama, the FCC is widely seen as leaning toward reinstating Title II common carrier classification for all broadband internet access providers – wireline, cable, wireless – and restricting zero-rating practices. This would reverse decisions taken in 2002, 2005 and 2007 under the Republican-controlled FCC that redefined high-speed internet access by cable, DSL and wireless as 'information services' and, thus, beyond the reach of the regulator.

Second, the CRTC decision rests entirely on the common carriage principle at the heart of the Telecommunications Act (namely section 27 and, to a lesser extent, section 28) rather than its so-called network neutrality rules. This is a good thing because it returns the politics of the internet to sturdier ground, i.e. the centuries-old and battle-tested ground of common carriage versus the woollier notion of network neutrality.

Third, the concurring opinion of CRTC Commissioner Raj Shoan at the end of the decision is a must-read. His ruminations on the 'cone of silence' around the issues raised by the Mobile TV proceeding remind us that, in an industry dominated by a handful of massive vertically-integrated companies that control access to distribution networks, content and audiences, a pervasive fear appears to have settled in amongst independent TV broadcasters, creators and others, keeping them from stepping forward. It reminds us that the Canadian media industry is a tight and closed, if not so cozy, community where independent voices step forward at their own peril.

As Shoan observes, when "students, not-for-profits and charities have to contend against the deep pockets of large, national, vertically integrated entities in order to bring to light relevant issues of public interest without the support of affected parties (i.e. Canadian broadcasters)", we are in trouble. The CRTC looked that reality squarely in the face today and made three bold decisions that go some way toward addressing the issues. We can be thankful to smart and engaged citizens such as Ben Klass and public interest groups like PIAC for lighting the spark and for all the hard work that led to today's decision, and to groups such as Open Media for keeping these issues in the public eye. For all those who have stood as defenders of the status quo, indeed, often as the incumbents' mouthpieces, it should be a message.

A few quick words on the other two decisions regarding keeping local television alive and simultaneous substitution.

First, Blais made it clear that maintaining local, over-the-air television is important to Canadians, as citizens, not just consumers. Why? Because that is where many of us still get a great deal of our news and information. Blais did not mince words: the major TV companies have obtained enormous privileges, and it is time for them to meet their obligations. "An informed citizenry cannot be sacrificed on the altar of corporate profits & debt reduction", he intoned, in an implied reference to the steady flow of cut-backs and journalist lay-offs in an industry that has been allowed to bulk up through mergers and acquisitions on the promise that synergies would deliver benefits, not just to the corporate bottom line but to all Canadians. It's time to deliver.

Local TV also needs to be kept alive because it provides a realistic alternative to cable, satellite and IPTV providers who have consistently raised prices far in excess of the rate of inflation. This is especially so because, as Greg Taylor, Steven May and others from Ryerson University have made clear, Canada has recently completed the switch-over to digital over-the-air television. The benefits of this now need to be nurtured rather than stillborn at the hands of those whose loyalties are, at best, split between seeing things through and protecting their cable, satellite and IPTV distribution networks – i.e. the entities that own most of the local TV stations and those that own the biggest cable, satellite and IPTV companies in Canada are one and the same: Bell, Shaw, Rogers and Quebecor. Brandishing updated bunny-ears as a prop, Blais encouraged Canadians to think of over-the-air TV as a viable option that is both free and of higher quality in terms of picture clarity.

There will be no new revenue stream from fee-for-carriage of local TV stations, a cornerstone of Bell's submission to the Talk TV hearings. However, neither will one of the cornerstones that have supported the commercial viability of local TV since the 1970s be taken away: simultaneous substitution, which allows Canadian broadcasters to substitute their commercials into US signals airing the same programs and carried by cable and satellite companies in Canada. The policy is a massive gift that delivers about a quarter-of-a-billion dollars a year to the industry. This was a "big ask" by Bell, Shaw, Rogers and other television companies and, for all intents and purposes, they got it today from the CRTC, with the exception of the Super Bowl starting in 2016.

In sum, today's CRTC decisions are bold. They send a clear message in support of an open internet, broadly interpreted to cover mobile wireless, cable and wireline networks. TV is not dead; in fact, the evolution of TV and the internet are fundamentally intertwined and need to be thought of as such. The CRTC's decisions go a long way toward doing just that. The decisions, in particular the open internet, Mobile TV and future-of-local-TV parts, underscore the decisive role of independent voices, and the importance of listening to them rather than just to incumbents and the far too many scribes (though certainly not all) who think that relaying to the Canadian public the views of rival media giants on a particular issue, plus those of a financial analyst or two, constitutes 'balanced' reporting.

Big New Global Threat to the Internet or Paper Tiger?, Part II: Is the Internet Telecoms?

This is the second in a series of posts that takes a critical look at claims that proposed changes to the international telecommunications regulations (ITRs) at the WCIT meeting later this year could see the ITU establish “international control over the internet”.

My previous post described some of the background to the issues, and three key claims that are being made: (1) the ITU currently has no role with respect to the Internet but is hell-bent on changing this at WCIT; (2) the ITU is a state-run telecom club; (3) that it is a Trojan Horse for a plot by authoritarian states and legacy telcos to impose a new Web 3.0 Model – Controlled National Internet–Media Spaces – over the open global internet.

I think the claims are overblown. I do not believe that the ITU is intending to, or capable of, taking over the internet. I largely agree with Milton Mueller that most of the changes being discussed are about economics and interconnection rather than internet censorship and control. An article in the New York Times today expressed a similar view.

In contrast to Mueller, though, I think that the ITU already has a legitimate claim to having a say with respect to the internet, and more to the point, it has already been playing such a role through the last dozen years of active participation in the multi-stakeholder model of internet governance.

Mueller argues that the ITU's most important efforts to stake a claim to the internet terrain – the domain name system (1996), the two phases of WSIS (2002-2005), IP address management (2009-2010), suggestions for a UN Committee on Internet Related Policies (2011) – have all been mostly failures, not least because they have all been staunchly resisted by the U.S. government. As he says, the U.S. Government "squashed" an early campaign by the ITU and ISOC to wrest control of the international domain name system from the U.S. "like a bug".

Two years later, ICANN – a California-based non-profit still dependent on the US government today and increasingly embroiled in high-stakes battles over copyright worldwide (e.g. MegaUpload, Rojadirecta) – was created. Mueller is happy about this state of affairs. I am less so, but am under no illusions that the best path to choose is obvious.

If Professor Mueller is right, however, we might not have to choose. The ITU has no jurisdiction over the internet, he argues, just telecommunications. According to him, this is because, beginning fifty years ago during the FCC’s Computer I, II and III inquiries (c. 1965-2002), the U.S. drew a clear, bright line between telecom-based services (pipes and carriage) and computer-based information services (content and the internet).

The Computer II rules formalized the distinction between “basic” telecoms and “enhanced” information services after protracted struggles over key questions about market concentration in the telecom and information industries as well as the range of services to be delivered by the market versus those considered public goods. Many argue that the new rules were wildly successful, not least in terms of fueling the growth of the Internet. I am inclined to agree but would ratchet down the superlatives, without losing focus on issues of market concentration and the public goods nature of telecom, media and internet goods.

The rules were never straightforward, and they have been mired in political and legal mud ever since their adoption. The Supreme Court's Brand X ruling in 2005 reaffirmed them, but in doing so basically set the enhanced service designation up as a near-insurmountable barrier to formal net neutrality rules that could be applied to all carriers and ISPs.

The problem that I see with this argument is threefold. First, it takes U.S. law as the world's law. U.S. telecom policy, however, is not global internet policy, nor should it be. Moreover, if the basic/enhanced dichotomy has been mired in controversy in the U.S. for a half-century, just imagine its fate at the global level.

Second, the U.S. can slice and dice the definition of telecoms any way it sees fit, but other countries do things differently, and the ITU defines telecommunication very broadly as: “Any transmission, emission or reception of signs, signals, writing, images and sounds or intelligence of any nature by wire, radio, optical or other electromagnetic systems” (Constitution, Annex; ITR, Article 2.1). A plain reading of the definitions suggests that it includes the internet, which in fact is the view that the ITU and many of its member-states take.

Clearly, though, there is a debate over the scope of that definition, and things will not be settled by recourse to formal definitions but by the politics of language. Those opposed will stand firm against any formal references to the internet in the text of the ITRs, while those on the opposite side will pepper the rules with as many explicit references to the internet as possible. The fact that various members have proposed modifications or additions to at least a half-dozen sections of the ITRs that explicitly refer to the internet has brought these issues to a head.

The most important change to the ITRs is probably the proposal to add a reference to "internet traffic termination" to the existing definition of "International Telecommunications Services" in Article 2.2. Other proposed modifications refer to "VoIP" (Article 3.1, International Network), "Internet traffic and data transmission", and "Internet and Internet Protocol" (Articles 4.2 and 4.3a, International Telecommunications Services, respectively).

Some proposals would also add new references to “international Internet connections” (Art. 3.7), the “internet” in a proposed new section 6.7 related to competition and interconnection issues and “measures to insure Internet stability and security” in 8.4.A (see Mueller on this point as well). References to  “cybercrime”, “data preservation, retention, protection”, “spam”, “identification”, “personal data protection” in new sections of Article 8 also have the internet clearly in their sights. I will examine some of the potential implications of these proposed changes and additions in more detail in the next two posts.

For now, however, my third argument is that things will not turn on the politics of language alone but on the historical and contemporary practices of the ITU as well. In this regard, one thing stands out that I think is determinative: the ITU has taken a broad, evolutionary view of its mandate and morphed with the times since its inception in 1865, following the merger of two predecessor organizations – the Austro-German Telegraph Union (est. 1850) and the West European Telegraph Union (est. 1855) – a point that will become important in the fourth post in this series (see Drake, Introduction).

Originally called the International Telegraph Union, the ITU added telephones to its remit in the 1880s, radio in the early 1900s, and other new telecom technologies as they evolved. Its name was changed to the International Telecommunication Union in 1932 to reflect its broad and evolutionary view of the terrain. Its Constitution, its Decisions, Resolutions and Recommendations (DRRs) and the ITRs make a virtue out of the development and use of new telecom technologies, so it would be a real mystery to find a line drawn in the sand between telecoms before the internet and after, with the ITU confined strictly to the stuff that came before.

More recently, the ITU has been keen to carve out a distinct role for itself in regard to the internet since at least 1996 – arguably earlier, if we look back to the 1970s and '80s infatuation with 'super-pipe' models of integrated broadband media, even if the internet had not yet become a household name. Its guts were nonetheless being put into place. And it is important to note that even the technical guts of the internet were not all made in America, as the paper by Google's lawyers Patrick Ryan and Jacob Glick states. The UK, France and other parts of Europe were also involved, and the ITU was part of those efforts (Abbate, 1999; Mansell, 1993).

Yet, let’s take 1996 as the starting point because that is when the ITU and ISOC worked hand-in-glove in a bid to shift control over the domain name system from the U.S. to the ITU.  “The U.S. squashed that effort like a bug”, as Mueller states. Two years later, in 1998, the U.S. government created ICANN, where things have rested ever since.

Whereas Mueller sees just a long line of losses confirming that the ITU has no business in the internets of the world, I look past whether or not it has 'won' or 'lost' vis-à-vis the U.S. to see a long track record of practices that have evolved with the times. Thus, in the case of the internet, two years after the dispute over the DNS, the ITU reaffirmed its commitment to cooperating with ISOC and the IETF on global internet policy issues (DRR, Res. 102). It stated matter-of-factly that it has a role to play "with regard to international public policy issues pertaining to the Internet and the management of Internet resources, including domain names and addresses" (DRR, Res. 102).

The two phases of WSIS between 2002 and 2005 also saw unprecedented participation by academics and civil society groups alongside the ITU in trying to imagine and map the frontiers of global internet policy. At the end of the three-year process a new entity was born, the Internet Governance Forum (IGF), loosely under the direction of the United Nations, and with the ITU firmly within it alongside the rest of the 'multi-stakeholder internet governance' interests (ISOC, IETF, ICANN).

The IGF’s initial five-year experimental period was renewed for another five years in 2010. All of this is important, too, because even if the ITRs do not currently refer to the internet, the ITU’s record of Decisions, Resolutions and Recommendations is chock-a-block full of explicit and expansive references to the internet (see, for example, Resolutions 101, 133 & 179). Looking beyond the ITRs, therefore, we find a track record of language on the internet that maps onto the ITU’s historical involvement with this domain since the late 1990s.

If the ITU has been such a loser with respect to global internet policy, and really has no place in it, as so many have argued (or just assumed) (Ryan & Glick; all but ISOC panellist Sally Wentworth at U.S. congressional hearings on the so-called “International Proposals to Regulate the Internet” last month, etc.), then it has been hiding in plain sight. I think a better view of the matter is that, by dint of definition and a long history of evolution as well as contemporary practices, the ITU has a legitimate role to play in global internet policy.

Whether it exercises this role wisely or badly, however, is a different matter altogether, one we will turn to in the next post.

Next Post: The ITU has been a business and market-dominated institution, not State-controlled, since the 1980s, maybe forever.

Big New Global Threat to the Internet or Paper Tiger?: the ITU and Global Internet Regulation, Part I

Over the past few weeks, a mounting number of commentators in the U.S. have pushed a supposed new threat to an open internet into the spotlight: the International Telecommunication Union (ITU).

According to those raising the alarms, preparations to revise the ITU’s international telecommunications regulations (ITRs) at a meeting this December are being hijacked by a motley assortment of authoritarian countries, legacy telecoms operators, as well as the BRIC (Brazil, Russia, India and China) and other developing countries. Their goal? To establish “international control over the internet”. Indeed, the issue is deemed so serious that congressional hearings on “International Proposals to Regulate the Internet” were held in the U.S. at the end of last month.

There seem to be three main claims behind the charge.

The first is that the ITU currently has jurisdiction over telecommunications, but not the Internet. As a paper by Patrick Ryan and Jacob Glick, two lawyers at Google, asserts, “modifications to the . . . ITRs are required before the ITU can become active in the Internet space”. Vint Cerf, Google’s “chief internet evangelist”, similarly took the ITU to task for its “aims to expand its regulatory authority to the Internet” in an op-ed piece for the New York Times, and in front of the just-mentioned congressional hearings a week later.

Indeed, according to FCC Commissioner Robert McDowell, the idea that the ITU already has any role with respect to the internet is just nuts. Only pariah governments such as Iran “argue[s] that the current definition already includes the Internet”, he asserts.

Milton Mueller more sensibly argues that the line between basic telecoms and enhanced information services like the internet, a distinction developed in the U.S. over the past half-century that subsequently trampolined onto the global stage during the 1990s, leads to the same conclusion: so far as the ITU’s authority is concerned, basic telecoms are within its orbit, while enhanced information services like the internet are out.

Indeed, as Eli Dourado, one of the critics leading the charge, told me in a Twitter exchange the other day from his perch at the Mercatus Centre/Technoliberation Front/Cato Institute, nobody was thinking about the internet back in 1988 when the ITRs were last revised and updated. As a result, he says, “no internet traffic is governed under the original treaty. Right now, 90% plus of global communications are not governed by the ITRs. This would change that”.

In sum, if the critics are right, the ITU’s gambit to draw the internet into its orbit would be a huge change from the status quo. But are they right? I do not think so and will come back to why further below after laying out the two other main criticisms.

The second key focus of critics is that the ITU is a “closed organization” beholden to “state-run telecom monopolies”, as Ryan and Glick say. Fowler calls the proposed changes to the ITRs an attempt to impose “a top-down, centralized, international regulatory overlay [that] is antithetical to the architecture of the Net, which is a global network of networks without borders”.

According to this view, the ITU is a government-dominated, telegraph-era dinosaur that is ill-suited for global internet policy, where markets, private actors and contracts, and a variety of multi-stakeholder interests, including ISOC, ICANN, IETF, W3C and other civil society groups, work in ways that are open, consensus-oriented and inclusive. The same point was made by David Gross, former Coordinator for International Communications and Information Policy at the U.S. Department of State, and now head of the WCIT Ad Hoc Working Group, made up of a who’s who of telecom, media and internet titans: AT&T, Cisco, Comcast, Google, Intel, Microsoft, News Corp., Oracle, Telefonica, Time Warner Cable, Verisign and Verizon.

The secrecy and lack of transparency and civil society participation are the main concerns of open internet advocacy stalwarts such as Public Knowledge, EFF, the Centre for Democracy and Technology, and ISOC. A letter from CDT and thirty-two other internet advocacy groups calls on the ITU to “Remove restrictions on the sharing of WCIT documents and release all preparatory materials. . . . Open the preparatory process to meaningful participation by civil society . . .; and for Member States, open public processes at the national level to solicit input on proposed amendments to the International Telecommunications Regulations . . .”.

To help speed along this process, a new Wikileaks-style site, WCITLeaks.org, has also been set up to collect and make available documents leaked by ITU insiders, with some good results already in just the first few days.

The third argument is the “Trojan Horse” argument. From this angle, ‘axis of evil’ authoritarian states – Russia, China, Iran, Saudi Arabia, Syria – are using the ITU as a vehicle to turn their closed models of national internet spaces into a global standard. One paper after another points to a smoking gun that supposedly reveals the ITU’s end-game: a transcript of a conversation between the head of the ITU and Russian President Vladimir Putin in which the latter waxes on about the need to establish “international control of the internet through the ITU”.

The model supposedly being ushered onto the world stage through the ITU is not the well-known Chinese model of internet filtering and website blocking, but a new “Trusted Web 3.0”. In the Web 3.0 model, authoritarian states use filtering and blocking techniques to deny access and (1) establish national laws that put such methods on a firm legal footing, (2) carve out a distinctive national internet-media space dominated by national champions (Baidu, Tencent, Yandex, Vkontakte) instead of Google, Facebook and Apple, within which (3) the state actively uses ‘internet-media-communication’ campaigns (propaganda) to shape the total information environment (see Deibert & Rohozinski, ch. 2).

Obviously, if the critics are right, there is a lot more at stake in the WCIT than just bringing up to date rules that were last revised in 1988, before the internet was even well known. There is, indeed, much at stake with the proposed revisions and much that is quite nasty within the rules themselves and in how the ITU approaches global telecom and internet policy. Yet, as Mueller notes, while the critics’ focus on internet control and censorship by nasty governments might play well to their base, their claims are overblown and misrepresent the nature of the problems at hand. I agree with Mueller on this point, but I also disagree with him on a few significant points, as we will see.

Over the next few posts I will offer, first, a post that lays out my criticism of the critics and, second, another that hones in on both the proposed changes to the ITRs and the elements that look likely to be retained and perhaps expanded, which I think are deeply problematic and a genuine threat, if not to the global internet, then to the people living in countries whose practices would obtain the imprimatur of legitimacy from the ITU if they are accepted at the WCIT in December. Finally, I’ll offer an argument as to why the ITU should be reformed and retained rather than scrapped.

The crux of my criticisms is as follows: (1) that the ITU has always had a role with respect to the internet by dint of the expansive definition of telecommunications governing its operations; (2) that the battle over whether the ITU’s approach to global telecom and internet policy would be driven by the state or the market was settled decisively in favour of “the market” in the 1980s and 90s; (3) that while the ITU has a role in telecom and internet policy, its role has been increasingly neutered by the shift to the WTO and the ‘multi-stakeholder internet governance model’ since the 1990s; and finally (4) that the non-binding nature of ITU rules and the principle of national sovereignty underpinning them mean that the ‘axis of internet evil’ cannot use the ITU as a Trojan Horse to impose their Web 3.0 model on the rest of the world.

After I lay out these criticisms, in the next post I intend to dig deeper into the details of the ITU’s Constitution, Decisions, Resolutions and Recommendations as well as the ITRs and proposed changes to them. I will do so in order to show that there are, in fact, aspects of the ITU’s global telecom and internet policy regime that are deeply problematic and, indeed, wholly unworthy of whatever legitimacy might be brought their way by being associated with the ITU and, by extension, the UN.

In this respect, I will hone in on: (1) how people’s right to communicate (Art. 33) clashes with rules that allow the state to cut off and/or intercept communication in cases that “appear dangerous to the security of the State or . . . to public order or to decency” (Arts. 34 & 37); (2) proposed changes to the ITRs by the European Telecommunications Network Operators (ETNO) that legitimize the pay-per model of the internet and thus threaten network neutrality (Art. 3); and (3) existing aspects of Article 8 of the ITRs and proposed changes relating to cybercrime, national security, whistle-blowing, user identities and anonymity that are at odds with privacy norms outlined elsewhere in the ITU framework (e.g. Article 37 on the Secrecy of Telecommunications) and which put the interests of the state well above those of the individual.

Finally, I will make an argument as to why the ITU, after thorough-going reforms, is still a useful and desirable organization, building on the following arguments:

(1) it is already working within the ‘multi-stakeholder internet governance regime’ through the Internet Governance Forum established in 2005, and serious questions exist about U.S. hegemony over ICANN in particular (as illustrated by the U.S. government’s targeting of domain name resources to cripple Wikileaks and to take down foreign websites accused of violating U.S. copyright laws (see the Rojadirecta case), and by recent legislative proposals to formalize such tactics in SOPA);

(2) proposed changes adding elements of consumer protection with respect to mobile roaming charges and contracts as well as with respect to evaluating concentration in telecom and internet markets at the global and national level are worthwhile; and

(3) its broader remit reconciles global markets and technology, on the one hand, with broader norms related to the right to communicate, development and other important human rights and freedoms, on the other, norms that are entirely absent from the one-sided, market-driven model of globalization represented by the ITU’s closest counterpart, the WTO.

The Twitter – Wikileaks Cases and the Battle for the Network Free Press, Now It’s Personal: an Afternoon with Birgitta Jónsdóttir

A week-and-a-half ago I met up with Birgitta Jónsdóttir, an activist Icelandic MP and central figure in the Twitter – Wikileaks cases (see earlier posts on the topic here, here, here and here). Passing time on Twitter, I saw she was in Ottawa, sent her a tweet, quickly received a reply and, presto, we met on a Sunday afternoon with a fellow professor from the University of Ottawa, Patrick McCurdy.

Jónsdóttir came to our attention after becoming a target of the U.S. Justice Department’s ongoing investigation of Wikileaks because of her role as co-producer of the video Collateral Murder. The video documents a U.S. Apache helicopter gunning down two Reuters journalists and several others in Baghdad and was released by whistle-blowing website Wikileaks in April 2010. It marked the beginning of the site’s campaign to release what would be the largest cache of US classified material the world has ever seen.

Over the course of 2010, Wikileaks teamed up with five of the world’s most respected news outlets — the New York Times, The Guardian, Der Spiegel, Le Monde and El Pais — to release material that wreaked havoc with the routine conventions of journalism and to set the global news agenda not once, but three more times: (2) the release of the Afghan and (3) Iraq war logs in July and October, respectively, and (4) a cache of diplomatic cables starting in late November.

The response from the U.S. Government was ferocious. The search to find the source of the leaks quickly led to the arrest of U.S. Army intelligence analyst, Bradley Manning, in May 2010, and his detention in solitary confinement ever since. Simultaneously, it began shaking down popular U.S. search and social media sites such as Twitter, Facebook, Skype and Google in a bid to gain access to information about people of interest in the Wikileaks investigation.

Birgitta is one of those people, along with Wikileaks front man Julian Assange; Tor developer, activist and Wikileaks volunteer Jacob Appelbaum; and Dutch hacktivist Rop Gonggrijp. Let’s call them the “Twitter – Wikileaks Four”.

Entering this murky world of state secrets, blacked out documents and unnamed internet companies cooperating with electronic surveillance efforts by the state offers a rude slap to anyone who sees the U.S. as a beacon of democracy, human rights and the free press. In fact, such values seem to have wilted with alarming ease in the face of the national security claims surrounding Wikileaks, and Birgitta Jónsdóttir specifically.

Talking to Jónsdóttir gave us a personal look behind the cool, technical view found in legal briefs and court rulings. And one of the first things she told us is that she no longer sets foot on U.S. soil on the advice of her lawyers and Iceland’s State Department, despite having diplomatic immunity on account of being a Member of Parliament in Iceland. Still embroiled in the Wikileaks cases, she has recently joined a lawsuit launched by Noam Chomsky, Naomi Wolf, Christopher Hedges, Daniel Ellsberg and others to overturn the National Defense Authorization Act on the basis that its vague definition of terrorists threatens to sweep dissidents into its maw, thereby jeopardizing their ability to travel freely in the US and worldwide without fear of being arrested.

That we know much at all about how internet companies have been dragooned into the crackdown on Wikileaks is because Birgitta, Appelbaum and Gonggrijp have led the fight against such activities, with legal support from the American Civil Liberties Union and the Electronic Frontier Foundation, in the courts of law and public opinion (Assange has kept his focus elsewhere). And it is for this reason that The Guardian newspaper last month put Birgitta, Appelbaum, Twitter’s chief lawyer, Alex MacGillivray, and Assange on its list of twenty “champions of the open internet”.

MacGillivray made the list primarily because only Twitter had the spine to challenge the Justice Department’s “secret orders” (not “court authorized warrants”), whereas all of the other search and social media companies rolled over and shut up. Nor was this a one-time stance. This week Twitter was at it again, pushing to overturn a court order that would force it to hand over information about an Occupy Wall Street activist to the New York Police.

Twitter won a modest victory in January 2011 in the first Twitter – Wikileaks case when it obtained a court order allowing it to tell Jónsdóttir and the others that the Justice Department was demanding information about their accounts as part of its Wikileaks investigation. The victory also opened a bigger opportunity to discover what other internet companies may have received the Justice Department’s secret orders.

Whatever hope was raised by the first Twitter – Wikileaks ruling was dashed by a District Court ruling in the second case last November, however. The ruling was blunt: users of corporate-owned, social media platforms have no privacy rights.

Using the same logic subsequently applied in the “Occupy Wall Street” case, the court argued that Jónsdóttir et al. had no privacy rights because Twitter, Skype, Facebook and Google’s business models are based on maximizing the collection and sale of subscriber information. Under such conditions, users alienate whatever privacy rights they might otherwise claim. As the ruling put it, Jónsdóttir and her co-defendants “voluntarily relinquished any reasonable expectation of privacy” by clicking on Twitter’s terms of service (p. 28).

With privacy reduced to the measuring rod of corporate business models and a perverse interpretation of its own terms of service, Twitter was forced to hand over Jónsdóttir, Appelbaum and Gonggrijp’s account information to the Justice Department: registration pages, connection records, length of service, internet device number, and more.

A last-ditch appeal was made by lawyers at the ACLU and EFF last January to reveal which other internet companies had received “secret orders” from the Justice Department. While no one knows for certain who they are, all eyes are on Google, Facebook and Skype (Microsoft). A decision is expected by the end of June, but Jónsdóttir isn’t holding her breath.

Pausing to reflect on the personal effects of the Twitter – Wikileaks cases overall, however, she remains upbeat rather than downtrodden.

“You have to completely alter your lifestyle. It’s not pleasant, but I don’t really care. . . . It just insults my sense of justice . . . . I would not put anything on social media sites that . . . I don’t want on the front pages of the press.”

Rather than dwelling on the costs to her personally, however, Jónsdóttir is quick to tie these events into a larger, more daunting picture. In doing so, she wants to prick the fantasy of Obama as a great liberal president and the illusion that the U.S. turned a corner after he replaced Bush as President.

As she reminds us, the Twitter – Wikileaks cases occurred on Obama’s watch. The Obama Administration has charged more whistle-blowers (six) than all past presidents combined (three), she offers (also here).

To this, we can add that the revisions to the Foreign Intelligence Surveillance Act in 2008 (the FISA Amendments Act) gave retroactive immunity to companies and ISPs such as AT&T and Verizon for the illegal network surveillance activities they conducted under the Bush regime, with few barriers now standing in the way of their continuing in that role under Obama (see here and here).

These concerns are crystallized in the latest Reporters Without Borders Press Freedom Index, which shows that press freedom in the U.S. plummeted from 20th to 47th place between 2010 and 2011. In short, the post-9/11 national security state has not been rolled back but kept intact. Jónsdóttir’s experience, she wants us to know, is not a fluke.

Glenn Greenwald has made a similar case, positioning Wikileaks as part and parcel of a new kind of journalism that mixes crowdsourcing, the internet and professional journalism. After a recent talk in Ottawa co-hosted by the National Press Club, he also mentioned that Wikileaks had invited journalists to use its material long before all hell broke loose in 2010, but it was the lure of exclusive access in their respective home markets that finally enticed The Guardian, the New York Times, Der Spiegel, Le Monde and El Pais to the table.

In other words, it was the pull of exclusive rights and private profit, not a good story, that brought the press to Wikileaks’ table, and it into the journalistic fold. And seen in that light, Wikileaks serves as a much-needed corrective to lazy and cautious journalism.

Harvard University law professor Yochai Benkler makes a similar case, but in a much more systematic and constitutionally grounded way. He also shines a light on how the network free press is being subjected to death by a thousand legal and extra-legal cuts, when what we need is a strong press to counter the power of the strong state if democracy is to have a hope in hell of surviving, let alone thriving.

Benkler argues that attempts to bring Wikileaks to heel have involved a dangerous end run around the constitutional protections for the networked fourth estate, i.e. the First Amendment. Pressure from Senator Joe Lieberman, Chair of the Senate Committee on Homeland Security and Governmental Affairs, for instance, exemplifies the point: in December 2010 it led webhosting provider Amazon, domain name supplier everyDNS and financial payment providers (Paypal, Visa, Mastercard) to withdraw internet and financial resources that are essential to Wikileaks’ operations.

While government actors are prevented from taking such actions by First Amendment protections for the press, Lieberman did an end run around those constitutional restraints by using commercial actors who were, Twitter aside, all too willing to serve the state on bended knee, together with a campaign to denigrate Wikileaks’ journalistic standing. Such actions eliminated the routine financial channels (Paypal, Visa, Mastercard) through which an estimated 90 percent of Wikileaks’ donor funding flows, and forced the organization to scramble to find a new domain name provider and webhosting site.

Now of course, some argue that Wikileaks has nothing to do with journalism and the free press. They are wrong.

Remember, it set the global news agenda repeatedly in 2010, mostly by working hand-in-glove with the world’s leading newspapers to edit and publish stories. It has won oodles of journalism awards before and after these events, as the following partial list shows: Economist – Index on Censorship Freedom of Expression award 2008; Amnesty International human rights reporting award (New Media), UK 2010; Human Rights Film Festival of Barcelona Award for International Journalism & Human Rights, 2010; International Piero Passetti Journalism Prize of the National Union of Italian Journalists, Italy 2011; Voltaire Award of the Victorian Council for Civil Liberties, Australia 2011; Readers’ Choice in Time magazine’s Person of the Year (Julian Assange) 2011. The honorifics bestowed on the “Twitter – Wikileaks Four” by The Guardian, referred to earlier, add yet another.

Awards are nice, and the recognition helps to bestow legitimacy, Jónsdóttir observes, but the real key is to keep pushing the envelope. To that end, she updated us on the Icelandic Modern Media Initiative (IMMI) that she and others have spearheaded since its birth in the Icelandic Parliament in July 2010. IMMI is, in short, a “dream big” project designed to make Iceland a digital free media haven where whistle-blowers are protected by the highest legal standards in the world and where the value of net neutrality is formally incorporated into the country’s new Constitution, which now awaits Parliamentary ratification.

Thus, as she rails against powerful forces on the global stage, Jónsdóttir is helping to build in Iceland a model of information rights, privacy and free speech for the world.

These are important things, she says, because they are all about our history and about making democracy fit for our times. In terms of history, and reaching for the right words, she points to the importance of Wikileaks as

“part of the alchemy of what is going on in the world. . . . The Iraq and Afghan war logs changed how people talk about the wars. It has provided us with a very important part of our record, our history”.

As for democracy, “voting every four years is absolutely not democracy, it is just a transfer of power”, Jónsdóttir exclaims as our conversation draws to a close. Of course, the rule of law, an open internet, and fighting against the strong state are essential, but these are abstractions unless they are made personal and concrete.

Hmmm, the battle for the open internet and the network free press, now it’s personal. That seems like a great way to think of Birgitta, and of our long afternoon together last week.

Should ISPs Enforce Copyright? An Interview with Prof. Robin Mansell on the UK Case

Should Internet Service Providers (ISPs) be legally required to block access to websites that facilitate illegal downloading and file sharing, or to cut off the Internet connections of those who use such sites?

In Canada, the answer is no, and recently proposed legislation expected to be re-introduced soon, Bill C-32, the Copyright Modernization Act, would not change this state of affairs, despite all the other flaws that it might have (see here for an earlier post on the proposed new law).

That’s not the case in a growing number of countries, notably the United Kingdom, New Zealand, France, South Korea, Australia, Sweden and Taiwan. Indeed, after pushing hard for the past decade to get stronger, broader and longer copyright laws passed, as well as using digital rights management to lock content to specific devices, in 2008 the IFPI (International Federation of the Phonographic Industry) and the RIAA (Recording Industry Association of America) shifted their first priority to the idea that ISPs should be legally required to block ‘rogue websites’ and adopt “three strikes you’re out” measures that cut off the accounts of Internet users repeatedly accused of illicitly downloading and sharing copyright-protected content online.

While not formally required by law to do so, Canadian ISPs such as Bell, Rogers, Shaw, Cogeco, Telus, Quebecor, etc. have agreements with the recorded music industries and other “copyright industries” to disable access to illicit sites. Moreover, their Terms of Service/Acceptable Use Policies explicitly state that they reserve the right to do just this.

Exactly what the conditions are, and how often they are used, well, who knows? The arrangements, as I just said, are informal — something of a black hole rather than an open Internet, essentially.

As Rogers Acceptable User Agreement explicitly states, for example:

“Rogers reserves the right to move, remove or refuse to post any content, in whole or in part, that it, in its sole discretion, decides . . . violates the privacy rights or intellectual property rights of others” (“Unlawful and Inappropriate Content” clause). (Also see Bell’s Acceptable Use Policy, p. 1.)

So, it is not that Canada is some kind of “free Internet” zone, but rather one where the terms are set privately by ISPs (our major TMI conglomerates) and the “content industries”. This seems like a really bad idea to me.

The UK adopted an even worse approach, however, by giving such measures the force of law when it passed the Digital Economy Act in 2010, a law sped through Parliament in near-record time (i.e. two hours of debate) after incredible levels of lobbying from the music, film and broadcasting industries (see here). Two major UK ISPs, BT and TalkTalk, have fought these measures tooth and nail, but they have suffered a series of defeats in the courts.

I recently spoke with Professor Robin Mansell, who took part in these proceedings as an expert witness on behalf of BT and TalkTalk. Her experience sheds much light on the potential impact of these measures on the evolution of the Internet and Internet users. I also asked her about the tricky role of academics in such cases, given that being an expert witness essentially bars you from discussing details of the case, a position that obviously clashes with academics’ obligation to make knowledge public.

Professor Mansell is a Canadian who completed her Ph.D. at Simon Fraser University. She is a Professor of New Media and the Internet at the London School of Economics, where she was Head of the Media and Communications Department (2006-2009). She has been a leading contributor to policy debates about the Internet, the Information Society, and new information and communication technologies. She was also President of the International Association for Media and Communication Research (IAMCR) (2004-2008) and has served as a consultant to many UN agencies as well as the World Bank. You can learn more about her here.

Although the Court of Appeal rejected BT and TalkTalk’s challenge to the Digital Economy Act in June, several other developments in the UK since May have kept the issues on a high boil and still unresolved:

  1. The Hargreaves Report published in May was scathing of the lack of evidence underlying the development of copyright policies, and how “lobbynomics” rather than evidence has been driving the policy agenda (for an earlier blog post on the report, see here);
  2. Another High Court decision in July required BT and other ISPs to block access to the site Newzbin;
  3. The Government decided to adopt all of the proposals in the Hargreaves Report in August;
  4. The measures in the Digital Economy Act requiring ISPs to block illegal file-sharing sites were put on hold in August after a report by the British telecom and media regulator, Ofcom, found that the measures would be unworkable (also here).

Dwayne: How did you become an expert witness in the BT/TalkTalk challenge to the Digital Economy Act? And who was backing the adoption of these measures?

Professor Mansell: I was invited by BT’s Legal Division to do so.  They came to me on the recommendation of another academic who was serving as an advisor to the regulator, Ofcom, and so could not do it for conflict of interest reasons.  They also invited Prof. W. Edward Steinmueller, University of Sussex, to work with me, since he is formally trained as an economist and could take on the ‘copyright economist’ from the US who was expected to appear on behalf of the creative industry actors who have pushed so hard for the law.

The key players arrayed against BT and TalkTalk, in addition to the Government, included the following members of the ‘creative industries’: the British Recorded Music Industry Association, the British Video Association, the Broadcasting Entertainment Cinematograph and Theatre Union, the Film Distributors Association, the Football Association Premier League, the Motion Picture Association, the Musicians Association, the Producers Alliance for Cinema and Television, and Unite. The Open Rights Group, somewhat similar to Open Media in Canada, also filed an intervention that, essentially, supported BT and TalkTalk’s position, but from a basis steeped more in open Internet values than in business considerations.

I have training in economics, but no formal degree as mine are in Communication (Political Economy) and Social Psychology.  As far as we know we were the only academics hired by BT/TalkTalk to participate in the High Court Judicial Review of the Digital Economy Act (DEA) 2010.

We realised we would be bound by confidentiality once we signed on.  In the UK, our initial report challenging the measures set out in the Act came into the public domain after the judgement, but not the evidence submitted by the creative industry players against the BT/TalkTalk case or our rebuttal to that.

We had both worked and published on issues of copyright before and felt that there was a chance that the Judge might rescind the Act – a small one, but we thought it worth trying. This was the only way we could see that the provisions of the Act might be overturned since it had got on the books in the last days of the previous Labour Government.

In the event, the Judge decided that the DEA should be implemented for two main reasons: 1) there is no empirical evidence of what its impact will be from anyone’s perspective – just claims and counterclaims; and 2) it is for Parliament to decide how copyright legislation balances the interests of the industry and of consumers/citizens, not for the courts. BT/TalkTalk appealed the decision and lost again.

Dwayne: What implications does the most recent court set-back have for principles of open networks/network neutrality, copyright, privacy and user created content (UCC)?

Robin: The central issue in this case was whether the ‘graduated response’, or ‘three strikes you are out’, strategy being lobbied for by the creative industries to curtail online P2P file-sharing that infringes copyright is a disproportionate response to file-sharing practices that are ubiquitous. Another issue was whether the implementation of the measures by ISPs (with a court order) is likely to have a chilling effect on the way people use the Internet.

From the copyright industry point of view, the central issue was whether the government and ISPs would support their argument that this strategy is essential to their ability to stem the losses they are experiencing in the music, film and broadcast programming sectors which they attribute to infringing downloading by individual users – and more importantly to enable them to recover the lost revenues, or at least some of them. The creative industries players argued that it was essential for ISPs to play an active role in stemming the tide of copyright infringement.

The bigger issue of course is whether P2P file sharing is simply indicative of one of many ways in which Internet users are finding creative ways of producing and sharing online content in a ‘remix’ culture where the norms and standards for good behaviour online have changed enormously and with little evident regard amongst some Internet users for existing copyright provisions. In the face of these changes, the incumbent creative industry companies are seeking ways of extending their control over the use of copyrighted digital information in many ways, just one of which is stronger enforcement of copyright legislation which currently makes it illegal to copy even for non-commercial purposes of private use and creates a narrow window for licensing for educational use.

BT/TalkTalk framed the issues mainly in terms of the threat to their own business interests in terms of reputational and financial costs if they are required to divulge the names of their subscribers to the creative industry firms (albeit with a court order) when they are accused of infringing copyright.

We framed the issues in four ways:

  1. the disproportionality of the DEA response in light of changing social norms and behaviours online which means that there is little if any evidence that the threat of punishment will change online behaviour;
  2. the disproportionality of the response because it sets a wide net that is very likely to encompass those who use ISP subscribers’ access to the Internet (family, friends, users at work, in public places, etc.) for purposes of which the subscribers themselves have no knowledge;
  3. the lack of disinterested evidence on industry losses and revenue recovery since all the quantitative evidence is based on creative industry data or on studies which are flawed in terms of methodology; and
  4. the implications for trust and privacy when Internet users are being monitored for this purpose.

In this specific case, the arguments did not tip over into debates about network neutrality, but they easily could have. The techniques that are used to monitor subscriber online activity go in the direction of the same deep packet inspection techniques that also enable ISPs to discriminate among different types of Internet traffic.

However in this case, they were only being asked to provide subscriber information based on the monitoring performed by firms hired by the copyright industry firms themselves to monitor spikes in volume and the sites from which downloading occurs. This doesn’t go directly to what ISPs themselves are doing or not doing with respect to monitoring types of traffic, so technically isn’t about network neutrality. The ultimate effect, however, is not all that dissimilar.

Dwayne: You have mentioned for two years running now during talks at IAMCR that the role of ‘expert witness’ is a double-edged one, on the one hand allowing scholars a seat directly at the table while on the other hedging the scholar’s role about with all kinds of requirements concerning the nature of the facts and evidence that can be submitted, non-disclosure agreements, etc.

Can you elaborate a bit more on this conundrum? What would be your advice to those torn between the ‘expert witness’ and ‘activist’ scholar role?

Professor Mansell: This issue is always on my mind!  The role of an ‘expert witness’ in a court case can vary a lot depending on the jurisdiction. In the UK you can end up knowing quite a lot more as a result, but you also cannot write about it in an academic way because you cannot cite the sources which remain confidential even after the case is over. After the case is over of course you can argue as you wish retrospectively, but then ‘the horse has left the barn’.

Another issue is the problem of what counts as evidence. The courts look for some kind of irrefutable quantitative evidence. Failing that, they look for persuasive theoretical arguments about how the world ‘might be’, overlooking the unrealistic assumptions about how economic incentives work in the market, or they look for generalisations from fairly flimsy empirical studies about what mainly US college students report about their own copyright-infringing behaviour and future intentions.

The problem for the ‘expert witness’ is that while it is possible to refute the assumptions of theory and poorly conceived methodologies, it is not possible (usually) to present quantitative empirical evidence that is any more robust because it simply doesn’t exist.  It is possible to present good arguments (based on political economy, sociological or cultural analysis of changing norms, market structures and dominant interests, and power relations).  But if you know that the Judge is likely to be persuaded mainly by the economics arguments, one is not going to get very far.

Thus, the question arises as to why enter the fray in the first place? Why not work as an activist or work as an academic to influence the policy makers directly before the legislation gets on the books?

Both routes are needed, but time constraints often mean that they are hard to pursue in a consistent way. And of course interacting continuously with policy makers raises its own challenges. Not the least of these is that they are setting the agenda and are already echoing the prevailing view that the balancing of interests in copyright protection is clear and unproblematic. It is a real uphill battle to depart from this view – and there is a strong likelihood that the door to the room or corridor where policy decisions are made will be shut.

In the case of copyright enforcement and the UK judicial review of the DEA, there are critical scholars in the community who could have been taken on by BT/TalkTalk and who are likely to have promoted the view that the whole of the copyright regime needs to be dismantled in favour of an open commons; they were not invited to participate by those setting the terms of engagement.

The Open Rights Group did participate in the judicial review as an intervener and their argument was quoted by the Judge, but this didn’t alter his view it seems.  In terms of the academic evidence, he basically said that this was a complex issue which should not be put before the courts.

Dwayne: The Court dismissed the challenge to the Digital Economy Act, finding that it was entirely within the purview of the UK Parliament to pass laws of this kind and to strike the balance between the competing interests in the way that it did. You described this as a total loss. Can you explain why and what the implications might be?

Professor Mansell: I think I said this because the Government claimed that the DEA is aimed at balancing legitimate uses of the Internet and freedom of expression against the costs of implementing technical sanctions against Internet users, assuming authorisation by the courts.

The Court accepted our argument about the ambiguity of the results of empirical studies of online user intentions and behaviours with respect to copyright infringement. It also accepted the argument that Internet users may take steps to avoid legal liability resulting in a chilling effect on the development of the Internet. But, it did not accept that such an effect would exceed the benefits of enhanced copyright protection.

Ultimately, it left it to Parliament to decide the appropriate weighing of the interests of the creative industries and Internet users, which the Government claims has already been done in the legislation.  So we go round and round …  the DEA enforcement legislation goes ahead and the copyright legislation it is designed to enforce stays in place – a ‘total loss’ (for now till the next round).

Meanwhile the creative industries, as we know, are experimenting with all sorts of new business models in their bid to change the way they raise revenues through the provision of digital content. Perhaps the sheer pressure of mass Internet user activity and infringing downloading will eventually give rise to fairer models – we can wait for this to happen, but it is a shame that the rights of these users are likely to be infringed and some will be punished for behaviour that one day may be seen as entirely appropriate and even welcomed!

We argued that, in light of uncertainty about the direction of change in social norms and behaviour online, legislation that seeks to suppress P2P file-sharing by bringing legal actions against individual infringers is likely to disrupt, or alter the course of, Internet development in ways that cannot be assumed to be benign. The evidence favours the interests of the rights holders, while the interests of those engaging in infringing file-sharing are downplayed or excluded. This cannot be said to be a proportionate response to the incidence of infringing file-sharing.

Since the judicial review, an independent report commissioned by the Prime Minister (The Hargreaves Report) has emphasised the need for change favouring better access to orphaned works subject to copyright and copying for private and research purposes and greater emphasis on the impact of legislation on non-rights holders and consumers.  But, it still says that the DEA provisions for the ‘graduated response/three strikes you are out’ should go ahead until such time as there may be evidence that it is not working.  Again, the harms will already have occurred even if evidence shows that the measures are not working the way the industry claims they will and Internet users continue their infringing downloading activity.

Dwayne: Last question, Robin. Do you think that the recent moves by the UK government to adopt the Hargreaves Report in whole and to put aside ISP blocking requirements change the picture?

Professor Mansell: There is a difference between the provisions in the DEA to go after individual file sharers through the ‘Graduated Response’ tactic, which is going ahead, and the concerns expressed by ministers as to whether they can get ISPs to take down the big enabling sites.  My understanding is that is the issue under discussion.

Some of the other Hargreaves recommendations may well start to go ahead – we will see how quickly, but they do not go to the specific issue of using ISPs to help bring charges against individuals.  


