Press Pause: Why the CRTC Should Delay the Bell-Astral Round 2 Hearings

My column for the Globe and Mail today argues that the CRTC should take its time before putting the second set of hearings into Bell’s proposed acquisition of Astral Media in motion.

The column was prompted by comments made a few weeks back, when BCE indicated that it hoped the Canadian Radio-television and Telecommunications Commission might give special fast-track treatment, or “abbreviated hearings” as it called them, to its bid for Astral Media now that we are going over things for the second time.

The CRTC should do nothing of the sort and, in fact, hold off for a while before doing anything at all, because the tools the regulator will rely on to assess the transaction are not up to the task.

This second-kick-at-the-can strategy that BCE wheeled out after the CRTC first rejected the deal last October (see here, here and here) is highly unusual. To the best of my knowledge, nothing like this has ever been done before. There is nothing routine about this transaction and, thus, it is hardly worthy of being fast-tracked.

That is not least because the thresholds set in the CRTC’s 2008 Diversity of Voices decision (see para 87) are fundamentally flawed; they should be scrapped and new ones put into place before any review of media ownership transactions on the scale of the Bell-Astral deal gets out of the gates.

The oft-repeated idea that any merger or acquisition should automatically be approved if it results in the combined entity having under 35 per cent of the total TV market creates more problems than it solves (see here, here and here).

The 35 per cent guideline was imported from the standards set by the Competition Bureau in 2003 for reviewing mergers and acquisitions in banking, and it forms a weak standard when it comes to media diversity. Rules for banks balance competition with the stability of the national economy. Media concentration rules are about fostering the maximum amount of diversity feasible and a free press fit for democracy.

Even worse, in adopting the ill-fitting 35 per cent guideline, the CRTC cherry-picked the weakest half of the Competition Bureau’s two-part rule for assessing bank mergers.

The second part of the Competition Bureau’s guidelines suggests that there is a problem of market power when any merger or acquisition results in the top four firms controlling more than 65 per cent of the market. The share of the big four – Bell, Shaw, CBC, Rogers – is already roughly 81 per cent of the total TV programming market today – well over the Competition Bureau’s standard. If Bell does get the green light to acquire Astral Media, it would rise to just under 90 per cent. This alone is reason enough to pause and reflect.

As the Competition Bureau clearly stated:

“If the sum of the merging firms’ pre-merger market shares is below 35 per cent, there are likely to be sufficient products and suppliers to which consumers can turn in response to any attempt by the merged entity to exercise market power. If the four-firm concentration level is below 65 per cent, then co-ordination among firms in the market is likely to be too difficult to raise competition concerns (para 47).”

Conversely, when a single firm’s combined market share tops 35 per cent, its ability to exercise dominant market power is just too great, while when the top four control more than 65 per cent of the market, the potential for them to collude rather than compete vigorously in the marketplace becomes unacceptably high as well.
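To make the arithmetic behind these two screens concrete, here is a minimal sketch. The market-share figures are rough, illustrative approximations of the numbers discussed above, not official data.

```python
# Rough sketch of the two Competition Bureau screens discussed above.
# Market shares are illustrative approximations (per cent of the total
# TV programming market), not official figures.

shares = {"Shaw": 27.0, "Bell": 26.0, "CBC": 16.0, "Rogers": 12.0, "Astral": 8.0}

def single_firm_screen(share, threshold=35.0):
    """Screen 1: a merged entity above 35 per cent raises market-power concerns."""
    return share > threshold

def four_firm_screen(all_shares, threshold=65.0):
    """Screen 2: if the top four firms exceed 65 per cent, coordination concerns arise."""
    cr4 = sum(sorted(all_shares.values(), reverse=True)[:4])
    return cr4, cr4 > threshold

# Bell plus Astral stays under the 35 per cent single-firm screen...
combined = shares["Bell"] + shares["Astral"]
print(combined, single_firm_screen(combined))   # 34.0 False

# ...but the four-firm screen the CRTC left out is already well past 65 per cent.
cr4, flagged = four_firm_screen(shares)
print(cr4, flagged)                             # 81.0 True
```

The point of the sketch is simply that a merged entity can clear the single-firm screen while the four-firm concentration screen is already tripped, which is exactly the half of the rule the CRTC left behind.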

Also, the guidelines set out in the Diversity of Voices ruling did not anticipate the extent to which vertical integration would come to reign supreme across the entire sweep of the telecom, media and internet industries in just a few years. When the new rules were created in 2008, Bell had sold down its controlling stake in CTV and was pretty much out of the TV programming business. The three vertically integrated conglomerates – Shaw, Rogers and Quebecor – at the time accounted for just 43 per cent of the total TV business (delivery and programming combined).

By 2011, Bell had returned to the fold by re-buying CTV; Shaw had bulked up by taking over Global from the bankrupt Canwest. Four vertically integrated telecom, media and internet giants now accounted for more than three quarters of the TV market: Shaw, Bell, Rogers and QMI, in that order. Toss Astral Media into the mix – the ninth largest media firm in the country – and the number rises closer to 80 per cent.

I am quite sure that former CRTC head Konrad von Finckenstein never anticipated these conditions. A five-year-old 35 per cent threshold is no longer some kind of magic number upon which the Bell-Astral deal should turn come decision time.

We should also remember that not just the CRTC, but Canadians in general did not like the original Bell-Astral deal. In fact, 60 per cent opposed the deal.

Some may brush that aside as anti-capitalist populism, but such a stance is the norm and, when we probe the data in such surveys further, we find that the more educated the respondent, the more likely they are to spurn any deal that appreciably tips the scales in favour of fewer choices and more concentration.

This is the impulse of a democratic culture. It should not be treated lightly, or dismissed with scorn.

It seems to me only prudent that the CRTC take whatever time it needs to ensure that the tools it will use in Bell-Astral Round Two are up to the task. Until they are, Bell and Astral should step back and get in line rather than raising the possibility of fast-tracking this thing.

This isn’t just about Bell-Astral; it’s about the rules of the road and ensuring that the media ecology in this country comes as close to embodying democratic ideals as is humanly and politically possible.

Voltage’s Shakedown of TekSavvy, Part III: the Fight for a Competitive and Democratic Internet

Over the past few weeks, debate has roiled over Voltage’s mass copyright litigation scheme directed at TekSavvy users. This has many wondering whether the indy ISP has done enough to thwart the disclosure of subscriber identities linked to about two thousand IP addresses that Voltage alleges have been used to illegally share films and tv programs the company owns the rights to.

Here I want to add a few more thoughts to my previous posts on the topic (see here and here). The main aim is to provide a crisp distillation of the ultimate issues at stake, the benefits of TekSavvy’s approach so far and why I still believe that TekSavvy ought to directly oppose Voltage’s motion.

First, it is unequivocal that, relative to what other ISPs in Canada have done, TekSavvy is in a league of its own. Other than Telus and Shaw in the precedent-setting BMG case of 2004, no ISP has raised as many hurdles as TekSavvy has to companies such as Voltage that seek to have ISPs turn over the identities of subscribers linked to IP addresses accused of being used for illegal file sharing (see Howard Knopf’s posts on this point here and here).

In BMG, only Shaw and Telus led the charge against using ISPs as a means of getting to the subscribers behind the IP addresses being sought. Videotron actively sided with the recorded music industry, while Bell and Rogers waffled. Fast forward to 2011, when Voltage launched a similar case (the Hurt Locker case), only to face zero opposition from the three incumbent ISPs targeted for the 29 IP addresses being sought: Bell, Cogeco and Videotron. Indeed, the three ISPs agreed not to show up in court at all.

Last year, Canadian film and tv producer and distributor NGN Productions targeted four smaller ISPs, with much the same results: Distributel, Access Cooperative, ACN and 3 Web. All caved, and we hardly heard a peep about these events. Thus, compared to its counterparts, TekSavvy shines.

TekSavvy’s stance also lines up well with international best practices and the obligations of ISPs and digital intermediaries when it comes to protecting subscribers’ speech and privacy rights, as can be seen when we look at the Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression prepared for the United Nations Human Rights Council by Frank La Rue in 2011. As La Rue’s Report states,

To avoid infringing the right to freedom of expression and the right to privacy of Internet users, the Special Rapporteur recommends intermediaries to: only implement restrictions to these rights after judicial intervention; be transparent to the user involved about measures taken, and where applicable to the wider public; provide, if possible, forewarning to users before the implementation of restrictive measures; and minimize the impact of restrictions strictly to the content involved. Finally, there must be effective remedies for affected users, including the possibility of appeal through the procedures provided by the intermediary and by a competent judicial authority (para 76, page 20).

In short, TekSavvy’s actions not only shine relative to most of its Canadian counterparts, they appear to be in line with international norms regarding speech and privacy rights. Still, there are three points on which we can reasonably ask for more.

First, the La Rue report puts a lot of weight on proper legal proceedings taking place before any limits to speech and privacy are implemented. While TekSavvy has done much to make sure that such proceedings take place with a great deal of fanfare and plenty of time for the thousands of Jane and John Does implicated to be notified – all in line with what the UN report has to say – we must ask whether the legal process that La Rue refers to would be better served if TekSavvy directly opposed Voltage’s motion.

That is what Telus and Shaw did when opposing the motion for disclosure in the BMG case and it is, as I’ve argued, what TekSavvy should do in the present case. Indeed, Judge Mandamin, who presided over last week’s proceeding on the Voltage motion, seemed to have exactly this in mind when he noted that hearing a motion from only one side is risky, and that complex technical issues require those with the best knowledge of such matters, i.e. TekSavvy, to step forward.

Second, we can look elsewhere for cases where ISPs have actively opposed attempts to enroll them into the machinery of copyright enforcement. Two of the largest ISPs in the UK, BT and TalkTalk, fought tooth and nail, for example, against sections of the 2010 Digital Economy Act that did just this. While they lost, BT and TalkTalk’s opposition was part and parcel of a wave of opposition, including the influential Hargreaves Report, that sent key planks of the Digital Economy Act back to the drawing board.

Another UK case – ACS Law/MediaCat (and here) – showed how important it is to oppose copyright claimants’ bids to pursue mass litigation campaigns against alleged illegal file-sharers, because doing so often reveals the shoddy quality of the evidence that stands behind such claims. Lastly, the Australian ISP iiNet successfully fought off a push by a group of 34 movie studios, the Australian Federation Against Copyright Theft (AFACT), to have the ISP play an active role in enforcing their copyright interests. iiNet won the initial trial in 2010, on appeal to the Federal Court in 2011 and again in the High Court last year. In short, ISPs that actively and directly oppose motions by copyright claimants have beaten back the tide on many occasions (thanks to Australian lawyer Leanne O’Donnell for the tips regarding these cases).

Third, the standard for disclosing subscribers’ information set in the BMG case is weak. The idea that claims need only be made in good faith falls far short of the stronger standard that would require those bringing such a motion to make a compelling case that they have a good chance of winning in court.

If speech becomes one of the pivots upon which such things turn, the standard will become higher yet. CIPPIC plans to push these points if it gains intervener status, but I can see no reason why having both it and TekSavvy pulling at the oars in unison would not strengthen the case for replacing the weak disclosure standards that have been in place since BMG – standards that arguably explain why so many ISPs have simply folded in the face of motions for disclosure since then – with much higher ones, especially standards that put speech rights in the front window.

Ultimately, it needs to be established once and for all that ISPs can’t be turned into agents on behalf of copyright claimants such as Voltage. This is essential given that the ink on the new Copyright Modernization Act is not even dry yet, leaving it ripe for interpretation, as Judge Mandamin noted.

TekSavvy, having been forced into playing that role, now stands in the best position to oppose the enormous burden that this places on ISPs. TekSavvy has a chance to stick up for important values with respect to its subscribers’ anonymous speech and privacy rights, and it should. Sure, CIPPIC could do this, but CIPPIC’s interests, as I noted in my last post, are distinct from those of both TekSavvy and its subscribers.

Until the likes of Voltage are successfully challenged, speech and privacy rights – pillars of a democratic communication space of which the Internet is certainly a crucial part – will lie fallow, resting more on the rhetoric of internet freedom than on a sturdy legal foundation, or an economic one for that matter, if even good (in the normative sense) ISPs like TekSavvy keep taking a financial beating. In short, I hope that the occasion can serve to effect an interpretation of the law that (a) minimizes to the least possible extent the role that ISPs (and other digital intermediaries) are forced to play as agents in the copyright enforcement machinery and (b) maximizes internet users’ speech and privacy rights.

The fact that TekSavvy has broken ranks with the past practices of incumbent ISPs and others, who have rolled over and disclosed subscriber information in pretty much every case since BMG (Telus and Shaw, in that case, being the exceptions), also demonstrates the importance of having as much diversity and competition in internet access as possible. A more competitive and diverse supply of internet access means that subscribers will be less vulnerable to a handful of players being shaken down by copyright claimants for their personal information.

Voltage’s TekSavvy Subscriber Shakedown, Part II: Big Win for TekSavvy or Room for More?

Yesterday, a Federal Court in Toronto decided to postpone Voltage Pictures’ motion to have TekSavvy divulge the subscriber identities linked to 2000 IP addresses that Voltage claims have been used to share its movies illegally. Does the result vindicate TekSavvy’s refusal to oppose the motion and mark at least a partial victory for its subscribers, as some are suggesting?

My friend and colleague David Ellis makes an excellent case for why the answer is yes. As David sees things, far from caving, TekSavvy “was in fact working against Voltage on several fronts”. I’ve talked with several people with good knowledge of the case, thought long and hard about it, and while I agree with many of David’s points, I’m not convinced TekSavvy got the wins he thinks it did.

Let’s start on the positive side of the ledger though, because there is much to appreciate in what TekSavvy has accomplished thus far, with potential for more to come.

Standing Up for Subscribers

First and foremost, TekSavvy has dedicated many hours and, according to statements made in court, already spent $190,000 in legal fees and other costs fighting to ensure that its subscribers’ interests are properly accounted for. Besides David’s kudos for TekSavvy, CIPPIC’s director David Fewer is emphatic that the indy ISP deserves much praise for fighting strongly for its subscribers to be notified and more time to put together a proper legal defense.

Standing up for CIPPIC and the Public Interest

Second, TekSavvy has pushed hard to open space for CIPPIC, the public interest internet law and policy clinic, to gain standing in the case (more on this below). While nothing has been decided on this point, comments by Judge Leonard Mandamin suggest that CIPPIC will gain standing, as David’s post and live tweets from the courtroom by Paul Andersen and National Post reporter Christine Dobby indicate.

Voltage argued strenuously against TekSavvy advocating on behalf of a role for CIPPIC. Its lawyer, James Zibarras, argued the move to defer a ruling was just a delaying tactic to mask the fact that TekSavvy had no case. Justice delayed would be justice denied, he claimed, because as the courts fiddled, Voltage’s movies would be ripped and burned across the planet. The judge was having none of it, however, and the matter was put on hold.

Starting Over: Letter from Voltage – Dear Fans

Third, TekSavvy’s counsel, Nick McHaffie, succeeded in getting Voltage to walk its scorched-earth strategy back several steps. Voltage had gone straight for the subscriber identities linked to the 2000 IP addresses it has identified, bypassing the usual first step in such cases: asking ISPs to politely send cease-and-desist letters to those allegedly engaged in illegal file-sharing, while using this as an opportunity to convert pirates into paying fans.

Voltage did none of that. A late-in-the-game bid by McHaffie changed this. As a result, Zibarras agreed to do just that, with McHaffie making it “very clear”, according to Ellis, that “he intends to put language into the draft order that will protect the privacy of potential defendants”.

Compared to Other Canadian ISPs, TekSavvy’s a Saint

Fourth, TekSavvy’s efforts, as Jean-Francois Mezei put it in a perceptive comment on my last post, distinguish the indy ISP from others who have rolled over and shut up in two similar cases. In the first, also brought by Voltage in 2011, Bell, Videotron and Cogeco not only did not oppose the motion to disclose the identities linked to fifty IP addresses alleged to have illegally shared the movie The Hurt Locker, they didn’t even bother to show up in court. Despite winning the case, Voltage abandoned its claims last March and things came to a halt (also see here).

In another case late last year, four ISPs – Distributel, Access Cooperative, ACN and 3 Web – faced a similar motion by Canadian motion picture company NGN Productions. Once again, all of the ISPs were missing in action, leaving their subscribers twisting in the wind (also see here and here).

At this point, I also need to clarify and correct a point I made in my last post: in the precedent-setting BMG case, far from all of the incumbent ISPs lining up against the record labels, only Telus and Shaw took the lead, while Bell and Rogers selectively and reluctantly joined the fold; Quebecor (Videotron) actively sided with the record labels (see CIPPIC’s archived materials).

In short, relative to most ISPs, TekSavvy is a saint, and should be applauded for going the extra mile on behalf of its subscribers.

A Glass Half Empty/Full: What Else is a Good ISP to Do?

While TekSavvy has gone well beyond the norms that prevail among Canadian ISPs, its stance still falls short of what is possible – not in some fantasized world, but given the legal resources available and the best practices adopted both in Canada and elsewhere.

Delays May Be Useful, But Are Not a Legal Victory

The first thing to note is that even after spending $190,000, TekSavvy has not won anything yet in terms of a legal ruling other than two delays that allow others more time to get their houses in order. More to the point, it is still not opposing Voltage’s motion.

Standing Up for Privacy is a Real Option, even if not an Obligation

While discussion with others has led me to accept that Canadian law, and PIPEDA specifically, does not compel ISPs to take a stance on behalf of their subscribers’ privacy, PIPEDA does give them the opportunity to do so. TekSavvy should take it.

That it has not stands at odds with best practices set by Telus and Shaw in the BMG case. Even Rogers, which otherwise waffled in the face of the record labels’ case at the time, agreed that ISPs are “obliged to protect . . . the privacy of their customers . . . by virtue of the Personal Information Protection and Electronic Documents Act (2000)”(para 13). This appears to be a moral position rather than a legally compelled one, but so be it if it aids in gaining a big win for subscribers’ privacy. After all, human rights are but empty legal shells if not moral rights, too.

CIPPIC is Not a Proxy for TekSavvy

While TekSavvy’s intervention has opened space for CIPPIC, the decision to defer a ruling on the motion does not guarantee it will be permitted to intervene. Even if it is, CIPPIC is not a proxy for TekSavvy but, as its request for intervener status states, it “brings an important public interest perspective to the proceedings, different from the Plaintiff, the Defendants and the non-party Respondent” (emphasis added).

As CIPPIC director David Fewer told me, CIPPIC’s first role, if it is granted intervener status, will be to underscore the importance of the right to anonymous speech online, with judges functioning as the safety valve in determining when such rights must yield to more pressing public policy concerns such as hate speech, defamation and copyright (see danah boyd for a good discussion of the vexed issue of anonymous speech rights). If the Voltage motion is not just about privacy rights but speech rights as well, the fundamental question is which test will be used to decide when the right to anonymous speech can be overridden.

The continuum of options stretches from the weak ‘good faith’ standard adopted in BMG and other copyright cases to stronger standards in expressive rights cases that require those pressing a claim to demonstrate they possess evidence of a high enough standard that they just might win. In other words, when property rights trump speech rights, there had better be good policy reasons and strong evidence for doing so.

CIPPIC’s stance reflects the increasing awareness that copyright claims have enormous implications for freedom of expression. That might not be of interest to TekSavvy, but it is a public interest of the highest order. It is also why CIPPIC needs to be in the room.  

CIPPIC’s second concern is to raise questions about whether the courts are being used illegitimately as part of copyright trolls’ business model, a model that depends on people, when faced with the threat of litigation, making the rational choice to fold by simply settling rather than going through a costly court case. That Voltage went straight to a motion for disclosure rather than taking the time to send cease-and-desist letters throws such concerns into sharp relief.

CIPPIC’s role, thus, is specifically not to intervene on behalf of any of the Jane or John Does who might come forward in the Voltage motion, or on behalf of TekSavvy, because the interests of each of these groups are not one and the same. ISPs must take a stand for themselves. And with a mountain of factors making it unlikely that the hundreds, if not thousands, of Jane and John Does will be able to participate effectively, as Howard Knopf states, CIPPIC’s job is to suggest how the law should be applied, to propose which tests should be used when property and speech rights clash, and to uphold the public interest.

TekSavvy, the Federal Court Wants (Needs) You

Towards the end of yesterday’s hearings, Judge Mandamin indicated that hearing a motion from only one side is risky. Two possible interpretations seem to flow from this: One, CIPPIC could play a more adversarial role, and perhaps it will. Or two, TekSavvy needs to step up to the plate more forcefully than it has.   

I think the judge had the latter option in mind, but it is likely that only he and others in the room will ever know for sure. Two things seem to support this interpretation. First, Mandamin was clear that the Copyright Modernization Act, which just came into effect last November, is new and untested, meaning it’s ripe for interpretation and essential to get things right. TekSavvy has an opportunity to help define the new law and should use it. This is a job for those on the front line, not CIPPIC or a rag-tag group of Jane and John Does who may or may not show up when needed.

Judge Mandamin also made it clear that there were difficult technical issues that had to be dealt with and that the court needs to be as informed as possible. TekSavvy is in a better position than any to test the quality of the technical evidence, and for this reason, too, it should go beyond its current stance to directly oppose the motion.

Not a Fantasy

In the end, it is not that TekSavvy is doing nothing. As I argue above, and as David Ellis shows, it has done much, especially relative to what other ISPs have done. For that, we should stand in support of Marc Gaudrault, rather than casting barbs from the sideline.

That said, however, there is more scope to do more. My desire to see more does not stem from holding TekSavvy to some other-worldly standard of privacy or anything else, but from concrete possibilities within currently existing laws, as Telus and Shaw (and, to a lesser extent, Rogers and Bell) showed in the BMG case and, as I suggested in my last post, as the best practices adopted by Sonic.net and Twitter in the U.S. and by ISPs in Sweden show.

They have taken an active and assertive role in directly opposing motions by copyright claimants and/or the state to disclose their subscribers’ account-related data. In the case of Sonic.net as well as the Swedish ISPs, they have embraced policies that minimize the collection, retention and disclosure of subscriber information, thereby making it harder to turn over subscriber information to copyright trolls, and anyone else, because they simply do not have it.

Yes, as someone I respect very much told me, I should be careful what I wish for, because if this mini-campaign for minimalist data collection, retention and disclosure policies gains legs, it’s possible the Harper Government would step in to mandate a minimum data retention law, likely in the range of six months.

My response is two-fold. First, we’ll deal with it if it happens. It’s not possible to be shadow-boxing with ‘what-ifs’. Should ISPs and other internet companies adopt this pet-project of mine, and face such a reaction, as some smart minds contemplate, then let us resume the battle royale that such a move could trigger, similar to the public outcry to the government’s last lawful access bill (Bill C-30).

Second, if expressive rights are tied to concerns about control over our own personal information, then perhaps it would be possible to challenge any attempt to legislate a data retention requirement on the grounds that such a measure is excessively broad and an affront to speech rights. A more tailored response seems to have been grasped in the new Copyright Modernization Act, where the need to retain subscriber data for six months kicks in only after an ISP receives notice of IP addresses that have been linked to infringing behaviour. Data retention is a bit of a black hole when it comes to the interests of property and the state in Canada, and the sooner we shed some light on it, the better.

Last Words

David Ellis, J.F. Mezei, and others are right that TekSavvy has done more than most and won a few victories along the way. With all that TekSavvy has done over the years, it would be churlish to see it as selling out. 

However, there is more that it can and should do. At this early stage in the shaping of the new copyright law, carving out an even greater role for itself could fundamentally shape the legal landscape for the internet and digital media for years to come.

And it is for all these reasons that I hope it will rise to the occasion, while being mindful that it has done much already and does not have an unlimited stash of cash. Perhaps this is grasping at straws, but how about a John and Jane Doe and TekSavvy copyright-troll-busting fund?

If that’s an option to be pondered, I’m in for $190 to start (1/1000 of what TekSavvy has on the table so far).

Voltage’s TekSavvy Subscriber Shakedown: What’s a Good ISP to do?

Tomorrow will be a big day in a federal court in Toronto. At 11am, the court will hear a motion by Voltage Pictures to have Canadian indy-ISP and darling of the open internet community, TekSavvy, disclose the subscriber names and contact addresses associated with a list of 2000 IP addresses that Voltage alleges have been used to upload and share its films and tv programs in violation of copyright law.

At the end of the day we may know whether Voltage has prevailed and TekSavvy has been forced to hand over the subscriber account information linked to those 2000 IP addresses. But while we wait, there is another question that I want to address in this post: has TekSavvy done as much as it should to oppose Voltage’s motion?

As TekSavvy’s CEO Marc Gaudrault stated in DSL Reports last December when the case first erupted into public view, “we will not be making a case against the merit of what they are alleging. That’s for those affected and others to do if they wish to.”

That refusal to take a stand has, to put it mildly, displeased many of its subscribers. It has also unleashed a roiling discussion thread on DSL Reports as well as the blogosphere. Respected copyright lawyer Howard Knopf (here, here and here) and Jason Koblovsky (here & here), one of the co-founders of the Canadian Gamers Organization, have been highly critical of TekSavvy, arguing that it should be doing more to push back against Voltage’s shakedown of the ISP.

Drawing on his experience as legal counsel to CIPPIC in a close parallel to the motion now in front of us — the BMG case in 2005 — Knopf argues that TekSavvy should take the lead in opposing Voltage’s motion for at least three reasons:

  1. First, since it is the only entity that can resolve the link between IP addresses and subscriber identities, it is in the best place to challenge the technical evidence that Voltage and its forensics contractor, Canipre, have put forward;
  2. Second, in the BMG case, Telus and Shaw actively stood in opposition to the record labels’ bid to obtain subscribers’ identities on just this ground and TekSavvy should do no less in the present case, especially given that it holds itself out as being more attuned to its subscribers’ interests than its corporate cousins – a point that Koblovsky also relies on heavily;
  3. Third, it is too much to ask of CIPPIC, an organization with a skeletal staff and limited resources, to take the lead in the case.

The criticism of TekSavvy has led to a lot of soul-searching, mostly because, to most observers, the indy-ISP has been on the side of angels. The little-ISP-that-could, for instance, led the charge against the CRTC’s hated UBB decision in 2011, has intervened time and again in a myriad of regulatory decisions in which the fate of indy-ISPs has been on the line, held itself up as a plucky alternative to the incumbents with more affordable services, bigger caps or none at all, and has been a patron of Open Media, probably the most successful group this country has ever seen in terms of opening up arcane telecom, media and internet policy issues to a much bigger audience.

So, not surprisingly, others have come to TekSavvy’s defense. Most notably, in addition to denouncing Voltage’s mass copyright litigation (here and here), the other day David Ellis chastised TekSavvy’s critics. As Ellis sees it, TekSavvy has been working hard on behalf of its subscribers for two months. Moreover, TekSavvy quickly joined CIPPIC in asking the court to postpone the matter to give the ISP more time to notify its subscribers, to let the court consider CIPPIC’s request to join the proceedings, and to give Voltage and its hired gun, Canipre, more time to clean up their data. Ellis also suggests that the distance between pushing for a delay and outright opposition might not be that far, and we could still see TekSavvy take on a more active oppositional role yet.

He also argues that TekSavvy’s reluctance to take a stance is probably due to concerns that doing so could jeopardize its claim to being a neutral common carrier. In this view, by staying neutral, TekSavvy avails itself of ‘safe harbour’ provisions that get ISPs off the hook in terms of their own liability in copyright infringement cases.

While I agree with Ellis that TekSavvy could yet change its stance, and that it has done much to buy its subscribers time to arrange their own defense, I do not think it has done enough. I also think worries that actively opposing Voltage’s motion could jeopardize its ‘safe harbour’ defense are misguided. As common carriers, ISPs already have limited liability for what their subscribers do, and what TekSavvy does in the courtroom will have no effect on that.

I agree with Knopf that TekSavvy should be taking the lead in opposition to Voltage’s shakedown because it is in the best place to do so from a technical point of view. That there may be problems with the technical data that Voltage is presenting is evident in the fact that the company cut its initial list of 4000 IP addresses down to 2000 at the last minute – a good sign that things are not quite in order. Given the weight the BMG case put on the quality of the data in determining whether privacy would be trumped by other pressing concerns, this is essential (see para 21).
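To see why the quality of that technical evidence matters so much, here is a minimal, purely hypothetical sketch of what resolving an alleged IP address and timestamp against an ISP’s dynamic-address lease logs involves. The log format, field names and data are invented for illustration and say nothing about TekSavvy’s or Canipre’s actual systems.

```python
# Hypothetical sketch: matching a claimant's (IP, timestamp) allegation against
# an ISP's dynamic-address lease log. All names and data are invented.
from datetime import datetime

# Each lease record: (ip, subscriber_id, lease_start, lease_end)
lease_log = [
    ("203.0.113.7", "SUB-1001", datetime(2012, 11, 1, 8, 0), datetime(2012, 11, 1, 20, 0)),
    ("203.0.113.7", "SUB-2002", datetime(2012, 11, 1, 20, 0), datetime(2012, 11, 2, 9, 0)),
]

def resolve(ip, timestamp):
    """Return the subscriber holding `ip` at `timestamp`, or None if no lease matches."""
    matches = [
        sub for (lease_ip, sub, start, end) in lease_log
        if lease_ip == ip and start <= timestamp < end
    ]
    # With dynamic addressing, an error of even a few hours in the claimant's
    # timestamp can point at an entirely different household.
    return matches[0] if matches else None

print(resolve("203.0.113.7", datetime(2012, 11, 1, 19, 30)))  # SUB-1001
print(resolve("203.0.113.7", datetime(2012, 11, 1, 21, 30)))  # SUB-2002: same IP, different subscriber
```

Only the ISP holds the lease logs, which is precisely why it is the party best placed to test whether the claimant’s list of addresses and timestamps actually holds up.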

Second, ISPs are common carriers and this means their liability for what subscribers say and do is very limited, both by law and by tradition. The basics of what that means is set out in the Telecommunications Act of 1993 (see sections 27-29 and 36). Common carrier principles are also carried over into the new Copyright Modernization Act, as the following passage indicates:

A person who, in providing services related to the operation of the Internet or another digital network, provides any means for the telecommunication or the reproduction of a work or other subject-matter through the Internet or that other network does not, solely by reason of providing those means, infringe copyright in that work or other subject-matter (sec. 31(1)).

Incumbent ISPs have always reserved the right to aid copyright claimants (read your Terms of Service agreement) and, indeed in 2011 Telus said that it was sending out 75,000 notices a month of alleged copyright infringement to its subscribers. The new Copyright Modernization Act has parlayed this informal arrangement into a notice-and-notice regime that now requires ISPs to do the same thing as a matter of law, and to retain and disclose subscribers’ information for a period of six months after receiving notice of copyright infringement.

There is nothing in the new act or the old legislation, however, that prevents or even discourages ISPs from taking a stance against a motion for disclosure. Again, as Knopf observes, when mass copyright litigation first hit Canada in the BMG case, Shaw and Telus stepped up to oppose BMG and the rest of the recorded music industry arrayed against them. Moreover, while Bell and Rogers were less committal in their opposition, ultimately they did line up foursquare with Shaw and Telus behind the view, as the court stated, that ISPs should step forward to “protect[] the privacy of their customers whom they were obliged to protect by virtue of the Personal Information Protection and Electronic Documents Act (2000)” (para 13). They won.

TekSavvy should do the same. Going out on a limb a bit, at least one seasoned lawyer that I have spoken with suggests that the case could be fought and won easily, for five figures, i.e. under $100k.

Beyond the BMG case, we can also look further afield to the United States for a recent example of what a real stance opposing a motion for disclosure looks like. When faced with a request from the Department of Justice to hand over account information for three of its subscribers, without telling them, as part of the DOJ’s investigation of Wikileaks, Twitter refused. The company obtained a court order allowing it to disclose the request to the users in question. It also put them in touch with legal counsel at the Electronic Frontier Foundation.

Finally, Twitter fought the request tooth and nail, all the way to appeal, but lost because, according to the ruling, the social media company’s business model is based on the unbridled collection of user data for advertising purposes in return for free access to the service. The upshot of that, in turn, is that users have no reasonable expectation of privacy and thus Twitter had to hand over subscribers’ account information to the state.

Whether Twitter won or lost is not the key point; the fact that it stepped up to the plate, and fought to the bitter end in support of its subscribers and a principle – privacy – is. Moreover, while a loser in the court of law, in the court of public opinion it won: Twitter’s chief lawyer, Alex MacGillivray, was named by The Guardian as one of its top twenty “champions of the open internet” last April. The Electronic Frontier Foundation offered its own honorifics.

The last point that I want to make is that TekSavvy has another option at its disposal: minimizing the collection, retention and disclosure of subscriber data as a matter of company policy. Apparently there has already been some discussion of this, with the ISP at one point, before the Voltage motion hit the fan, thinking about increasing the length of time that it keeps data logs from three months to six. That plan is now off, and that is certainly a good thing.

There are many reasons that ISPs need to keep data logs, not least of which are billing and network management. However, there are also ways of meeting these needs that limit the data kept to just these narrow purposes and which otherwise minimize how much data is collected, how long it is retained, and when it is disclosed. Billing data, for instance, can be kept separate from traffic data, with the former retained, and the latter tossed.
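As a rough illustration of what such a minimalist policy might look like in practice, here is a sketch of a retention rule that keeps billing records longer than traffic logs. The record types, field names and retention windows are hypothetical, not TekSavvy’s (or any ISP’s) actual practice.

```python
# Minimal sketch of a data-minimization retention policy of the kind described
# above. Purely illustrative: record types and retention windows are invented.
from datetime import datetime, timedelta

RETENTION = {
    "billing": timedelta(days=90),   # kept only as long as billing disputes require
    "traffic": timedelta(days=14),   # kept only as long as network management requires
}

def purge(records, now=None):
    """Drop any record older than the retention window for its type."""
    now = now or datetime.utcnow()
    return [r for r in records if now - r["timestamp"] <= RETENTION[r["type"]]]

# Example: a traffic log from three weeks ago is purged; last month's invoice survives.
records = [
    {"type": "traffic", "timestamp": datetime.utcnow() - timedelta(days=21)},
    {"type": "billing", "timestamp": datetime.utcnow() - timedelta(days=30)},
]
print(purge(records))  # only the billing record remains
```

The design point is simple: data that no longer exists cannot be demanded by a copyright troll, or anyone else.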

There are two excellent examples along these lines that I’ll close this post with. The first is Sonic.net, a San Francisco Bay area ISP with 45,000 subscribers. It keeps subscriber data logs for only two weeks and, because of this practice, has been the recipient of copious amounts of praise and a four-star rating from the Electronic Frontier Foundation in the latter’s annual “Who’s got your back” scorecard. TekSavvy could take some lessons from Sonic.net.

Lastly, in 2009, several Swedish ISPs, including one of the top three – Tele2 – began erasing “traffic data” in order to protect their subscribers’ privacy. They did so in response to Sweden’s own new copyright law, IPRED, and in order to avoid precisely the kind of predicament that TekSavvy now finds itself in.

In my view, such a minimalist data collection, retention and disclosure policy is part and parcel of what a full-throated defense of principles and subscribers would look like. The point is not to turn TekSavvy into a scofflaw, or a ghetto for copyright infringement. The cases of Sonic.net, Tele2, Twitter and others demonstrate well that strong privacy and subscriber protections are not tantamount to such things, and indeed are good business and good for people’s rights.

Minimizing the collection, retention and disclosure of subscriber information embodies practices and values that apply across domains. Today it is copyright; tomorrow, lawful access and the son of Bill C-30. Such values and practices will serve us well in that context, too.

We are in the midst of many events and choices that will lay down the ground in which the internet establishes deep roots. In my mind, we need to realize that these decisions and events will determine whether we can develop an internet fit for democracy, or whether we will see trade-offs all down the line, to the point that an open internet and democracy are just a dream. Good night.

* Note: revised January 14th to acknowledge that Bell and Rogers were far more tepid in their stance than Telus or Shaw in the BMG case, while Quebecor (Videotron) actively sided with the record labels.


Journos as Megaphones: The Globe and Mail Covers Bell

Once again, a story in the Globe and Mail yesterday was out peddling a tale of doom and gloom about the state of conventional commercial television broadcasters in Canada. This time, the story came hot on the heels of a Supreme Court of Canada ruling Thursday that threw cold water on the idea that cable, satellite and IPTV services should pay broadcast tv companies — Bell (CTV), Shaw (Global), Rogers (CityTV), Quebecor (TVA), the CBC, and a smattering of smaller independents — to deliver their signals to the tv screens of Canadians across the country.

It was a small victory for the non-vertically integrated entities that have long been in the business of television distribution, such as Cogeco, Eastlink and other cable companies, as well as several telcos across the country that are trying to expand their IPTV services in order to break into this highly concentrated field: Telus, MTS Allstream and SaskTel. Even Rogers, given its relatively small place in the conventional tv universe, opposed the fee-for-carriage model being touted by Bell, Shaw and a few others.

However, rather than entertaining the idea that the Supreme Court’s decision might be a good thing because it means that there will be no new ‘fee-for-carriage’ charges on already expensive cable and satellite bills (i.e. a “TV Tax”), or that it could foster more competition in the anemic tv distribution biz, where the big four — Shaw, Bell, Rogers and Quebecor — control roughly 84 percent of industry revenues, the Globe and Mail article hands the narrative over to the loser in the case: Bell.

Instead of framing the ruling as potentially a small victory for consumers, or examining the Supreme Court decision itself, the article rips and reads from Bell’s talking points. Of the 813 words in the article, 144 are direct quotes from Bell; the Supreme Court decision gets 37.

Indeed, Bell sets the narrative frame for the story from the get-go, not just in terms of the sheer volume of ink spilt transcribing and transmitting its view to readers, but by the fact that it is the first to be quoted, and extensively so, with paragraphs five and six completely handed over to the company’s talking points. Here’s Bell setting the stage in paragraph five, lamenting why the decision is bad, not for itself, but Canadians:

“TV viewers across the country would have benefited from long-term stability for their local television stations, which currently rely on an advertising market that has seen permanent structural change, and is no longer able to fund such a model on its own.”

A few paragraphs later, Bell locks down the frame that sticks for the rest of the story: “the ad market for local television is in permanent decline.”

But hold the phone! Are any of these claims true? Umm, there’s room for interpretation, although not in the Globe and Mail’s piece, but the answer is basically (i) mixed if we look just at broadcast television advertising revenue, (ii) no if we look at total revenues for broadcast tv and (iii) an even bigger NO if we look at advertising revenues for all tv services.

As the CRTC’s most recent Communication Monitoring Report shows, advertising revenues for conventional tv have been basically flat for the past four years, hovering between $2,320 million and $2,350 million. Advertising revenues went to hell in a hand-basket in 2009, but have risen by nearly $220 million in the two years since (p. 73).

If we look at all revenues for conventional television, the picture is even clearer. Revenues plunged in 2009 at the height of the economic downturn but, other than that, basically stayed flat between 2008 and 2010.

By 2011, revenues for conventional tv were up $86.3 million over the previous year and over $100 million more than they had been at the outset of the global economic downturn in 2008. They were roughly $315 million more than five years ago, i.e. $3,491 million in 2011 versus $3,176.2 million in 2006 (all revenue figures can be seen here). Not bad, really, and hardly the picture of distress portrayed by Bell.

Every media economist knows that the fortunes of advertising-supported media hinge on the state of the general economy. In light of that, the fact that conventional tv has weathered the economic downturn, and done so whilst so much else in its environment is in a heightened state of flux, is not a catastrophe, as Bell and the Globe and Mail would like us to believe, but quite remarkable.

Perhaps if we dig deeper to look at advertising revenues across all television services as a whole, we will see the deep structural shift that Bell claims is happening, and which the Globe and Mail simply transcribes and transmits, as dollars are forever siphoned away from television in favour of the internet?  Um, no.

The big picture for advertising revenues across all television services (conventional and pay/specialty) is even more unequivocal: television advertising revenues have risen steadily and substantially over past twelve years, as the following figure shows:

[Figure: TV advertising revenues across all television services]

Source: Interactive Advertising Bureau (2012). 2011 Actual + 2012 Estimated Canadian Online Advertising Revenue Survey; Interactive Advertising Bureau (2009), 2008 Actual + 2009 Estimated Canadian Online Advertising Revenue Survey.

While there is absolutely no doubt that all of the players are scrambling to come to terms with new realities and still-shifting ground, it is precisely because conventional television is not in crisis that the CRTC decided earlier this year to phase out the much-hated Local Programming Improvement Fund (LPIF) that it had put into place in 2008, when things really did look rocky.

Journalists do a disservice to their readers by packing stories and what purports to be analysis with talking points from Bell rather than doing the leg work needed to access readily available data that paints a fuller and, by and large, very different picture.

Of course, there is tons of room to argue over the evidence, but the flat portrait of conventional tv in decline painted by the Globe and Mail obscures the terrain of debate. If this were just an isolated instance, then perhaps we could just move along, nothing to see here. My sense, however, is that it is not.

To be more specific, we saw exactly this kind of coverage by the Globe and Mail when the CRTC quashed Bell’s bid to acquire Astral Media (see here and here, for example). Bell was essentially given free rein to vent, to tell us why the CRTC decision was wrong, how the CRTC under new chair J.P. Blais had gone activist, how Astral’s market cap had taken an undeserved beating as a result, what George Cope and Kevin Crull planned to do about things, and finally, when Bell teed up a second bid for Astral, its move was pitched as somehow being routine, just another kick-at-the-can, when it is anything but.

There are two final points to be made on these matters, at least for now. First, the task of journalists is not to act as conveyor belts for corporate PR and a monochromatic view of the world. Readers deserve better.

Second, and in this particular context, the fact that the owners of the Globe and Mail, the Thomson family, have a significant equity stake in Bell, and Bell holds a 15% stake in the Globe and Mail, raises questions about the ability of journalists to cover this beat without serving on bended knee. There is no proof that Globe and Mail journalists are taking orders from headquarters on this stuff, and if they were, the chances that we could know about it are about zero, since we have no access to the internal workings of the newsroom and the day-to-day routines of journalists.

The fact that researchers can seldom gain access to the internal workings of media organizations is why I do not generally like to try to connect my analysis of the structure of the media industries with the quality of the content they provide, whether good, bad or otherwise. One thing that this means, however, is that we have to trust journalists, and for that to happen they have to give us good reason to do so.

People who own stuff like to tell others what to do, and they certainly have the potential to do so within the media, so it seems to me that journalists must go the extra mile to demonstrate their autonomy rather than serving up Bell’s view of the world in one case after another in which the company finds itself on the losing end. Two months ago, the context was Bell-Astral; two days ago, the Supreme Court. Tales of doom and gloom advance a policy agenda, in this case that of Bell and a few others, and that is why it is so important not to parrot what they have to say.

With Bell Astral Round Two likely to be teed up in the New Year, we deserve better journalistic coverage of the media industries in this country and I sure hope we get it. The last thing we need is yet another rooftop from which the most powerful and well-endowed media voices in the land get to shout about their view of the world and how things oughta be.

Movies and Money, 2011: Bluster and Blockbusters, the Sequel

The Motion Picture Association (MPA), the lobbying arm of the major Hollywood studios, was out again last week playing whack-a-mole with anyone audacious enough to entertain heretical ideas.

This time it was a three-page abstract (yes, the abstract) of a paper, Piracy and Movie Revenues: Evidence from Megaupload, by German and Danish scholars Christian Peukert and Jorg Claussen that seemed to get on the MPA’s nerves. The abstract had sat in relative obscurity on the SSRN research website for the past month-and-a-half until Torrent Freak trotted it out last week with a trumped up title that the MPA certainly did not want to hear: “MegaUpload Shutdown Hurt Box Office”.

The title played fast and loose with the thrust of Peukert and Claussen’s paper — most films probably see a small but insignificant negative effect on theatre attendance when sites such as MegaUpload are taken down — but it was not the journos and bloggers that the MPA went after, but the paper’s original authors. The thought that sites like MegaUpload might actually be good for the movie business by helping to put more bums in theatre seats must have seemed to be just too heretical to let stand, especially when coming from academics.

As Peukert and Claussen explain, file sharing may be good for a lot of movies released in theatres every year, but by no means all, because people sharing files online can

. . . spread information about a good from consumers with zero or low willingness to pay to users with high willingness to pay. The information-spreading effect of illegal downloads seems to be especially important for movies with smaller audiences.

The upshot is that, for most movies, putting file-sharing sites (Megaupload, Isohunt, Pirate Bay) out of business could reduce the size of the theatre-going audience — the exact opposite outcome intended by those who believe that strong copyright laws and enforcement are essential to remedying whatever might ail the traditional media. Whereas Peukert and Claussen deliver this conclusion in careful and measured language, the headline pinned on the article describing their work by Torrent Freak, “MegaUpload Shutdown Hurt Box Office”, definitely did not.

The thrust of the Torrent Freak piece played well to the open internet, copyright minimalist crowd, confirming that the incumbent Hollywood movie moguls must have their heads stuck in the sand, given their steadfast and stupid resistance to the new way of doing things in the ‘new internet economy’. Technically, the headline was true. The problem, however, is that this particular truth hides an even bigger one, at least for the MPA and its members: the slight impact seen for most films does not hold when it comes to the MPA members’ blockbuster films, you know, the big budget spectacles that open on 500 screens across North America all at once (before moving in carefully staged sequences across the planet).

This is a pretty big exception and basically covers the 140 – 150 films produced by the Hollywood studios each year, which are the real bread and butter of the MPA’s corporate rank and file: Time Warner, News Corp, Disney, Sony, Paramount (Viacom) and Universal (Comcast NBC). For these films and the majors that finance and produce them, Megaupload and its ilk are bad news indeed, and little in Peukert and Claussen’s study challenges this idea.

To suggest that this is not a main part, if not the main part, of the story is misleading. As far as I can tell, however, this is not the fault of the paper’s authors but of how their work was pumped up into something that it wasn’t by the bloggerati and real journos who seem ever more prone to trawling the blogosphere and twitter for ideas and inspiration.

Not surprisingly, the MPA had a radically different take on things, given that its main concern is not with most of the six hundred or so films released in theatres around the world every year, but with the 140 – 150 films produced by its members, which account for most of the revenues in the movie business worldwide. As the MPA interpreted Peukert and Claussen’s paper, correctly in my view, the evidence seems to suggest that blockbuster films have bigger theatre audiences when they do not compete with Megaupload and other such sites. This is probably because the massive promotional budgets associated with blockbusters do not need file-sharing to amplify and augment word-of-mouth to build buzz around a film in the way that smaller films, of a more obscure vintage, produced and distributed outside the Hollywood system, do.

However, to stop here would be to give the MPA too much credit. The MPA does little more than point to the obvious. More importantly, instead of focusing on how scholarly findings have been twisted and trumped up by bloggers and journos, the MPA takes a run at Peukert and Claussen’s methodology, as if it is the scholars rather than others who are out causing mischief.

The assault on methodology is wide of the mark. Designed more to muddy the waters and distract attention, it is an exercise in intellectual dishonesty. While trying to cast doubt on the paper’s methodology, as if such things undermine the study’s conclusions, the MPA offers zero evidence to buttress its criticisms or its own view that piracy and file-sharing are bad and the copyright maximalist position obvious and good.

Six Decades of Cassandra Calls and Falling Skies

These tactics are not new but part of the DNA of the film industry in the United States. Hollywood has been trotting out tales of impending doom since the Supreme Court’s Paramount decision of 1948, which forced the major studios to divest themselves of the theatres they owned in order to foster independent theatres that would, it was hoped, be more responsive to audiences because they would be less obligated to show the slate of films foisted upon them by their studio masters.

The story of impending doom continued in the 1950s and 1960s when tv became a fixture in North American homes. To be sure, film theatre attendance did fall for nearly two decades during this time, but was this because people abandoned theatres for tv at home, or the result of a combination of factors: the move to suburbia, the widespread adoption of cars and the embrace of television? I think it is the latter, as do others (see here and here, for example).

The more important point, however, is that by the 1970s television had become the film industry’s pot of gold at the end of the rainbow, moving unequivocally from threat to one of the most lucrative new media markets the movie business has ever known. The same lesson came to apply to the VCR, DVD and every other personal video recording device thereafter, though yet again not before the MPAA and its members had demonized each new technology as an existential threat to the movie business and a particular American icon.

Most famously, the MPAA’s then chair, Jack Valenti, likened the VCR to the Boston Strangler, as much a threat to the film industry as darkness is dangerous to damsels in distress. And yet again, a mixture of new, ever more personalized media technologies, along with the increased individualization of pleasure and social life in general, led the VCR, DVD, PVR, and so forth to become not just important new lines of revenue for the film industry but its most significant sources of growth (see below).

Movies and Money, 2011

If there were ever a case of an old medium being decimated by the new, you might think that a medium born in the 1890s would be a star candidate for extinction. However, as one of my mentors and teachers, Janet Wasko, once told me and my fellow classmates, each new audio-visual medium has typically opened up a new market for the major Hollywood studios and other film distributors.

This was a lesson she had drawn from her research in the 1970s and 1980s and which she told us about in the early 1990s.  But perhaps everything has changed since then because of digitization and the rise of the Internet?

Not really.  A couple of things illustrate the point.

First, let’s take a look at the MPA’s most recent report on the subject. According to the MPA, worldwide box office revenues reached an all-time high of $32.6 billion (USD) in 2011 – up from $31.8 billion a year earlier. The North American box office saw a very modest decline, but has generally stayed quite steady for the past few years, which means that it was the global box office that lifted the tide. The following figure shows the trend.

Figure 1: Domestic and Worldwide Theatre Box Office Revenues, 1998 – 2011 (millions USD)

Dom & Int'l Film Revenues, 2011

Sources: Motion Picture Association (2011). Theatrical Market Statistics.

The fact that box office revenues climbed significantly from $26.3 billion to $32.6 billion between 2007 and 2011, amidst the global financial crisis and the ensuing economic downturn, is also impressive, underscoring the resilience of the movie business in economic hard times.

And this is less than half the picture, actually, as we can see as soon as we cast our net a little wider to consider all revenue sources across the ‘total film industry’, including pay-per-view tv services, cable and satellite channels, rapidly declining video/DVD rentals, fast-rising over-the-top (OTT) subscription services (Lovefilm, Netflix, etc.) and digital downloads (Apple, Amazon, etc.). As soon as we bring these areas into view, any sense of doom and gloom in tinsel town should dissipate.

Indeed, the movie business is doing even better than the box office numbers suggest, with total revenues rising sharply on a worldwide basis from $46.5 billion just before the turn-of-the-21st century to $83.5 billion in 2011. Figure 2 below shows the trend.

Figure 2:  Total Worldwide Film Industry Revenues, 1998 – 2011 (US$ Millions)

Total Film Revenues, 2011

Sources: Motion Picture Association (2011). Theatrical Market Statistics; PWC, 2012, Global Entertainment and Media Outlook, 2012 – 2016 (plus previous years; e.g. 2009; 2003).

Again, several things of note stand out from Figure 2. First, like the box office, revenues for the total film industry continued to rise from $80.3 billion in 2007 to $83.4 billion in 2011 despite the economic malaise affecting much of Europe and North America since the global financial crisis of 2007-8. Many areas of the media industry are very heavily dependent on the state of the macro economy but this seems less true of the movie business.

Second, while total revenues for the movie industry continue to grow, the number of films produced by the Hollywood majors per year continues its decade-long decline to the point where in 2010 and 2011, MPA members produced 141 films versus around 200 per year in the late-1990s and early-2000s. This is an important development and reflects the fact that the majors are trying to cut through the clutter of a crowded media economy by relying on a smaller number of spectacular blockbusters with massive budgets backed by equally massive promotional campaigns.

The average budget of the top 10 Hollywood blockbusters nearly doubled between 2000 and 2010, rising from $109.2 million to $197.2 million. The primary objective, of course, is to keep the three scarce resources of the media economy — time, money and attention — fixed on the MPA members’ own wares.

Table 1 below shows the following trends: a declining number of blockbusters produced by MPA members, a rising number of independently produced films over the past decade, and a greater number of films overall, with output holding relatively stable at about 550 to 600 films per year for the past half-decade.

Table 1: Number of Films Released in Theatres, MPA vs. Non-MPA Sources, 1998 – 2011

1998 2000 2002 2004 2006 2008 2009 2010 2011
Total # Films Released 509 478 475 489 594 634 555 569 610
MPAA Total 235 197 205 180 204 168 158 141 141
Non-MPAA 274 281 270 309 390 466 397 428 469

Source: MPA (2012). Theatrical Market Statistics.

There is, however, one other thing that stands out from Figure 2 above that puts a bit of a fly in the ointment of the story I am telling of consistently rising total revenues: namely, that while increased revenues from television and various video services have added immensely to the movie biz’s total revenues over the past thirteen or so years, such revenues appear to have peaked in 2004 ($54.9 billion) and have since fallen significantly to about $50.9 billion.

Why is this? I’m not exactly sure. The torrential growth in television seen during the 1990s and early-2000s, as countries the world over picked up the tv habit – notably the fast-growing economies of China, Brazil, Indonesia, India and Russia – may perhaps be slowing down. Against this view, however, the size of the total tv market worldwide has continued to grow very significantly, according to PriceWaterhouseCoopers’ Global Entertainment and Media Outlook, 2012 – 2016, rising from roughly $280 billion in 2004 to over $400 billion last year. I would love to hear why revenues in this area have fallen over the last several years.

Concluding Comments

The next time you hear about the movie industry (or any other media sector for that matter) falling on hard times because of digitization, the Internet, piracy, and so forth, think about these trends. And please repeat after me: the movie industry is not in crisis; for the most part, it is flourishing.

These are important observations because it is the same vested interests that want us to think that the sky is falling which use these mistaken impressions to:

  1. push for changes to copyright laws and a clamp down on Internet Service Providers in ways that wouldn’t otherwise have a hope in hell of succeeding;
  2. exert leverage over politicians and policy-makers, who have often accepted the bulk of such arguments while crafting the raft of new and reformed copyright legislation that has been installed around the world in the past few years. As a recent example shows, even the Republican Congressional staff’s think tank in the U.S., the Republican Study Committee, felt compelled to yank a policy discussion paper on copyright reform authored by one of its staff from its website just hours after releasing it, after the MPAA and RIAA reportedly “went ballistic“;
  3. play cities, states, provinces and countries around the world off of one another for subsidies and favourable labour conditions;
  4. and in labour bargaining with unions representing film and television workers, with the latter easily made to appear outlandish in their demands for good wages and working conditions in light of the steady drumbeat of public relations saying that the movie industry stands on the edge of the abyss.

Media and Internet Concentration in Canada, 1984 – 2011

As my last post explained, the media economy in Canada has grown immensely and become far more complex in the past twenty-five years with the rise of the Internet and digital media. In this post, I ask whether the media have become more or less concentrated amidst all these changes.

While opinions are rife on the issue, as McMaster University professor Philip Savage (2008) observes, the debate over media concentration in Canada “largely occurs in a vacuum, lacking evidence to ground arguments or potential policy creation either way” (p. 295).

The need for good evidence on the question has been obvious over the past year in the context of Bell Canada’s bid to buy Astral Media, the ninth largest media company in Canada. Indeed, the CRTC’s decision to kill the deal in late October turned in large part, though by no means entirely, on the evidence about media concentration.

The same question will be front-and-centre in Bell Astral Round Two. While nobody knows what version 2.0 of the deal looks like outside of the two companies’ inner sanctum, and the CRTC staff currently vetting it before it is opened for public interventions (probably in the new year), the issue of concentration will undoubtedly loom large in whatever discussions, and regulatory actions, do occur.

That said, make no mistake: studying media and internet concentration is not about Bell or Astral, or any specific transaction. In fact, the issue in the Bell Astral case is not whether Bell is too big but whether telecom, media and internet markets in Canada are already too concentrated as a whole. How do we know one way or another? This post helps to address these questions.

Competing Views on Media Ownership and Concentration

Grappling with these issues is not just about remedying the ‘missing evidence’ problem, but thinking clearly about how the issues are framed.

Many critics point to media concentration as steadily going from bad to worse, but offer little to no evidence to back up such claims. Perhaps the best known example is Ben Bagdikian, who claims that the number of media firms in the U.S. accounting for the majority of revenues plunged from 50 in 1984 to just five by the mid-2000s. Similar views also exist in Canada, where critics decry what they see as the inexorable trend towards greater media concentration and its debilitating effects on “democracy’s oxygen”, for instance, or vilify the media moguls behind such trends who have, in these critics’ words, created “Canada’s most dangerous media company”.

A second group of scholars set out to debunk the critics by quantitatively analyzing reams of media content, only to find the evidence about how changes in media ownership and market structure affect content to be mostly “mixed and inconclusive” (Soderlund et al., 2005). The problem with this conclusion, however, is that it proceeds as if media concentration’s ‘impact on content’ is the only concern, or as if preserving the existing status quo might not be a significant problem in its own right (Gitlin, 1978). Undeterred, this line of scholarship trundles on so that, half a decade later, similar studies by many of the same authors, Cross-Media Ownership and Democratic Practice in Canada: Content-Sharing and the Impact of New Media, reach pretty much the same conclusions (Soderlund, Brin, Miljan & Hildebrandt, 2011).

A third school of thought mocks concern with media concentration altogether. According to this school, how could anyone believe that the media are still concentrated when there are thousands of news sources, social networking sites galore, pro-am journalists, user-created content and a cacophony of blogs at our fingertips, 700 television channels licensed by the CRTC, ninety-four newspapers publishing daily and smartphones in every pocket? Ben Compaine (2005), a media economist at MIT, has a one-word retort for those who think that concentration still matters amidst this sea of plenty: internet!

Those in this camp also argue that focusing on concentration when traditional media face the perilous onslaught of global digital media giants such as Google, Amazon, Netflix, Facebook, and so on is akin to rearranging the deck chairs on the Titanic – foolhardy and doomed to fail (Thierer & Eskelen, 2008; Dornan, 2012). Journalistic accounts often share this view, routinely invoking, in mantra-like fashion, the idea that media are more competitive than ever. Like their academic counterparts, such accounts offer little to no evidence to support such claims, other than pointing to the same roster of foreign digital media goliaths, as if examples equal evidence. They do not.

While some might find it hard to fathom, there’s a fourth school of thought, and one that I largely subscribe to, that accepts that fundamental changes have occurred, but rejects claims that this renders concern with media consolidation obsolete. For all those who guffaw at charges of media concentration, it is easy to point, for example, to the fact that only about a third of the 94 daily newspapers said to exist are actually still publishing original content on a daily basis. Of the 700 television channels listed on the CRTC’s books, just over 200 actually filed a financial return last year. And half of those tv channels belong to just four companies — Bell (33), Shaw (46), Rogers (11) and QMI (12). Their share of the market, as we will see, is much higher yet. Keeping our eye on these facts also highlights how dominant incumbent players use pricing (usage-based billing) and bandwidth caps, among other tactics, to protect their legacy television businesses (i.e. CTV, Global, CityTV, TVA), while hobbling rivals (Netflix) and limiting people’s choice as a result.

This school also suggests that core elements of the networked digital media – search engines (Google), Internet access (ISPs), music and book retailing (Apple and Amazon), social media (Facebook) and access devices (Apple, Google, Nokia, Samsung, RIM) – may actually be more prone to concentration because digitization magnifies economies of scale and network effects in some areas, while reducing barriers in others. If this is correct, then we may be witnessing the rise of a two-tiered digital media system, with many small niche players revolving around a few enormous “integrator firms” at the centre (Noam, 2009; Benkler, 2006; Wu, 2010).

The more that central elements of the networked digital media are concentrated, the easier it is to turn these nodal points — Facebook, Google, ISPs, Twitter, and so forth — into proxies that serve other interests in, for example, the preservation of dominant market power in ‘legacy’ media sectors (e.g. television and film), the copyright wars, efforts to block pornography, and in law enforcement and national security matters. In other words, the more concentrated such nodal points are, the more potential digital media giants have to:

  • set the terms for the distribution of income to musicians, newspapers and books (Google, Apple, Amazon);
  • turn market power into moral authority by regulating what content can be distributed via their ‘walled gardens’ (Apple);
  • set the terms of ownership and use of user created content and how it is sold in syndicated markets as well as to advertisers (Google and Facebook) (van Couvering, 2011; Fuchs, 2011);
  • and set defacto corporate policy norms governing the collection, retention and disclosure of personal information to commercial and government third parties.

Whilst we must adjust our analysis to new realities, it is also true that long-standing concerns have not disappeared. To take just one case in point, consider the fact that during the 2011 election campaign, every single newspaper in Canada that editorially endorsed a candidate for Prime Minister, except the Toronto Star, touted Harper – roughly three times his standing in opinion polls at the time and the results of the prior election. When 95 percent of editorial endorsements for PM across the nation stump for one man – Harper – something is amiss.

Ultimately, talk about media concentration is really a proxy for bigger conversations about consumer choice, freedom of expression as well as democracy. While such discussions must adapt to new realities, the advent of digital media does not mean that such conversations should fall silent. Politics, values and heated debates are endemic to the topic, and this is how things should be (Baker, 2007; Noam, 2009; Peters, 1999).

Methodology

Discussions of media concentration will never turn on the numbers alone, and nor should they, but it is essential to be as clear as possible about the methods used to assess the issue. To begin, there is no naïve vantage point from which data about these issues can be innocently gathered and presented, as if evidence is just out there lying in a state of nature, somewhere, waiting to be plucked like apples from a tree.

Data, in other words, do not serve as a one-to-one map of the reality they claim to describe. Nonetheless, there are good and bad ways to build a body of evidence. An essential factor all down the line is the need for researchers to be open and reflexive about their methods and theoretical starting points.

A fuller discussion of the methodology that I use can be found here, here and here, but for now we can lay out the bare bones of the approach before turning to the analysis itself. I begin by selecting a dozen or so media sectors at the heart of the analysis: wired & wireless telecoms; cable, satellite & IPTV distributors; Internet access; broadcast tv; pay & subscription tv; radio; newspapers; magazines; search engines; social media sites; and online news services.

Data were collected for each of these sectors over a twenty-seven year period, 1984 – 2011, first at four-year intervals up until 2008 and annually since. For the DIYers among you, here’s a handy dandy list of sources.

Data on revenues and market share for each ownership group in each of these sectors were then assembled. I then group the sectors into three categories – (1) network infrastructure; (2) content; (3) online media – assess the concentration level in each category, and scaffold upward from there to examine the network media industries as a whole.

I typically drop wired and wireless telecoms from the whole of what I call the network media industries because the size of these sectors means that they tend to overshadow everything else.
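
For readers who like to see the mechanics, the following sketch illustrates the kind of roll-up just described: firm-level revenues in each sector are summed into the three categories and into the network media total, with wired and wireless telecoms dropped from the whole. The sector names, firms and figures are placeholders invented for illustration only; this is not the CMCR project’s actual data or code.

```python
# Illustrative sketch only: sectors, firms and revenues below are hypothetical.
# It shows the roll-up described above: firm revenues per sector -> category
# totals -> network media total (excluding wired & wireless telecoms).

revenues = {
    "wired & wireless telecoms": {"Firm A": 30000, "Firm B": 12000, "Others": 8000},
    "cable, satellite & IPTV":   {"Firm A": 3000, "Firm C": 2500, "Others": 1500},
    "broadcast tv":              {"Firm A": 1200, "Firm C": 1100, "Others": 900},
    "search engines":            {"Firm D": 800, "Others": 200},
}

categories = {
    "network infrastructure": ["wired & wireless telecoms", "cable, satellite & IPTV"],
    "content":                ["broadcast tv"],
    "online media":           ["search engines"],
}

def category_revenues(category):
    """Sum each ownership group's revenues across the sectors in a category."""
    totals = {}
    for sector in categories[category]:
        for firm, rev in revenues[sector].items():
            totals[firm] = totals.get(firm, 0) + rev
    return totals

# Network media total, excluding wired & wireless telecoms as noted above
network_media = {}
for sector, firms in revenues.items():
    if sector == "wired & wireless telecoms":
        continue
    for firm, rev in firms.items():
        network_media[firm] = network_media.get(firm, 0) + rev

print(category_revenues("network infrastructure"))
print(network_media)
```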

Lastly, I use two common tools — Concentration Ratios (CR) and the Herfindahl-Hirschman Index (HHI) — to depict levels of competition and concentration over time. The CR method adds up the market shares of the largest firms in a market and makes judgments based on widely accepted standards, with the top four firms (CR4) holding more than 50 percent of the market, or the top eight (CR8) more than 75 percent, considered indicators of high levels of concentration.

The HHI method squares and sums the market share of each firm with more than a one percent share of the market to arrive at a total. If there are 100 firms, each with a 1% market share, the market is highly competitive (an HHI approaching 100), while a monopoly prevails when one firm has 100% market share (an HHI of 10,000). The following thresholds are commonly used as guides:

HHI < 1,000: Un-concentrated

HHI 1,000 – 1,800: Moderately Concentrated

HHI > 1,800: Highly Concentrated
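
As a concrete illustration, here is a minimal sketch of both measures and the thresholds above. The market shares are hypothetical, invented purely to show the arithmetic; they are not drawn from the data analyzed in this post.

```python
# Minimal sketch of the CR and HHI measures described above.
# The shares below are hypothetical percentages (summing to 100), not real data.

def cr(shares, n=4):
    """Concentration ratio: sum of the n largest market shares (in %)."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared shares, counting only
    firms with more than a one percent share, as described above."""
    return sum(s ** 2 for s in shares if s > 1)

def hhi_band(score):
    """Classify an HHI score using the common thresholds listed above."""
    if score < 1000:
        return "Un-concentrated"
    if score < 1800:
        return "Moderately Concentrated"
    return "Highly Concentrated"

# A hypothetical market: four large firms plus a competitive fringe
shares = [30, 25, 15, 10, 8, 5, 4, 2, 1]

print(cr(shares))              # 80 -> above the 50% CR4 guideline
print(hhi(shares))             # 1959
print(hhi_band(hhi(shares)))   # Highly Concentrated
print(hhi([100]))              # 10000 -> a pure monopoly
```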

The Historical Record and Renewed Interest in Media Concentration in the 21st Century

There has been keen interest, even if only episodic, in media ownership and concentration in Canada and around the world since the late-19th and early-20th centuries.

In 1910, for example, the Board of Railway Commissioners (BRC) broke up the three-way alliance between the two biggest telegraph companies — Canadian Pacific Telegraph Co. and the Great Northwestern Telegraph Co. (the latter an arm of the New York-based goliath, Western Union) – and the American-based Associated Press news wire service. Why?

In the face of much corporate bluster, the BRC did this because the two dominant telegraph companies were giving away the AP news service to the top newspaper in cities across Canada for free in order to bolster their stranglehold on the lucrative telegraph business. Allowing this to continue, stated the BRC matter-of-factly, would “put out of business every news-gathering agency that dared to enter the field of competition with them” (1910, p. 275).

Thus, in a conscious bid to use telecoms regulation to foster competition amongst newspapers, and to free up the flow of news on the wires, the BRC effectively dismantled the alliance. For upstarts such as Winnipeg-based Western Associated Press – which had initiated the case – it was a significant victory (Babe, 1990).

Media concentration issues arose episodically thereafter and came to a head again in the 1970s and beginning of the 1980s, when three inquiries were held: (1) the Special Senate Committee on Mass Media, The Uncertain Mirror (2 vols.)(Canada, 1970); (2) the Royal Commission on Corporate Concentration (1978); and (3) the Royal Commission on Newspapers (Canada, 1981).

Things lay dormant for more than two decades thereafter, but sprang to life again in the late-1990s and turn-of-the-21st century after a huge wave of consolidation thrust concerns about media concentration back into the spotlight. Three inquiries were held between 2003 and 2007 as a result: (1) the Standing Committee on Canadian Heritage, Our Cultural Sovereignty (2003); (2) the Standing Senate Committee on Transport and Communications, Final Report on the Canadian News Media (2006);[i] as well as (3) the Canadian Radio-television and Telecommunications Commission’s Diversity of Voices inquiry in 2008.

Structural Transformation: Two (three?) Waves of Consolidation and the Rise of TMI Conglomerates

As I noted in my last post, for all sectors of the media economy in Canada, revenues grew immensely from $37.5 billion in 1984 to just under $70 billion last year (or from $12.1 billion to just under $34 billion when we exclude wiredline and wireless telecoms) (in inflation-adjusted “real dollars”). Between 1984 and 1996, new players meant more diversity in all sectors, except for newspapers as well as cable and satellite video distribution, where concentration climbed significantly.

Conventional as well as pay and subscription television channels were already expanding during this time. In terms of ownership, incumbents and a few newcomers – e.g. Allarcom and Netstar – cultivated the field, with their share of the market growing steadily in tandem with the number of services available (underlying data for these claims can be found here).

Concentration levels remained very high in wired line telecoms in the 1980s and early 1990s, while wireless was developed by two companies, Bell and Rogers. As had been the case in many countries, telecoms competition moved slowly from the ends of the network into services and then network infrastructure, with real competition emerging in the late-1990s before the trend was reversed and concentration levels again began to climb.

In the 1980s and early-1990s, consolidation took place mostly among players in single sectors. Conrad Black’s take-over of the Southam newspaper chain in 1996 symbolized the times. In broadcast television, amalgamation amongst local ownership groups created the large national companies that came to single-handedly own the leading commercial television networks – CTV, Global, TVA, CHUM, TQS – by the end of the 1990s.

While weighty in their own right, these amalgamations did not have a big impact across the media as a whole. There was still significant diversity within sectors and across the TMI sectors. The CBC remained prominent, but public television was being eclipsed by commercial television as the CBC’s share of all resources in the television ‘system’ slid from 46 percent in 1984 to half that amount by 2000 to just over twenty percent today (see the motion chart on CMCR website illustrating this point).

While gradual change defined the 1980s and early-1990s, things shifted dramatically by the mid-1990s and into the 21st century as two (and maybe three) waves of consolidation swept across the TMI industries. A few highlights help to illustrate the trend:

Wave 1 – 1994 to 2000: Rogers’ acquisition of Maclean-Hunter (1994). Peaks from 1998 to 2001: (1) BCE acquires CTV and the Globe & Mail ($2.3b); (2) Quebecor takes over Videotron, TVA and the Sun newspaper chain ($7.4b) (1997-2000); (3) Canwest buys Global TV ($800m) and the Hollinger newspapers, including the National Post ($3.2b).

Wave 2 – 2006-2007. Bell Globe Media is re-branded CTVglobemedia as BCE exits the media business. CTVglobemedia acquires CHUM assets (MuchMusic, the CityTV channels and A-Channel). The CRTC requires CTVglobemedia to sell the CityTV stations, which are acquired by Rogers (2007). Astral Media buys Standard Broadcasting. Quebecor acquires Osprey Media, a mid-size newspaper chain (2006). Canwest, with Goldman Sachs, buys Alliance Atlantis (2007) (Showcase, National Geographic, HGTV, BBC Canada, etc.) – then the biggest film distributor in Canada.

Wave 3 – 2010 – ? Canwest goes bankrupt. Its newspapers are acquired by Post Media, its TV assets by Shaw. BCE makes a comeback, re-buying CTV (2011) and bidding for Astral Media in 2012, but fails to gain CRTC approval.

That the massive influx of capital investment drove consolidation across the telecom, media and Internet industries during these periods is illustrated in Figure 1 below.

Figure 1: Mergers and Acquisitions in Media and Telecoms, 1984 – 2011 (Mill$)

Sources: Thomson Financial, 2009; FPInformart, 2010; Bloomberg Professional; CRTC, Communication Monitoring Report.

Consolidation has yielded a fundamentally new type of media company at the centre of the network media ecology: the integrated media conglomerate. Although extremely popular in the late-1990s in many countries around the world, many media conglomerates have since collapsed or been broken up (AOL Time Warner, AT&T, Vivendi, CBS-Viacom, and parts of NewsCorp, etc.) (see, for example, Jin, 2011; Thierer & Eskelen, 2008; Waterman & Choi, 2010). That trend, however, has not taken hold in Canada.

Indeed, in Canada, sprawling media conglomerates are still all the rage. Four such giants and a half-dozen other large but more specialized companies a fraction of their size make up the core ‘big 10’ companies in the network media economy: Bell (CTV), Shaw (Global), Rogers (CityTV), QMI (TVA), CBC, Post Media, Cogeco, Telus, Astral, and Eastlink. A detailed chart of each by ownership, revenues, and sectors operated in is available here and will be addressed further in the next post.

Looking at media concentration from the vantage point of the ‘big ten’, the media have become more concentrated than ever. Their share of all revenues (excluding telecoms services) rose sharply in the 1990s and hovered steadily in the mid- to low-60 percent range between 2000 and 2008. It subsequently rose significantly to just under 68 percent in 2010 (after Shaw’s acquisition of Global) and again to just under 70 percent in 2011 (when Bell re-acquired CTV) — an all-time high and a substantial rise from 52% in 1992. Levels of media concentration in Canada are more than twice as high as those in the U.S., based on Noam’s analysis in Media Ownership and Concentration in America (2009).

Breaking the picture down into the following three categories and applying the CR and HHI tools provides an even better view of long-term trends:

  • ‘network infrastructure’ (wired and wireless telecom services, ISPs, cable, satellite and other OVDs);
  • ‘content’ (newspapers, tv, magazines, radio);
  • ‘online media’ (search, social, operating systems).

At the end of the post, I combine these again to complete the analysis of the network media industries as whole in a slightly different form.

The Network Infrastructure Industries

All sectors of the network infrastructure industries are highly concentrated and pretty much always have been, although Internet Access is a partial exception.

Table: CR and HHI Scores for the Network Infrastructure Industries, 1984 – 2011

CR & HHI Network Industries, 2011

CR4 and HHI scores for wired telecoms fell during the late-1990s as greater competition in wired line telecom services took hold. They reached their lowest level ever between 2000 and 2004, before aftershocks from the collapse of the speculative dot-com bubble took out many of the new rivals (CRTC, 2002, p. 21). Competition grew more and more feeble for most of the rest of the decade before drifting modestly upwards since 2008. Concentration levels, however, still remain high by late-1990s, turn-of-the-century standards, as well as by the CR and HHI measures.

Much the same can be said with respect to wireless services: they have consistently been highly concentrated, and remain so to this day, despite the advent of four newcomers in just the past two years: Mobilicity, Wind Mobile, Public and Quebecor.

Two competitors – Clearnet and Microcell – emerged in the late-1990s and managed to garner 12 percent of the market between them, but were then taken over by Telus and Rogers in 2000 and 2004, respectively. Whether the recent round of newcomers will fare any better it is still too early to tell, but with only 2.2 percent of the market as of 2011 they are a long way from the high tide of competition set a decade ago.

As the telecoms and Internet boom gathered steam in the latter half of the 1990s, new players emerged to become significant competitors in Internet access, with four companies taking roughly a third of the ISP market by 1996: AOL (12.1%), Istar (7.2%), Hook-Up (7.2%) and Internet Direct (6.2%).

The early ‘competitive ISP era’, however, has yielded to more concentration since. Whereas the leading four ISPs accounted for a third of all revenues in 1996, by 2000 the big four’s (Bell, Shaw, Rogers & Quebecor) share had grown to 54 percent. Things stayed relatively steady at that level for most of the decade before inching upwards in the past few years to reach 57.1 percent in 2011.

HHI scores for internet access also moved upward between 1996 and 2000, but are still low relative to most other sectors. However, this is probably more an indicator of the limits of the HHI method in this particular case, since 93% of high-speed Internet subscribers rely on one or another of the incumbent cable or telecom companies’ ISPs to access the Internet, according to figures in the CRTC’s Communication Monitoring Report (p. 148).

ISP provision in Canada is effectively a duopoly of the incumbent telecom and cable companies, with the leftover 6-7% of the market scattered among the 400 or so independent ISPs that still exist. This is a slight increase from last year, but it does not mark a return to competitive internet access. Canada has relied on a framework of limited competition between incumbent telecom and cable companies for wiredline, wireless, internet access and video distribution markets, and they dominate all of these markets, with a smattering of smaller rivals in each.

Cable, satellite and IPTV distribution is one of the only segments assessed where concentration rose steadily, from low levels in the 1980s (850) to the top of the scale in 1996 (2300), before drifting downwards by the turn-of-the-century to the low 2000s, where it remained for most of the decade. It has dipped below that, into the 1900-range, for the last five years, but this is still at the very high end of the scale.

As I noted in the last post, the IPTV services of the incumbent telcos – Bell, MTS, Telus and SaskTel – are becoming a more significant factor in the distribution of television, after a slow and staggered start. By 2011, IPTV services accounted for 7.6 percent of the TV distribution market, based on my numbers, or 3.8 percent using CRTC data (see page 96).

While I have yet to get to the bottom of why this discrepancy exists, what can be said is that, on the basis of my figures, the growth of IPTV services has made small incursions into the incumbent cable and satellite service providers’ turf (i.e. Shaw, Rogers, Quebecor, Cogeco and Eastlink). However, this has done little more than nudge the CR and HHI scores, as the table above shows.

Over the last twenty-seven years, cable tv has become ubiquitous and new tv distribution infrastructures have been added to the fold – DTH in the 1990s, and now, slowly, IPTV. New players have emerged, but never have so few owned so much. New technologies have generally added to this and have not fundamentally disrupted the broad trajectory of development when it comes to tv distribution channels: more channels, and even some new players, but with more of the whole in the hands of the old. The wired society in Canada is probably the poorer for this.

The Content Industries

Until the mid-1990s, all aspects of the tv industry (i.e. conventional broadcast tv as well as pay and specialty channels) were moderately concentrated by HHI standards and significantly so by CR measures. Competition and diversity made some modest inroads from 1998 to 2004, but the trend abruptly reversed course and levels have climbed steadily and substantially since, and sharply in the last two years. Figure 2, below, shows the trend in terms of CR scores; Figure 3, in terms of the HHI.

Figure 2 CR Scores for the Content Industries, 1984 – 2011

 Figure 3 HHI Scores for the Content Industries, 1984 – 2011

The largest four commercial television providers controlled about 81% of all television revenues in 2011, up from 75% a year earlier. Levels of tv concentration were pushed to new extremes by Shaw’s take-over of Canwest’s television assets in 2010 and Bell’s buy-back of CTV last year. The big four’s share of all tv revenue before these transactions, in 2008, was 70%. A leap of more than ten percentage points in concentration in two years is a lot.

If the CRTC had approved Bell’s acquisition of Astral Media – the fifth largest television company in Canada, ahead of Quebecor – the all-time high levels of concentration set in 2011 would have been surpassed by an even higher 89.5%. In contrast, the big four accounted for 61% of the tv biz in 2004, a time before major players such as Alliance Atlantis and CHUM were bought out by the now defunct Canwest and Bell/CTV 1 (circa 2000-2006), respectively.

The CR and HHI measures for tv were at all-time lows in the 1990s. This was a time when newcomers emerged (Netstar, Allarcom), yet before the multiple ownership groups that had stood behind CTV and Global for decades combined into single groups. The period was also significantly more diverse because the CBC no longer stood as the central pillar of tv and radio, while pay and specialty television channels were finally making their mark. Today, the latter are the crown jewels of the tv business.

Today the largest tv providers after Bell and Shaw are the CBC, Rogers, Astral, and QMI, in that order. By 2011, these six entities accounted for ninety-five percent of the entire television industry. Similar patterns are replicated in each of the sub-components of the ‘total television’ measure (conventional television, pay and specialty channels), as the chart above illustrates.

In contrast, in 2004, the six largest players accounted for a little over three-quarters of all revenues. The run of HHI scores reinforces the view that the television industry is highly concentrated and has become markedly more so in just the past two years.

As in the cable industries, there has never been a moment when diversity and competition flourished in the newspaper sector. Consolidation rose steadily from 1984, when the top four groups accounted for two-thirds of all revenues, to 1996, when they accounted for nearly three-quarters – a level that has stayed fairly steady since, despite periodic shuffling amongst the main players at the top. Levels declined slightly in 2011 from 2010, from 77% to 75%, likely on account of Postmedia’s decision to sell some of its newspapers (e.g. the Victoria Times Colonist) and to cut publishing schedules at others.

Of all media sectors, magazines are least concentrated, with concentration levels falling by one-half on the basis of CR scores and two-thirds for the HHI over time. I have not been able to update the data for this sector for 2011, but there is little to suggest a need to change this view.

Radio is also amongst the most diverse media sectors according to HHI scores, but slightly concentrated by the CR4 measure. In fact, in 2011, it became more so, likely because of a shuffling of several radio stations between Shaw/Corus and Cogeco. Bell’s take-over bid for Astral – the largest radio broadcaster in Canada with a 17.5% market share – would also have pushed radio further in the direction of concentration had it been approved last month by the CRTC. Had that scenario come to pass, levels of concentration would still have remained well beneath the CRTC’s self-defined thresholds, but high by the CR measure and moderately high by the HHI.

Online Media

So far, there’s little reason to believe that trends are any different in the online realm, as measures of the ISP segment showed. But what about other core elements of the increasingly Internet-centric media universe, such as search engines, social media, online news sources, browsers, and smartphone operating systems?

The trends are clear. Concentration in the search engine market continued to grow between 2010 and 2011, with the CR4 score rising from 94% to 97.6%. Google’s share of the market, however, seems to have plateaued, at just over 81 percent of this domain. Microsoft (8.6%), Yahoo! (4.2%), and Ask.com (3.7%) trail far behind, yielding a CR4 of 97.6% and an off-the-charts HHI of 6,683.

Figure 3: C4 Scores for the Search Engines, 2004 – 2011

Source: Experian Hitwise Canada. “Main Data Centre: Top 20 Sites & Engines.” Last accessed October 11, 2012.  http://www.hitwise.com/ca/datacenter/main/dashboard-10557.htm

Social media sites display a similar, if not quite as pronounced, trend, with Facebook accounting for 63.2% of time spent on such sites in 2010, trailed by Google’s YouTube (20.4%), Microsoft (1.2%), Twitter (0.7%), and News Corp.’s MySpace (0.6%) (Experian Hitwise Canada, 2010). Again, the CR4 score of 86% and HHI score of 4426 reveal that social networking sites are highly concentrated.

Similar patterns also hold for other layers of the media ecology. The top four web browsers in Canada – Microsoft’s Explorer (52.8%), Google’s Chrome (17.7%), Firefox (17.1%) and Apple’s Safari (3%) – have a market share of over 90 percent (Comscore, 2011).  There is no data available for Canada with respect to smartphone operating systems, but US data shows that the top four players in 2010 accounted for 93 percent of all revenues: Google’s Android OS (29%), Apple’s iOS (27%), RIM (27%) and Microsoft’s Windows 7 (10%) (Nielsen, 2011).

However, not all areas of the internet and digital media environment, of course, display such patterns. The picture with respect to online news services, for instance, is significantly different. Between 2003 and 2008, the amount of time spent on online news sites nearly doubled from 20 to 38 percent, with most of the leading 15 online news sites simply being the extensions of well-established media companies: cbc.ca, Quebecor, CTV, Globe & Mail, Radio Canada, Toronto Star, Post Media, Power Corp. The other major sources included CNN, BBC, Reuters, MSN, Google and Yahoo! (Comscore, 2009; Zamaria & Fletcher, 2008, p. 176).

While that trend meant that attention was consolidating around a few online news sites, and those of traditional journalistic outlets in particular, it nonetheless seems clear that Canadians have diversified their news sources relative to the traditional news environment (newspapers, tv, radio, magazines). On either the CR or HHI measure, online news falls under the concentration thresholds and is diverse relative to any of the other sectors, except magazines.

However, the fact that concentration levels edged upwards between 2004 and 2007, after the rapid “pooling of attention” that took place between 2003 and 2007 (see immediately above), suggests that a certain plateau might have been reached in terms of the range of sources people are using. Nonetheless, online news sources are not concentrated on the basis of the measures used here. The following table shows the results.

Table: Online News Sources, 2004 – 2011

News website 2004 (N=1482) 2007 (N=1306) 2011 (N=1651)
CBC 10.6 18.3 13.8
Google 5.3 9.2 10.4
MSN / Sympatico 18.2 11 14.7
Yahoo 9.3 7.4 6.5
CNN 9.3 9.4 6.1
CTV 6.2 2.9
Canoe 2.4 7.6 2.9
Cyberpresse 3.5 3.3 3.9
Globe and Mail 4.1 5.9 3.6
BBC 4.9 2.8
Toronto Star 2.6 2.4 1.5
Global 2
Other 32.6 14.4 31.1
CR4 43.4 45.9 45.4
HHI 97.9 100 100.2

Source: Table calculated by Fred Fletcher, York University, from the Canadian Internet Project Data sets (Charles Zamaria, Director).  Reports on the 2004 and 2007 surveys are available at http://www.ciponline.ca.

The Network Media Industries as a Whole (excluding wired and wireless telecoms)

Combining all the elements together yields a birds-eye view of long-term trends for the network media as a whole. As Figure 4 below shows, the HHI score across all of the network media industries is not high by the criteria set out earlier, but the long-term upward trend is clear and significant.

Figure 4: HHI Scores for the Network Media Industries, 1984 – 2010

 

While the HHI for the network media fell in the 1980s and early-1990s, by 1996 trends had reversed and levels were higher than they were a dozen years earlier. Thereafter, the number rose steadily to close to 600 in 2000, where it hovered for several years before falling again in 2008. Since then, however, the HHI score has shot upwards, rising from 510 in 2008 to 623 after Shaw acquired Global and then to 739 once Bell re-acquired CTV after having sold down its majority stake a few years earlier.

The effect of the Bell Astral deal would have been significant in terms of the network media as a whole, raising the HHI score to over 800 – an all time high. This is still low by HHI standards, but we must bear in mind that we are talking about concentration across the entire sweep of the network media industries, not just a random assortment of a few sectors.

The CR4 standard, as shown in Figure 5 below, reveals the trend even more starkly, with the big four media conglomerates – Bell, Shaw, Rogers & QMI – accounting for more than half of all revenues in 2011, a significant rise in a vastly larger media universe from just under forty percent held by the big four twenty-seven years earlier in what was a Lilliputian pond by comparison. While still only moderately concentrated by the CR4 standard, this is for all media combined.

In each and every sector of the media in which the big four operate, they dominate, as the earlier review of CR and HHI scores illustrated. Moreover, the trend in both scores is up, significantly so in the past three years, from a CR4 of around 40% to its current level of just over 50%. If this really were a golden digital media age, as some like to contend, that number should be moving firmly in the opposite direction.

Figure 5: CR 4 Score for the Network Media Industries, 1984 – 2010

Concluding Thoughts 

Several things stand out from this exercise. First, we are far from a time when studies of media and internet concentration are no longer needed. Indeed, theoretically-informed and empirically-driven research is badly needed because there is a dearth of quality data available and because, one after another, the press of events and specific transactions – Bell Astral in 2012, but also Bell’s re-acquisition of CTV the year before and Shaw’s acquisition of Global in 2010 – demands that we have a good body of long-term, comprehensive and systematic evidence ready-to-hand.

This kind of data is still very hard to come by, and data collection for 2011 reconfirmed that at every step of the way. The CRTC still needs a dramatic overhaul of how it releases information and of its website, as David Ellis has recently argued so eloquently. The underlying data sets in seminal publications like the Communications Monitoring Report, Aggregate Annual Returns, and Financial Summaries need to be made available in a downloadable, open format that allows people and researchers to use them as they see best. The regulated companies themselves must also be made to be more forthcoming with data relevant to the issues, not less, as they so strongly desire.

The trajectory of events in Canada is somewhat similar to patterns in the United States. Concentration levels declined in the 1980s, then rose sharply in the late-1990s before peaking around 2000. However, whereas a process of deconsolidation appears to have set in thereafter in the U.S. – with the obvious exception of Comcast’s blockbuster acquisition of NBC-Universal last year – concentration levels in Canada have climbed, and steeply so, in the past three or so years.

Current media concentration levels in Canada are roughly two-and-a-half times those in the U.S. and high by global standards (Noam, 2009). Moreover, large media conglomerates straddle the terrain in Canada in a manner that is far greater than in any of the other thirty countries studied by the IMCR project, including the U.S., Germany, Japan, Australia, the UK, and so on, where media conglomerates are no longer all the rage as they once were a decade ago.

The assets from the bankrupt Canwest have been shuffled in recent years, and some significant new entities have emerged (e.g. Channel Zero, Post Media, Remstar, Teksavvy, Netflix, The Mark, Tyee, Rabble.ca, Huffington Post). The overall consequence is that we have a set of bigger and structurally more complicated and diverse media industries, but these industries have generally become far more concentrated, not less.

There is a great deal more that can and will be said about what all this means, but in my eyes it means that concentration is no less relevant in the “digital media age” of the 21st century than it was during the industrial media era of centuries past.

The Growth of the Network Media Economy in Canada, 1984 – 2011

Has the media economy in Canada become bigger or smaller over time? Which sectors are growing, which are stagnating and which are in decline? These are the questions addressed by this post.

To answer these questions, I will examine the following key sectors of the network media economy: wired line & wireless telecoms; broadcast TV; subscription and pay TV; cable, satellite & IPTV distribution; newspapers; magazines; radio; music; Internet access; and internet advertising. I will also home in on rising new segments (IPTV) and others that appear to be in long-term decline (newspapers), and examine whether the media economy in Canada is big or small relative to global standards.

The post kicks off a three-part series that I’ll unfold over the next few weeks. As I did last year, the next post will examine telecom, media and internet (TMI) concentration, while the third will look at who owns the leading TMI companies in Canada. The goal is to offer an empirically and theoretically grounded, and historically informed, portrait of the development of and current trends in the network media economy from 1984 to 2011.

Canada’s Network Media Economy in a Global Context

While often cast as a dwarf amongst giants, the network media economy in Canada is in fact the ninth largest in the world, with revenues of just over $35 billion in 2011 (excluding wired and wireless telecoms). The media economy in Canada has also grown fast relative to other media economies. The twelve largest national media economies worldwide and their development over time are depicted in Table 1 below.

Table 1: Canada’s Ranking Amongst 12 Biggest Network Media, Entertainment and Internet Markets by Country, 2000 – 2011 (millions USD) [i]

 

The media economy in Canada is obviously small relative to the U.S., at one-tenth the size, but it is amongst the twelve biggest media economies in the world, as the above table shows, coming right after Brazil and just before Australia, South Korea and Spain. The media economy in Canada, like those in Germany, the UK, and Australia, largely stagnated for two years on the heels of the Anglo-European financial crisis (2007ff), but for the most part things have turned around since 2010. In contrast, media economies in the U.S., Japan, Italy and Spain actually shrank during this time before once again picking up in 2010, except in Japan and Spain. Overall, the network media economy in Canada has fared well during the economic downturn years.

In sharp contrast to much of Europe and North America, the media economies of China, Brazil and South Korea continued to grow at a fast pace. Indeed, the media economies in these countries and a few others such as Turkey and Russia have been going through something of a ‘golden media age’, with most media, from internet access, to the press, television, film and so on undergoing an unprecedented and extended period of fast-paced development (OECD, 2010).

The Network Media Economy in Canada: Growth, Stagnation or Decline?

Turning our attention solely to Canada, the figure below shows that the network media economy has grown enormously over the past few decades, from $19.4 billion in 1984 to nearly $71 billion in 2011 (current $). In inflation-adjusted dollars, the network media economy grew from $37.5 billion in 1984 to just under $70 billion last year (2010$). The figure below charts the trends (you can access the underlying data sets by clicking on the Media Industry Data tab at the Canadian Media Concentration Research Project).

Figure 1: The Growth of the Network Media Economy in Canada, 1984 – 2011 (Mill$ unadjusted for inflation)

Sources: see the CMCR Project’s methodology primer.

The vast expansion of the media economy has been driven by the addition of new media – wireless, internet access, pay and specialty tv services, internet advertising. The most significant source of growth is from the network connectivity elements (e.g. wireless, ISPs, IPTV, cable and satellite), especially after the mid-1990s.

The Network Connectivity Segments

The connectivity segments – the pipes, bandwidth and spectrum used to connect people to one another and to devices, content, the internet, and so forth — grew from $13.9 billion to $51.5 billion between 1984 and 2011. In real dollar terms, revenue grew from $26.8 billion to $50.5 billion. The following table shows the trends.

Table 2: Revenues for the Network Connectivity Industries, 1984 – 2011 (mill$)

Accounting for just under three-quarters of revenues across the media economy as a whole, the network connectivity sectors are the real fulcrum of the media economy in Canada, as is the case generally in most of the world. This is why Bell, Rogers, Shaw, Quebecor, Telus, SaskTel, MTS Allstream, Eastlink, Cogeco, etc. are so central to the media economy, to say nothing of the holdings that the biggest among them have in the media content sectors of the network media ecology.

While some might think that the over-sized weight of these sectors is of recent vintage, this is not true. In fact, the connectivity sectors’ share of the network media economy in 2011 was not even two percentage points more than twenty-seven years ago: 72.8 percent versus 71.2 percent, albeit within the context of a vastly larger media economy.

Why? One reason is TV, which is still very much at the centre of the network media universe (see below).

Not all network connectivity segments have grown and this is especially true of plain old wiredline telephone services. Wiredline telecom revenues peaked in 2000 at $21.2 billion and have fallen steadily ever since to reach $16.4 billion in 2011. The decline, as both figure 1 and the data in Table 2 above show, has been steep and unrelenting.

As plain old telephone services (POTS) have gone into decline, however, some pretty awesome new stuff (PANS) has come along to more than pick up the slack. The best example is wireless cell phone service. Wireless revenues were $19.3 billion in 2011 – three-and-a-half times the revenues at the beginning of the decade ($5.4 billion), and up significantly from $18 billion in 2010 and $16.2 billion in 2008. Unlike a few other areas (see below), wireless revenues did not suffer from the economic downturn either after the collapse of the dot-com bubble in 2000 or in the face of the Anglo-European financial crisis (2007ff).

Internet access displays similar patterns, though not for as long or to the same extent. Internet access revenues last year were $7.2 billion, up substantially from $6.2 billion in 2008 and quadruple what they were at the turn of the 21st century ($1.8 billion).

The most notable development over the past year is the growth of Internet Protocol TV (IPTV) services, which are essentially the incumbent telcos’ managed internet-based tv services: e.g. Telus, Bell, MTS Allstream, SaskTel, and Bell Aliant.

IPTV services are often seen as important because the entry of the telcos into tv distribution promises more competition for incumbent cable companies and because IPTV is often associated with efforts to bring next generation, fiber-based internet networks closer to subscribers, either to their doorstep or nearby neighbourhood nodes. If the distribution of television is essential to the take-up of next generation networks, as I believe it is, then IPTV will be part of the demand drivers for these networks.

According to the CRTC, IPTV revenues were $322.3 million in 2011, up greatly from $207.8 million a year earlier and triple the amount of 2008.  The CRTC also states that there were 657,300 IPTV subscribers in 2011 versus 416,900 in 2010 and 225,000 in 2008. By any standard, this would appear to be impressive growth.

These numbers, however, still seem low.  For example, published data from Telus, MTS Allstream, SaskTel, and Bell Aliant show that they have substantially more subscribers than the CRTC identifies (775,000 vs 657,300), and this is without including Bell. Add another estimated 128,000 subscribers for Bell’s Montreal and Toronto-centric IPTV service and the number of subscribers rises to approximately 903,000. Table 3 below shows the trends in terms of subscribers.

Table 3: The Growth of IPTV Subscribers in Canada, 2004 – 2011[ii]

2004 2006 2008 2010 2011
Bell Fibe TV (1) 83,000 127,644
Bell Aliant (2) 49,000 77,000
Telus (3) 78,000 314,000 509,000
MTS Allstream (4) 32,578 66,093 84,544 89,967 95,476
SaskTel (5) 25,800 51,277 70,463 85,537 93,960
Total IPTV Connections 58,378 117,370 233,007 621,504 903,080
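
To make the arithmetic behind the 2011 column explicit, the short sketch below simply adds up the carrier-reported subscriber counts, first without and then with the estimated figure for Bell’s Fibe TV service noted above. The figures are taken from Table 3; the variable names are mine.

```python
# Adding up the 2011 IPTV subscriber counts from Table 3 above.
# Carrier-reported figures, plus the estimated count for Bell Fibe TV.

reported_2011 = {
    "Bell Aliant": 77_000,
    "Telus": 509_000,
    "MTS Allstream": 95_476,
    "SaskTel": 93_960,
}
bell_fibe_estimate = 127_644  # Bell does not disclose; estimate used above

print(sum(reported_2011.values()))                       # 775,436 -- vs the CRTC's 657,300
print(sum(reported_2011.values()) + bell_fibe_estimate)  # 903,080 -- the Table 3 total
```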

I explain some reasons for this large discrepancy in the endnote to Table 3 and will write another post to examine the issues more thoroughly. For now, however, I want to note that, not surprisingly, given that my estimate for subscribers is much higher than the CRTC’s, my estimate for IPTV revenues is also much higher than the figure the Commission states. I estimate that IPTV revenues in 2011 were $650.6 million — more than four times the amount in 2008 ($142.7 million) — and up greatly from $423 million the previous year. Table 4 below illustrates the trends.

Table 4: The Growth of IPTV Revenues in Canada, 2004 – 2011 (mill$)[iii]

2004 2006 2008 2010 2011
Bell Fibe TV (1) 60.2 91.0
Bell Aliant (2) 33.6 54.9
Telus (3) 50.1 215.3 364.8
MTS Allstream (4) 10.8 32.2 50.6 59.0 71.5
SaskTel (5) 8.6 25.0 42.0 55.1 70.3
Total IPTV Revenues 19.4 57.2 142.7 423.2 650.6

The growth of the IPTV services is significant for many reasons. First, it suggests that the telcos are finally making the investments needed to bring fiber networks closer to their subscribers, at least on a large enough scale that their efforts can be measured, despite being hemmed in by opaque reporting measures in some cases (Bell Aliant, Telus) and a complete lack of disclosure in others (Bell).

Second, the addition of IPTV as a new television distribution platform expands the size of the “BDU sector” (cable, satellite and IPTV), while bringing the telcos deeper into the cable companies’ dominion. By 2011, IPTV services accounted for 7.6 percent of the TV distribution market, based on my numbers, or 3.8 percent using CRTC data. I’ll address whether or not this has significantly increased competition and lessened concentration in the next post.

While IPTV services finally appear to be taking off, we must remember several things. First, it has been the small prairie telcos, followed by Telus, that have taken the lead in deploying IPTV. For SaskTel, Telus and MTS Allstream, IPTV revenues now make up a significant 11.9 percent, 8.5 percent and 6.6 percent, respectively, of their revenues from fixed network access services (wiredline + ISP + cable).

Bell lags far behind, with only 1.4 percent of its revenues (including Bell Aliant’s) coming from IPTV services in 2011. Indeed, Bell only launched IPTV via its affiliate Bell Aliant in 2009, before targeting high-end districts of Montreal and Toronto the next year, half-a-decade after MTS Allstream and SaskTel began doing so in the prairies.

In other words, innovation and investment are coming from small telcos on the margins and from Telus, not Bell. This replays a long-standing pattern in telecoms, whereby new services start out as luxuries for the rich and well-to-do before a mixture of public, political and competitive pressures turns them into affordable and available necessities for the masses. From the telegraph to fiber-based next generation Internet, the tendencies, conflicts and lessons have remained much the same.

Generally speaking, IPTV remains under-developed as a critical part of the network infrastructure in Canada, accounting for only 2 percent of the $32.2 billion in fixed network access revenues (see Table 2).  OECD data confirm the point, with Canada ranked 20 out of 29 countries in terms of fiber-based connections to the premises as a proportion of all broadband connections available.

In Canada, just over one percent of broadband connections use fiber, while the OECD average is 10 percent (similar to levels at SaskTel and Telus). In many ways, the poor performance of Bell over the past half-decade has dragged Canada down in the global league tables as a whole. In countries at the high end of the scale (Sweden, Slovak Rep., Korea, Japan), thirty to sixty-plus percent of all broadband connections are fiber-based. The following figure illustrates the point.

Source: OECD (2011a). Broadband Portal. www.oecd.org/…/0,3746,en_2649_34225_38690102_1_1_1_1,00.html.

The Network Content Industries

In the remainder of this post I will turn my attention to the content industries (broadcast tv, pay and specialty tv, radio, newspapers, magazines, music and internet advertising). For the most part, they too have grown substantially, although the picture has become more mixed than in the network connectivity sectors in the past few years.

In 1984, total revenue for the content industries was $5.6 billion; it was $19 billion in 2011. Growth appears to have been steady throughout this period, with no discernible major uptick or downturn at any given point in time. Table 5, below, depicts the trends.

Table 5: Revenues for the Content Industries, 1984 – 2011 (mill$)

Despite much hand-wringing to the contrary, television remains at the very centre of the increasingly internet-centric media environment. Indeed, this is true of all three of the main components of the television industries: conventional broadcast tv, specialty and pay tv services as well as the cable, satellite and IPTV services that underpin TV distribution for the vast majority of Canadians.

Many have argued that television is dying as audiences shrink and advertising revenues are diverted to the internet. Indeed, the dreaded "TV tax" (local programming improvement fund, or LPIF) was put into place by the CRTC in 2008 precisely on the basis of such arguments, before being rescinded by the regulator in 2012, to be phased out completely by 2014. The rise of over-the-top services such as Netflix only further compounded the woes, so the story goes.

Yet, the evidence suggests that television is, for the most part, not struggling to survive but actually thriving. Broadcast television revenues did decline between 2008 and 2009, but only modestly, and were quickly restored and on the rise again by 2010. In 2008, broadcast TV revenues were roughly $3,381.4 million (including the CBC annual appropriation). They fell in 2009, but by 2010 had risen to $3,405.6 million. Revenues were just under $3,500 million last year.

Switching to inflation-adjusted dollars changes the picture somewhat, but only slightly. Seen from this angle, broadcast television revenues were roughly $3,454.7 million in 2000, peaked at $3,518 million in 2005 and have drifted down slightly since, holding fairly steady at around $3,400 million since 2008.

Small decline? Yes. But a calamity? Hardly.
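
For readers wondering what the switch to 'real dollars' involves, it is nothing more exotic than deflating each year's nominal revenues by a price index to a common base year. The sketch below illustrates the arithmetic only; the index values and the nominal revenue figure in it are placeholders, not the actual series used for the estimates in this post.

```python
# Illustration of converting nominal (current-dollar) revenue into constant,
# base-year dollars. The index values and revenue below are placeholders.

def to_constant_dollars(nominal, index_year, index_base):
    """Deflate a nominal figure to base-year dollars using a price index."""
    return nominal * (index_base / index_year)

cpi = {2000: 95.4, 2011: 119.9}   # placeholder price-index values
nominal_2000 = 1_000.0            # placeholder revenue in 2000, $ millions

# Revenue for 2000 restated in 2011 dollars:
print(round(to_constant_dollars(nominal_2000, cpi[2000], cpi[2011]), 1))
```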

That the 'TV in crisis' choir is wide of the mark becomes even clearer once we widen the lens to look at the fastest growing areas of television: specialty and pay tv services (HBO, TSN, Comedy Central, etc.) and television distribution. Specialty and pay television services have been fast-growing segments since the mid-1990s, and especially so over the past decade. They eclipsed conventional broadcasting as the largest piece of the TV pie in 2010, when revenues reached $3,459.4 million. Last year, that figure grew to $3,732.1 million.

Adding conventional, specialty and pay tv services together to get a sense of ‘total television’ revenues yields an unmistakable picture: with revenues of $7,224 million in 2011, television is not dead or dying. It is thriving.

TV remains at the centre of the internet-centric media universe and is growing fast. In fact, Total TV revenues quadrupled from $1.8 billion in 1984 to $7.2 billion in 2011; using ‘real dollars’, total TV revenues doubled from $3.5 billion in 1984 to just over $7 billion last year — hardly the image of a media sector in crisis.

Add to this cable, satellite and IPTV distribution and the trend is even more undeniable. In these domains, as indicated earlier, the addition of new services (first DTH satellite in the 1990s, followed recently by IPTV) and steady growth in cable TV mean that TV distribution has grown immensely, expanding more than ten-fold from revenues of $716.3 million in 1984 to $8,588.3 million in 2011 (in current dollars).

Adding “Total TV” and TV distribution revenues together, these segments of the network media industries accounted for just over $15.8 billion in 2011. As a matter of fact, the weight of all television segments in the network media economy has risen considerably over time, from accounting for 13.2 percent of all revenues in 1984, to 18.4 percent in 2000 and to 22.3 percent in 2011.

Of course, this does not mean that life is easy for those in the television industries. Indeed, all of these sectors continue to have to come to terms with an environment that is becoming structurally more differentiated because of new media, notably IPTV and over-the-top (OTT) services such as Netflix, as well as significant changes in how people use the multiplying media at their disposal.

While incumbent television providers have leaned heavily on the CRTC and Parliament to change the rules to bring OTT services into the regulatory fold, or to weaken the rules governing their own services (see Bell’s submission in its bid to take over Astral Media, for a recent example), OTT services are still minor fixtures in the media economy. For example, based on roughly 1.2 million subscribers, Netflix’s annual revenues were an estimated $115 million in 2011 – about 1.6 percent of “Total TV” revenues. Recent reports by Media Technology Monitor and the CBC as well as the CRTC’s (2011) Results of the Fact Finding Exercise on Over-the-Top Programming Services lead to a similar conclusion.

Part of the more structurally differentiated network media economy is also illustrated by the rapid growth of internet advertising. In 2011, internet advertising revenue grew to $2.6 billion, up from just over $2.2 billion a year earlier and $1.6 billion in 2008. At the beginning of the decade, internet advertising accounted for a comparatively paltry $110 million, but it has shot upwards since to reach current levels, demonstrating both fast growth and the fact that, like wireless services, internet advertising has not been significantly affected by downturns in the general economy.

To be sure, these trends have given rise to important new actors on the media scene in Canada, notably Google and Facebook, among others, who account for the lion’s share of internet advertising revenues. Indeed, based on common estimates of Google’s share of internet advertising revenues, the internet giant’s revenues in Canada in 2011 were in the neighbourhood of $1,300 million. This is indeed significant, enough to rank Google as the eighth largest media company operating in Canada by revenues, just after the CBC and SaskTel but ahead of Postmedia and MTS Allstream.

For its part, Facebook had an estimated 17.1 million users in Canada at the end of 2011. Based on estimated revenues of $9.51 per user, Facebook’s advertising revenue can be estimated at $162.6 million in 2011, or 6.3% of online advertising revenue – an amount that gives it a modest place in the media economy in Canada but which would not put it even close to the list of the top twenty or so TMI companies in this country.
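
Both the Netflix and Facebook figures rest on the same simple per-user arithmetic: an estimated user base multiplied by an estimated revenue per user, expressed as a share of the relevant revenue pool. A minimal sketch, re-using the estimates cited above:

```python
# Per-user revenue estimates and shares, using the figures cited in the text.

def estimated_revenue(users, revenue_per_user):
    """Annual revenue implied by a user count and revenue per user."""
    return users * revenue_per_user

def share_of(revenue, pool):
    """A revenue figure expressed as a percentage of a larger pool."""
    return 100 * revenue / pool

# Facebook: 17.1 million Canadian users at an estimated $9.51 per user.
facebook_2011 = estimated_revenue(17.1e6, 9.51)
print(round(facebook_2011 / 1e6, 1))             # ~162.6 ($ millions)
print(round(share_of(facebook_2011, 2.6e9), 1))  # ~6.3% of internet advertising

# Netflix: roughly 1.2 million subscribers, an estimated $115 million in revenue.
print(round(share_of(115e6, 7.224e9), 1))        # ~1.6% of 'Total TV' revenues
```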

While it is commonplace to throw digital media giants into the mix of woes that are, erroneously, trotted out as bedeviling many of the traditional media such as television in Canada, the fact of the matter is that Netflix’s impact on television revenues is negligible, while the impact of Google and Facebook is mostly irrelevant.

Where they may be more important, however, is in three other areas where the portrait is not so rosy: music, magazines and newspapers. With respect to music, it is not advertising that is at issue, but rather the manner in which online digital distribution, legal and illicit, as well as the culture of linking, is affecting the music industry. At some point I will write a full-length post on each of these sectors, but for now a simple sketch will have to do.

Music

While many have held up the music industry as a poster child of the woes besetting ‘traditional media’ at the hands of digital media, the music industry in Canada is not in crisis, although the picture is mixed. Using current dollars and summing the main components of the music industry – recorded music, digital sales, concerts and publishing royalties – the industry grew from $1,181.9 million in 2000 to a high of $1,373.7 million in 2008.

Music industry revenues across these four segments have generally stayed remarkably steady around the 2008 level, up to and including 2011, when revenues were $1,357.7 million. There is no crisis.

The picture is a little more troubling, however, when we switch the metric to ‘real dollars’: revenues reached a high of $1.5 billion in 2004 and declined from there to $1.316 billion last year. That is a significant decline, yes, but not a calamity, and the trend suggests a floor is in place below which further declines will be unlikely or very modest.

Radio

Radio stands in much the same position as the music industries. Revenues continued to grow until reaching a peak in 2008 of $1,990 million (including the CBC annual appropriation), a level at which they have stayed relatively flat since, with revenues of $1,949.5 million in 2011 (current dollars). Change the measurement from current dollars to inflation-adjusted, real dollars, however, and the picture changes slightly, with a gradual decline from just over $2 billion in 2008 to roughly $1.9 billion in 2011.

Magazines

Magazines appear to stand in the same position as the music and radio sectors, although I have not been able to update my revenue data for the sector for 2011. Extrapolating from trends between 2008 and 2010 to obtain an estimate for 2011, revenues have declined slightly on the basis of current dollars (from $2,394 million in 2008 to $2,135 million in 2011). The decline is more pronounced when using real dollars: a drop of about sixteen percent, from $2,457.8 million in 2008 to $2,071.1 million last year.

Newspapers

Perhaps the most dramatic tale of doom and gloom within the network media economy, at least in terms of revenues, comes from the experience of newspapers. Readers of this blog will know that in earlier versions of the “Network Media Economy in Canada” post, and other posts, I have been skeptical of claims that journalism is in crisis. I still am. Much along the lines of scholars such as Yochai Benkler, I believe we are in a period of heightened flux, but the emergence of new commercial internet-based members of the press (the Tyee and Huffington Post, for example), the revival of the partisan press (e.g. Blogging Tories, Rabble.ca), the growth of non-profits and cooperatives (e.g. the Dominion) and the rise of an important role for citizen journalists all indicate that journalism is not moribund or necessarily in a death spiral. In fact, these changes may herald a huge opportunity to improve the conditions of a free and responsible press.

At the same time, however, I also believe that traditional newspapers, whether the Globe and Mail, the Toronto Star or the Ottawa Citizen, are important engines in the overall network media economy. They serve as the content factories that produce the news, opinion, gossip and cultural style markers that set the agenda, and their stories cascade across the media as a whole in a way that is all out of proportion to the weight of the press in the media economy. In other words, the press still originates far more of the stories and attention that the rest of the media pick up, whether on television or via the linking culture of the blogosphere, than its weight suggests. Thus, problems in the press could pose significant problems for the media, citizens and audiences as a whole.

I have been reluctant to see newspapers as being in crisis, mostly because in previous years I felt that the trends had not been long enough in the making to draw that conclusion, and also because I think many of the wounds suffered by the newspaper business have been self-inflicted, the product of a mixture of hubris and badly conceived bouts of consolidation. I am now ready to change my tune when it comes to the state of newspaper revenues.

Newspaper revenues have plummeted. In current dollar terms, newspaper revenues peaked in the years between 2000 and 2006 at between $5.5 and $5.7 billion. They have fallen substantially since to just under $4 billion last year – a decline of 30 percent or so. Indeed, revenues fell by 9 percent just between 2010 and 2011.

In real dollar terms, the fall is even more pronounced. Newspaper revenues, on the basis of this measure, shrank by about $1.7 billion – or almost a third (30.7 percent) – in the five-year period between 2006 and 2011. This is the most clear-cut case of a medium in decline out of the ten sectors of the network media economy reviewed in this post.

Some Concluding Comments and Observations

Several observations and conclusions stand out from the preceding analysis. First, the network media economy has grown significantly over time, whether we look at things in the short-, medium- or long-term.

Second, while the network media economy in Canada may be small relative to the U.S., it is large by global standards. In fact, it is the ninth biggest media economy in the world.

Third, while most sectors of the media have grown substantially, and the network media economy has become structurally more differentiated and complex on account of the rise of new segments of the media, a few segments have stagnated in the past few years (music, radio, magazines). It is also now safe to say that two sectors appear to be in long-term decline: the traditional newspaper industry and wiredline telecoms.

The next and last table of this post gives a snapshot of the state of affairs across the network media economy as things stood at the end of 2011 by placing each of the sectors covered in this post in one of three categories: growth, stagnation and decline.

Table 6: The Network Media in Canada: Sectors Experiencing Growth, Stagnation or Decline

Growth                   Stagnation       Decline
Wireless Telecoms        Broadcast TV     Wiredline Telecoms
Internet Access          Music            Newspapers
Cable & Satellite        Radio
IPTV                     Magazines
Pay & Specialty TV
Internet Advertising

[i] Sources: PWC (2012), Global Entertainment and Media Outlook, for all countries and all segments, except the subcomponents of publishing rights and live concerts for the music sector, which are based on the IDATE DigiWorld Yearbook 2009. I have excluded video games, book publishing, and business-to-business sectors from the PWC figures to make the country profiles correspond to the definition of the network media economy in Canada used here and by the Canadian Media Concentration Research Project. Canadian sources are as listed in the CMCR project’s methodology, but are generally based on the CRTC’s Communications Monitoring Report as well as Statistics Canada’s Cansim tables and publications for the sectors that make up the network media economy.

[ii] I use BDU ARPU because the CRTC’s estimate for IPTV ARPU of $40.86 appears too low alongside its estimate for BDUs ($59.41), with which IPTV services compete, as well as the figures published by MTS Allstream in its Annual Reports that set its IPTV ARPU at $62.38. Sources: (1) Bell’s revenues are based on the CRTC’s Aggregate Annual Return. Dividing this number by the CRTC’s annual ARPU estimate for BDUs of $59.41/month in the 2011 Communications Monitoring Report (p. 96) yields 127.6 thousand subscribers for 2011. (2) Bell Aliant’s subscriber numbers are from its Annual Report (p. 2). Revenue figures are arrived at by multiplying subscriber numbers by the ARPU estimate for BDUs ($59.41/month in 2011) stated in the CRTC’s 2011 Communications Monitoring Report (p. 96); (3) Telus‘ subscriber numbers are from its 2011 Annual Report (p. 10) and 2010 Annual Report (p. 5). Revenue figures are arrived at through the same method as above. This probably inflates the Telus figures slightly because it includes the company’s DTH satellite TV service that it resells for Bell, but Telus officials I have spoken to assure me that true IPTV subscribers are the vast majority; (4) MTS Allstream’s subscriber and ARPU figures are from its 2011 Annual Report (pp. 3, 16), with subscriber numbers multiplied by an ARPU of $62.38, as per its Annual Report. Its 2008 Annual Report lists subscriber numbers from 2004 (p. 62); (5) SaskTel’s data are from its 2011 Annual Report (pp. 14, 29). Previous years are from the 2010 Annual Report (p. 45) and 2006 Annual Report (p. 49). SaskTel subscriber numbers, except for 2008, are multiplied by the MTS ARPU to arrive at total revenues because SaskTel does not present revenue figures for its IPTV service on a stand-alone basis and because MTS is the most comparable operator to SaskTel, as opposed to the CRTC’s average ARPU. Note: SaskTel revenue figures for this table revised on November 19th.

[iii] Ibid.

CRTC Kills Bell Astral Deal: What Happened and Why?

On Thursday this week, the CRTC killed the Bell Astral deal (news release, full decision). The decision was unexpected by just about everyone, including me, although all along I have argued that Bell’s bid to acquire Astral Media, the 9th largest media company in Canada, gave the CRTC ample ground to do exactly what it did. I also argued that it was the right thing to do, and that the CRTC should stop Bell’s take-over bid for Astral “dead in its tracks”.

Several things stand out from the decision. First, it sets a precedent. To find the closest parallel to this case, we’d have to reach back more than a quarter-of-a-century to 1986, when the regulator quashed a bid by Power Corporation – owner (then and now) of the Quebec-based newspaper group Gesca – to acquire Tele-Metropole, the cornerstone of what eventually became TVA: the “largest and most important private French-language television station in Quebec and one of the leading Canadian television stations in terms of local production”, as the decision noted at the time.

Second, the decision makes crystal clear that the CRTC, under new chair J.P. Blais, will take a broad view of media consolidation rather than its typically flinty-eyed view of the world. The CRTC will also look carefully at questions of market share and media concentration, and will do so not just using audience ratings as its preferred method but also revenues, in ways that capture trends within specific media sectors (e.g. tv) and across the media as a whole (see paras 29, 51-54).

Of course, numbers are never determinative, according to the CRTC (see para 52), nor should they be, I would argue. There is no ‘magic number’ upon which things turn, but measuring media concentration within and across the relevant telecom, media and internet sectors, across time as well as in relation to relevant trends elsewhere in the world, is an essential prelude to the conversation that needs to be had. The Commission now seems more ready than it has been in a long, long time to have that conversation. This is a very good thing.

Third, the CRTC rejected Bell’s claim about the threat of OTT services offered by Netflix, Apple, Amazon, etc., on the grounds that they were exaggerated. As the Commission (2011c) stated less than a year ago in its Results of the Fact-Finding Exercise on Over-the-Top Programming Services,

“. . . the evidence does not demonstrate that the presence of OTT providers in Canada and greater consumption of OTT content is having a negative impact on the ability of the system to achieve the policy objectives of the Broadcasting Act or that there are structural impediments to a competitive response by licensed undertakings to the activities of OTT providers” (p. 8).

That evidence has not changed and the CRTC said so in this decision (para 62). In 2008, according to a Media Technology Monitor/CBC study, about 3 percent of tv viewing occurred on the Internet (MTM/CBC, 2009, p. 49). According to their most recent study, “only 4% of Anglophones report only using new platforms to watch TV” (MTM/CBC, 2012, p. 4).

Netflix’s annual revenues, based on 1.2 million subscribers, can be estimated at $115 million in 2011, or about .7% of the total television universe (including BDUs). To this we can add Google, whose revenues in Canada last year were roughly $1.3 billion, or half of online advertising revenue (IAB, 2011). While that may have had an impact on the newspaper and magazine industries, there is no evidence it has done anything of the sort with respect to the broadcasting industry.

The CRTC also cast a jaundiced eye on Bell’s proposal for BellFlix – a new online, on-demand tv service for its subscribers – that would, so Bell argued, allow a combined BellAstral to compete effectively with foreign OTT operators like Netflix. Bell sprang the proposal on the CRTC on the opening day, but the CRTC didn’t buy it because, first, eleventh-hour proposals do not follow the rules. The deadline for complete applications was August 9th, not Day 1 of the hearings.

More importantly, an online “TV Anywhere” service is now a requirement of the internet-centric media world, not a bolt on somehow dependent on Bell’s take-over of Astral (para 61). In other words, Bell will have to launch such a service regardless, if it wants to meet current realities and consumer demand.

Fourth, the CRTC rejected Bell’s argument that there was no need to worry about vertical integration because, “This issue was recently exhaustively canvassed by the Commission in its Vertical Integration proceeding” (Bell, Supp. Brief, para 59). In fact, the CRTC observed that consumer groups, non-integrated distributors (Telus, MTS Allstream, SaskTel, Cogeco, Eastlink, etc.) as well as independent broadcasters (VMedia, APTN, Zoomer, etc.) “filed evidence and argument” that cast significant doubt about the capacity of the new vertical integration rules to effectively constrain “BCE’s alleged anti competitive behaviour with respect to program rights negotiations and product launches” (emphasis added, para 32; all submissions can be found here). Put simply, Bell has been acting as a brute ever since it re-acquired CTV just last year, and for this it has now paid the price.

More important for the long run is what the CRTC had to say about consolidation and vertical integration en route to squashing the deal. First, and to avoid over-stating the significance of what is going on, the CRTC noted that it has long been a fan of consolidation and vertical integration, and still is. Second, and with a big however, it also picked up on a point that I have made many times: greater consolidation and vertical integration has not been an unalloyed blessing (far from it); in fact, the process has been thrown into reverse in many other countries around the world.

In the U.S., the results of de-convergence have been remarkable. Aside from the mega-merger of Comcast and NBC-Universal last year, media companies have been beating a hasty retreat from vertical integration and “convergence”. The number of pay and specialty tv channels controlled by cable companies fell dramatically from the 50-55% range in the early 1990s to 15% by 2006 (Thierer & Eskelsen, 2008, pp. 55-56; Waterman & Choi, 2010).

As Viacom-CBS Chairman Sumner Redstone declared in 2005, “the age of the conglomerate is over” (Sutel, 2005). A year later, Time Warner President Jeffrey Bewkes called claims of convergence and synergy “bullsh*t”! Mainstream Media economist Alan Albarran (2010) summed up the lessons as follows: “Looking back, vertical integration was not a very successful strategy for media companies, and it was a very expensive strategy – costing billions of dollars over time. In the 21st century, the early trends have been to shed non-core assets that distract from the base of the company . . .” (Albarran, p. 47). Further examples could be piled up like leaves in autumn.

With this decision, the CRTC put Bell and the rest of the telecom and media industries on notice that claims about vertical integration and consolidation will no longer be taken as an article of faith, although it will still look upon such claims fondly. This is critical, and while it could put a halt to any more ‘blockbuster deals’ for the time being, I am more inclined to think that it’s too early to tell.

Fifth, the CRTC rejected Bell’s bid for Astral on the grounds that it did not pay sufficient attention to radio (paras 57 & 60).

Lastly, Bell’s benefits package was roundly criticized and rejected for being self-serving. Too many of the benefits would flow to activities that Bell was already doing (e.g. its otherwise laudable Mental Health promotion campaign) or to services that it had already been directed by the regulator to invest in, i.e. expanding broadband access in the North by its subsidiary Northwestel (para 59).

There is a bigger implication in this latter point too, however: a not-too-subtle slap not at Bell, but rather at the independent television and production sector, J-Schools and others who line up at the trough for their share of the public benefits package, all the while soft-pedalling their criticisms of ownership consolidation as a prerequisite to doing so, as the Canadian Media Producers Association and Canadian Writers Guild, for instance, did in this case and every other one like it in the past decade.

The CRTC’s decision thus interrupts the well-known cycle whereby the independent television and film production community pulls its punches in ownership cases in the hope of being in the acquiring company’s good books when it puts together its “public benefits package” as it seeks regulatory approval. This has created a seriously distorted and sordid cycle of dependency in which the long-run problems of higher concentration are traded away for short-term gains. It is, essentially, strategically taking scraps off the table instead of taking a principled stance on the matter, or one informed by any evidence one way or another about the desirability of such transactions.

It also could take the process out of the gutter insofar as it lifts the chill over independent broadcasters and those in the creative community, who will no longer have to cower out of fear that they will be frozen out of the big vertically integrated players’ programming schedules, or denied access to essential distribution facilities, if they speak out against a deal like this one. Those who stood opposed to the Bell Astral deal jeopardized their own access to the schedule of what is already the second largest tv operator in Canada, and which would have been the largest if the deal had been consummated (see para 28).

This is what economists call the ‘monopsony problem’, where there are many sellers and very few buyers. This problem is already acute, with the ‘big four’ – Shaw, Bell, Rogers and Quebecor, in that order – dominating 81 percent of the ‘total tv market’. That number would have grown to just under 90 percent, if Bell had its way.
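
For those who want the mechanics, the concentration figures here are four-firm concentration ratios (CR4): the sum of the top four firms' shares of total tv revenues. The sketch below is purely illustrative; the individual firm shares in it are placeholders chosen only to reproduce the roughly 81 percent and just-under-90 percent totals cited above, not figures from this post.

```python
# Illustrative CR4 (four-firm concentration ratio) calculation. The per-firm
# shares are placeholders; only the totals correspond to figures cited above.

def cr4(shares):
    """Sum of the four largest market shares, in percentage points."""
    return sum(sorted(shares.values(), reverse=True)[:4])

pre_deal = {"Shaw": 27.0, "Bell": 26.0, "Rogers": 15.0, "Quebecor": 13.0, "Astral": 8.0}
print(cr4(pre_deal))   # ~81, the pre-deal 'big four' share

post_deal = dict(pre_deal)
post_deal["Bell"] += post_deal.pop("Astral")   # Bell absorbs Astral's share
print(cr4(post_deal))  # ~89, i.e. just under 90 percent
```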

The last point I want to address for now is the claim being bandied about that the CRTC’s decision to kill the Bell Astral deal reflects a new activist regulator under the stewardship of its new chair, J. P. Blais.  The claim seems to have first emerged in a Globe and Mail article by Steve Ladurantaye at the beginning of the hearings when Blais read aloud a series of public criticisms of the Bell Astral deal.

Since Thursday, when the decision came down, the claim that the CRTC has become an activist commission with a consumer bent has gained a great deal of traction. Michael Geist, writing in the Toronto Star, says that this ain’t your mom and dad’s old CRTC, but one that has put the consumer back in the driver’s seat. A piece in the Globe and Mail by Steven Chase today makes the same case. Thursday night, over at the National Post, Terence Corcoran bemoaned the turn of events, seeing the CRTC as playing the populist card and pushing its activist agenda behind the “shadowy concept” of the public interest.

I have several reservations about this view. First, I am uncomfortable that most of the references are to consumers, with none to citizens and just a few to ‘the public’, and then in disparaging terms (Corcoran). These decisions are not just about cable and satellite bills (Globe & Mail); they are about citizens’ and the public’s access to the maximum range of entertainment, news and information sources possible. They are also about “the Public’s” ability to use these media, especially the internet, without having that use hedged about by restrictions and limits imposed by TMI giants bent on protecting their legacy television businesses and transforming the open internet into the pay-per model, where usage-based billing and bandwidth caps run roughshod over citizens’ communication rights. This is about communication rights, democracy and pleasure, not just cable and satellite bills.

Lastly on this point, in contrast to seeing the CRTC as suddenly having been remade in a consumer activist mould by J. P. Blais, I think we need to entertain a more critical view.

In this view, as social and political theorists have long shown and discussed (see, for example, C. Wright Mills, The Power Elite), the room for significant changes and unexpected outcomes increases immensely when there is a split amongst elites. And in this case, that split was on full display, with Bell standing on one side arrayed against not just citizens and consumers wary of yet more telecom-media-internet concentration, but the biggest players in the biz, indeed, almost all of the rest of the industry except Shaw, which sat on the sidelines.

Bell may be a behemoth, but with the company pitted against the rest of the industry and the public, the CRTC had a massive opening through which to think outside the box. And it did; make no mistake about it, this is a big decision. The real test, however, will be whether this continues when the industry once again closes ranks, as it so often does, or when most of the key players involved do what Shaw did this time around: sit on their hands. Will the CRTC be as emboldened then to pursue “the people’s” interest? For that, we’ll have to wait and see.

Code Yellow: Threats to Freedom of Expression, Online and Off, in Canada

This morning the Huffington Post published an article I wrote for PEN Canada as part of its ‘Non-Speak Week’, “a string of events and exchanges exploring the state of freedom of expression here in Canada”, according to the group. The original version of the piece, with links, is reproduced below.

We reach certain points in time, what the critical media scholar Robert McChesney calls “critical junctures”, or what the sociologist and media historian Paul Starr calls “constitutive moments”. Now is one such moment, and choices and decisions made now could tilt the evolution of the network media ecology in Canada toward a more closed, surveilled and centralized regime instead of an open one that strives to put as much of the internet’s capabilities into as many people’s hands as possible. The latter approach maximizes the diversity of voices and is essential to any free press — digital, networked, or otherwise — and to the role of communications media in a democracy.

Threats to an open media and internet ecology, and thus to creative and expressive freedoms in Canada, are unlikely to arrive outfitted in jackboots. Instead, they will arise from the slow, cumulative outcome of decisions that will affect levels of media and internet concentration, internet surveillance for law enforcement and national security reasons, and a concerted push to turn internet service providers (ISPs) and digital intermediaries such as Google and Twitter into agents on behalf of the entertainment and software industries’ copyright maximalist agenda.

In terms of media and internet concentration, Canada already has some of the highest levels of concentration and cross-media ownership in the world. The ‘big four’ telecom-media-internet (TMI) giants – Bell, Shaw, Rogers and Quebecor Media Inc. – already control roughly half of the network media economy in Canada. This is set to get much worse if the CRTC gives Bell’s bid to take-over Astral Media the green light in a decision expected later this year.

The problem of media and internet concentration is crucial to freedom of expression for many reasons. First, large TMI conglomerates are often rickety enterprises that spend more on paying down the debt incurred by acquisitions and mergers than on good journalism, investment in new technology and infrastructure, or support for open media.

Second, these same entities have turned to soft tools of censorship, such as usage-based billing and bandwidth caps, to protect their investment in legacy media, measures that are transforming the user-centric internet into the pay-per Internet. Such measures constrain what we can and cannot do with our internet connections. They privilege incumbents’ own online video services while discriminating against other sources. Bandwidth caps are not unique to Canada, but the fact that they are near universal among the big players and set at very low levels, with high prices relative to global standards, does set us apart from the rest of the world.

Lastly, a small number of massive integrated media and internet companies are more regulable than many entities of different stripes and sizes. In short, a few big firms make for juicy targets for those who see them as potential tools in their own efforts to push either a law-and-order agenda, as was the case last year with the Investigative Powers for the 21st Century Act (Bill C-51), or a strong copyright enforcement regime, as some sought but did not fully achieve with the Copyright Modernization Act passed earlier this year.

I think we need to push back against the tide. As part of my efforts to do so, over the past year I joined with the Digitally Mediated Surveillance research project and the New Transparency Project to create a video to discuss why internet surveillance and the Harper Government’s proposed lawful access bill specifically are bad for privacy, democracy, civil liberties and an open internet.

That bill died last Parliament and was to be reintroduced with the Government’s omnibus crime bill passed earlier this year. Its essential aim was to have ISPs and telecommunications providers retool their networks with far greater surveillance capabilities and to require telecoms providers, ISPs and search engines to disclose subscriber information, including name, address, IP address, and email addresses, to law enforcement officials without court oversight.

Fortunately, this aspect of the omnibus bill was stripped out at the last minute in the face of withering public criticism led by groups such as Open Media, dissent within the ranks of the Conservative Government, and even a broadside against it in one of Rick Mercer’s famous rants. The victory is significant, but a similar bill sits in the wings waiting for an opportune moment to be reintroduced. Moreover, University of Ottawa law scholar Michael Geist observes that telecoms providers and ISPs already comply with over 90% of requests from law enforcement for information about their subscribers without a warrant.

As I said earlier, a few massive firms are more likely to be pliable entities than recalcitrant ones. This example shows that this is, in fact, the case. Such murky ties outside the formal rule of law do not bode well for freedom of expression in Canada, online or off. For an open network media ecology and citizens’ rights to autonomy and free expression to flourish, the collection, retention and disclosure of personal information between private media companies and the state should be minimized, not maximized.

The final factor in this trilogy of forces bearing down on an open internet is the copyright maximalist agenda. A strong version of this was visible earlier this year in the United States with the Stop Online Piracy Act, or SOPA. SOPA would have required: (1) ISPs to block access to ‘rogue websites’, (2) search engines to make such sites disappear from their results, (3) payment providers like Paypal and Visa to cut off payments, and (4) advertisers to cut suspect sites off from advertising placement, among other things.

The fundamental remaking of the Internet such activities contemplated unleashed a firestorm of protest, ultimately leading to a tactical withdrawal of SOPA. Yet as SOPA was being withdrawn in the US, copyright maximalists here in Canada were on a roll.

They deployed hyperbolic rhetoric that carved up the world into good guys and bad guys, with repeated references to “wealth destroyers”, “parasites”, “rogues” and “pirates”, to make their case for why Canada needs strong digital locks, longer copyright protection terms, and ISPs and search engines that step up to the plate on their behalf.

Copyright maximalists spurned claims that their agenda had anything to do with freedom of expression, but last year a United Nations report on the internet and human rights argued exactly the opposite:

“. . . [C]utting off users from Internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, [is] disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights” (p. 21).

Article 19, in case you didn’t know, sets out freedom of expression and opinion as a fundamental human right at the global level and calls on the 167 countries that are party to it to promote and protect such rights to the fullest extent possible.

In a powerful testament to the ability of “the Public” to influence arcane matters of policy and law, the copyright maximalists got only a fraction of what they wanted: digital locks, yes, but no term extensions or requirement that ISPs, search engines and other digital intermediaries serve as tools on their behalf.

These examples suggest that when it comes to freedom of expression, there will be no smoking gun, just a slow tilt biasing the evolution of the telecom-media-Internet infrastructure in favour of greater control on behalf of incumbents, the state and copyright maximalists. For freedom of expression to flourish, we need to keep our eyes wide open to such efforts by stealth that seek to transform the network media ecology into one that is more closed, controlled and regulable.

Bell’s Bid to Take-Over Astral Encounters Mounting Opposition: Interview on the Lang and O’Leary Exchange

A week after the CRTC stopped accepting interventions and a little under a month before hearings begin in Montreal, opposition to Bell Canada’s bid to take over Astral Media — the eighth largest media company in Canada, largest radio broadcaster and fourth biggest pay tv provider — is mounting. Some of this made a big splash in the past week as the Say No to Bell campaign backed by Quebecor, Cogeco and Eastlink was kicked into high gear.

However, while we may wonder about the motives behind the cable companies’ campaign to derail Bell’s take-over of Astral, they are not alone. Far from it, actually. Telus, too, has lined up solidly against the deal. The Independent Broadcasters Group — those without the shelter of the big four vertically integrated companies (Bell, Shaw, Rogers and Quebecor) that dominate Canada’s telecom, media and internet landscape — has also raised serious concerns with the transaction.

The Canadian Media Producers Association filed a similar position paper but appears to think that its concerns can be taken care of with a little more money flowing from Bell to its members by upping the scale of the ‘tangible benefits’ that Bell has proposed (interventions can be found on the CRTC’s website here). The research consultancy Analysis also put out a study showing that Canada has among the highest levels of pay television concentration and vertical integration in the G8 countries.

Whereas Bell has argued that its acquisition of Astral does not trigger any need for review, based on its market share and the CRTC’s guidelines, many of the submissions beg to differ. Apparently, one of the things missing from Bell’s low-ball calculation of its own market share is that it didn’t count the audiences of pay tv services that Astral jointly owns with others, such as Viewer’s Choice Canada Inc., Historia & Séries+, and Teletoon. That’s one way to do the numbers, and if lazy journos keep repeating them, maybe they’ll become harder to dislodge as “the truth”.

Bell also argues that the CRTC’s vertical integration rules put into effect last fall eliminate any concerns that it will not offer access to its distribution facilities and programming on non-discriminatory terms. Well, no. The companies behind the Say No to Bell campaign, Telus, as well as the Independent Broadcasters Group all argue that the new rules are, at best, a “work in progress”, and that experience to date falls far short of supporting the Pollyannaish portrait of the world painted by Bell. Access to content is a thicket of discontent. So too is the fact that while Bell lifts its bandwidth caps for its own online video services, it applies them to rival services such as Netflix. This is the pay-per internet for everybody’s services except those within the Bell corporate fold. That fold will become much bigger if Bell does acquire Astral.

As readers of this blog might recall from a post last week, I prepared a study for the Public Interest Advocacy Centre, Consumers’ Association of Canada, Canada Without Poverty, and Council of Senior Citizens’ Organizations of British Columbia’s submission to the CRTC opposing the Bell/Astral deal. My basic argument is that the CRTC has ample grounds to seriously review this transaction and perhaps to put the kibosh on some of its central aspects altogether.

Most importantly, with respect to the specialty and pay tv services that are the jewels in the Astral Media crown, the transaction would give Bell a market share of over 42%, at the high end of even the CRTC’s own relatively weak standards. The deal is a boundary pusher and the CRTC ought to draw a line in the sand. The fact that the deal would further stitch up the control of large vertically integrated companies across the telecom-media-internet landscape, and at rates that are extremely high by both historical standards in Canada and relative to global norms, also provides a strong basis from which to dash this transaction.

It is good to see not just the opposition to this transaction gaining momentum but also that PIAC’s intervention and the study I wrote for them have gained a fair amount of attention. Yesterday, the CBC’s website gave some great coverage to the study. Later in the day, the Lang and O’Leary Exchange did as well and invited me in to chat about it with them. The seven-minute video clip from the Lang and O’Leary Exchange is below:

Communication & Media Return to the Centre of the Sociological Imagination: An Interview with Jeffrey Pooley

Recently I read Jeff Pooley and John Durham Peters’ chapter, “Media and Communications”, in the new Wiley-Blackwell Companion to Sociology. It’s a really great piece of work that is sweeping in its synthesis and provocative in some of its points and the criticism it offers to those in communication and media studies.

For those who want a tightly knit overview of how the field has developed since the inception of contemporary sociology in the late-19th century, and some of its main currents and preoccupations today, this is a great place to start. As I have done in the past with communication and media scholars Christian Fuchs, Michael Stamm and Robin Mansell, I got in touch with Jeff, who is associate professor of Media & Communication at Muhlenberg College in Allentown, PA, to see if he’d be interested in talking about the Media and Communications chapter with an eye to publishing our talk on my blog.

I am delighted that he agreed. Our conversation follows.

Dwayne: I’ve been reading your work for the last, I guess, five years, and Peters’ since the late-1990s. We met at the International Communication Association (ICA) conference in Chicago a few years back (2009). In the session, Sue Curry Jansen and Michael Schudson, amongst a few others, made compelling arguments about the need to revisit how Walter Lippmann has been set up in our field, especially by critics such as Noam Chomsky and Edward Herman, as a bogeyman, an architect and advocate of “manufacturing consent” as a necessary mode of governing in modern democracies.

Jansen and Schudson offered a compelling argument that Lippmann was nothing of the sort, at least not until later in his life, but rather a public intellectual who charted and criticized the use of propaganda techniques that had been forged in wartime (WWI) in the years after the war. That’s an aside, but the fact that you helped organize the panel on Lippmann’s legacy at ICA reflects well your position as an intellectual historian of the field and member of a new group of scholars revisiting, reviving and rewriting media history. Can you give us a quick sense of how you arrived where you’re at, where you teach and perhaps some of why you see media history as being so important?

Jeff: Thanks for setting this up, Dwayne. That was a great panel at ICA in 2009, packed to the gills. Schudson’s paper was later published in IJOC, as was Kurt and Gladys Lang’s contribution. Jansen’s paper on Lippmann was an instant classic, since published in Communication and Critical/Cultural Studies. Together with her brilliant 2008 chapter on Lippmann as “Straw Man of Communication Research” and a forthcoming book (Walter Lippmann) from Peter Lang, Jansen has exposed the Lippmann we all came to know through James Carey, Chomsky, and Stuart Ewen as an utter caricature.

She happens to be my colleague and mentor at Muhlenberg College–and the reason I joined the faculty in the first place. When I applied to Muhlenberg a decade ago, I was a Ph.D. candidate with a half-finished dissertation on the origins of the powerful-to-limited-effects storyline of American mass communication research. I had contacted Jansen when I had come across an abstract for a paper on the field’s history–a paper, I believe, that she never delivered for some reason or another. She invited me to talk to one of her classes. When a job listing at the Allentown, Pennsylvania college opened up, I canceled the class visit and wrote Jansen that I was planning to apply.

Jansen’s presence there, my own interest in teaching at a liberal arts college, and the department’s unusual and longstanding critical orientation (with an explicit social-justice mission) all attracted me to the place. Jansen is not well-known in the wider field, mostly because of her profound humility and resistance to self-promotion of any kind. As anyone who has read her scholarship (or her astonishingly rich book reviews) knows, she is among the most thoughtful and learned figures in our field. She deserves to be much more widely read.

My own interest in the history of media research grew out of undergraduate activism. I was puzzled by student apathy at Harvard, where protest gatherings during my time there in the mid- to late 1990s would attract only a handful of students. I became interested in a loose tradition of work–including the chastened writings of Western Marxists like Gramsci and Lukacs–that argued, more or less, that media and culture help snuff out the revolutionary zeal of the masses.

As a graduate student at Columbia I wanted to write a history of this kind of thinking–a history, really, of an argument about culture and quiescence. So far so good. But when I started to read the published work on the field’s history, I got caught up in that literature. Many existing histories struck me then as Whiggish, doing legitimacy work for a vulnerable discipline. In the same vein, many histories were narrated to establish originality, to discredit a contemporary disputant, or to mine for a usable past. I quickly set aside the history of leftist media thought.

My project shifted to looking at the history of the field’s history–the history, in a way, of its origin myths. I have been writing on these topics ever since. Of course the chapter with John Durham Peters–which is really a modest update of an earlier version that he authored–ties in with this interest.

To more directly answer your question, I do think we need more and better work on the history of media research. For one thing, there really is buried treasure in the archives that could inform and challenge current research. (As it is, there is a lot of wheel reinvention going on, just because we’re so ahistorical as a field.) More important to me is the work that disciplinary memory is or isn’t doing in graduate pedagogy and the field’s self-understanding.

As I’ve argued in a pair of chapters (here and here) and a more recent polemic, the U.S. field of communication research depends on a set of remembered stories, as well as amnesia regarding its messy institutional roots. These stories are often worse than misleading. And the institutional forgetting prevents us from confronting the institutional sources of intellectual poverty in the field, to paraphrase the title of a classic 1986 Peters article. These are some of the reasons I’ve recently started the Project on the History of Communication Research.

Dwayne: The chapter you and John wrote opens up with this amazing sweeping synthesis of the place of communication in the last one hundred and twenty-five years or so of sociology. You tick off names from the sociological canon–Tonnies, Tarde, Durkheim, Weber, Marx, Mead–and contemporary sociologists such as Pierre Bourdieu, Jurgen Habermas, Manuel Castells and Barry Wellman. The argument is bold: “sociology has been the study of communication” or at least made that a primary axis of social organization (and integration) — by another name. You and Peters, however, argue that sociologists now mention communication only in passing, if at all. Likewise, communication and media scholars have demoted sociology. You seem to be arguing that this ought not be the case, while also, at the end of the chapter, suggesting that the recent work of Castells and Wellman may be helping to restore the historical connection between the two fields: sociology as well as communication and media studies. Q2. Does that capture reasonably well some of your ideas and arguments?

Jeff: Yes, we do argue that mass communication was bound up in, and often central to, sociologists’ attempts to make sense of modern life, from the beginning of “classical” sociology in the late 19th century. And sociologists were among the most numerous, and arguably the most important, investigators of media on through the post-World War II years. Among American sociologists, John’s great reader (edited with Pete Simonson) makes this case directly, through excerpts: Mass Communication and American Social Thought: Key Texts, 1919–1968.

In the American case at least, sociologists more or less abandoned the study of mass communication in the 1960s. The whole thing is complicated, but I think two overlapping factors do most of the explanatory heavy lifting.

The first is a major change in foundation and government funding for social science, and the second is new mass comm doctoral programs in journalism schools around the same time. In the early Cold War, the dollars for social science came mostly from foundations like Ford and military agencies like the Office of Naval Research. In this first funding system, the focus was on problem-based, interdisciplinary research teams, and decisions about who and what to fund were greatly influenced by a few key “brokers” like Paul Lazarsfeld and Herbert Simon. After Sputnik in 1957–as Hunter Heyck has shown in a brilliant study–a second pattern of social science funding emerged, overlapping with the first for a few years.

This second system was characterized by civilian agencies like the National Science Foundation and, notably for communication research especially, the National Institute of Mental Health. By the early 1960s the whole interdisciplinary “behavioral sciences” milieu that Ford and the military agencies had helped to incubate had weakened. And the newer system favored medical sociology–drawing some would-be media sociologists like Eliot Freidson away–while in the realm of media more often funding psychological work.

So when Bernard Berelson claimed the field was “withering away” in 1959 he was, in a way, correct: his world of cross-disciplinary media research was dissipating. But waiting in the wings were Wilbur Schramm and his fellow J-school colonizers. Schramm, the consummate academic entrepreneur, had already in the late 1940s started a doctoral program at Iowa in mass communication, within its J-school. With help from the “Bleyer children”–students of Wisconsin journalism educator Willard Bleyer–Schramm’s J-school model spread around state universities in the Midwest in the 1950s. Thus, even as Berelson’s field was dying, Schramm’s J-school alternative was thriving.

And it was much more psychological in its intellectual orientation. The result of all this was that U.S. sociologists stopped studying media, or if they continued they were drawn into student-rich, high-paying communication programs. There were, of course, bursts of sociology of media, like the newsroom sociology of the mid- to late 1970s and early 1980s.

It has only been in the last 10 years, however, with the rise of the internet and digital culture, that sociologists returned in great numbers to the study of media. One index of the new interest is the fact that, in 2002, the “Sociology and Computers” section of the American Sociological Association changed its name (and focus) to “Communication and Information Technologies“. Elihu Katz and I have told this fall-and-rise story at greater length in a 2008 article.

Q3 (Dwayne) Why Wellman and Castells’ work? To me, there seem to be two reasons set out in your chapter. First, their emphasis on communication as the sinews of social interaction recovers a sense of Cooley and other late-19th and early-20th century scholars. Second, I may be wrong, but early in the chapter you say that the study of communication has long been a garbled mixture of descriptive and normative views, from Tonnies, Tarde and Dewey in the 19th and early-20th centuries to Habermas more recently.

In that passage I sense something of a lament, or at least a sense that the normative dimension has swamped the descriptive elements. Is there a lament for the commingling of descriptive and value-laden judgments? Are Wellman and Castells singled out not just because they bring communication back home to sociology, but also because they somehow get past this “facts” and “norms” problem, or get the balance better?

Jeff: That’s a really interesting question. You are right, first off, that the late-essay appearance of Castells and Wellman is a narrative device–a tie-back to the opening discussion of Cooley, Dewey and others. In different ways, Castells and Wellman look to social networks (online or otherwise) as constitutive of social order; hence the echo of Cooley’s communicative sinew. It’s also interesting that both really come out of urban sociology, so that their work, like early sociologists’, places media in a wider social context.

Still, Wellman and Castells are merely prominent stand-ins for a happy trend: the return of sociologists to media questions. Why now? The main reason, I think, is the unmistakeable, society-wide disruption brought on by the internet and digital culture.

As to the blending of fact and norm in the early works, John and I do claim that sociologists (especially in the U.S.) have tended to reassure, to minimize fears of media power. (Some of the figures we write about stress, of course, just the opposite, that media power operates to prop up an unjust status quo.) Perhaps you mean that some sociologists–Cooley, Dewey, Katz, Carey and, in a much more complicated way, Habermas–substitute their hopes for hard-nosed description. That’s there in the chapter, for sure. Pete Simonson has a great paper [pdf] on what he calls “communication hope” in that tradition.

Still, we did not intend for Wellman and Castells to count as late arrivals with a refreshing focus on description, over normative claims. I hesitate to speak for John, but my own view is that a neat separation isn’t possible, and that claims for value-freedom end up masking smuggled-in normative agendas. Castells is far more critical than the more-or-less upbeat Wellman, but both scholars mix the normative and the descriptive. What is your read on the two?

Q4 (Dwayne) Early in the synthesis that opens the chapter you and Peters also point to some of the trans-Atlantic/trans-national links between German political economy and evolutionary philosophy, on one hand, and the communication-centric approach to sociology, on the other, that developed in the U.S., again in the late–19th to the early–20th centuries mostly, by the likes of Cooley, Dewey, Mead, etc., with the latter learning at the feet of German and French sociologists such as Georg Simmel, Gabriel Tarde, and so on. Could you tell me a bit more, first, about the links between political economy and American sociology that you have in mind, and second about the transnational links?

Jeff: Our mention of the German connection is just to repeat the fact that many prominent turn-of-the-century U.S. sociologists–Park, Small, Mead, Ellwood, etc.–studied in Germany. The Germans really established the modern research university, and the earliest US examples–Clark, Chicago, Johns Hopkins–modeled themselves on the German example.

Up until very late in the nineteenth century, though, sociology didn’t really have a distinct identity, in Germany or the US. “Political economy” was, in effect, a catch-all term for the pre-disciplinary social sciences as a whole. So when we write about political economy here, we really have in mind something like “social science” – and in this period the direction of influence was from Europe to the U.S.

Did you have in mind something closer to what gets called “political economy” today, or the British tradition that Marx is critiquing? Certainly the influence of German and French scholars on U.S. media sociologists has been completely understudied.

Q5 (Dwayne) I think this is an interesting phenomenon that has not been explored enough. The only communication scholar that I know of who has looked at Anglo-German ties with much depth is the late Hanno Hardt (here and here). I also see this as the tip of a potentially bigger iceberg related to what I would call the “methodological nationalism” (Beck) that frames the intellectual history of our field and our ‘objects of analysis’ and that, if properly addressed, could lead to a deeper and more significant global frame of reference for both of these considerations. Others have talked about a broader trans-Atlantic intellectual culture that took shape in the same period that you are talking about.

I’m not well-versed in this literature, but D.T. Rodgers’ book, Atlantic Crossings: Social Politics in the Progressive Age (1998), makes a similar case for the social sciences generally. Alexander Badenoch and Andreas Fickers, in Materializing Europe: Transnational Infrastructures and the Project of Europe (2010), are also trying to write media history in a more trans-European framework. Your chapter opens some interesting possibilities along these lines in the first couple of pages when you discuss the work of Franklin Giddings in the same breath as you refer to John Dewey, George Herbert Mead, etc.

I agree with this move because Giddings, like the others, emphasizes, as you note in the chapter, that the movement of goods and ideas is the lifeblood of modern society, and serves as a form of integration against a backdrop of increasingly differentiated and complex modern societies. I wonder if your introduction of Giddings might be usefully developed further in relation to your discussion of the cross-pollination of ideas between U.S. and European sociologists. In particular, I’m thinking that there might be even deeper links between sociology and communication on both sides of the Atlantic, and between them and a broader set of developments that led to the formalization of political science, international relations and law, also in the late–19th and early–20th centuries.

I am not an expert on Giddings, but in previous work I have found him to be part of another group of American political sociologists, including Woodrow Wilson (the subsequent two-term president, 1913 – 1921) and Paul Reinsch, who served as founders of political science and international relations in the U.S. These scholars interacted a lot with French (Jean Luc Renaud), German, Dutch (Tobias Asser) and a few other European scholars in relation to global politics, multilateral institutions, international law, etc. They saw the modern world much in the same way that you describe European and American sociologists as doing: as a system tied together through flows and structures built out of a lattice-work of technologies, law, money, power and public opinion.

Like the scholars you focus on, this latter group (Wilson, Reinsch, Giddings) also placed great emphasis on, as you and Peters call them, the “material” (technologies, institutions, etc.) and “symbolic” aspects of communication, and the need to create mechanisms fit for the scale, pace and complexity of the modern world. Giddings expresses this view, for instance, in Democracy and Empire (1900), while Wilson put communication and public opinion alongside economic and technological integration and the rule of law as the basis of the “modern world system” (instead of such things being just idealistic patter hiding the ambitions of ascendant U.S. economic, military and foreign policy power). Sorry, this is a rather long wind-up.

My point is that while you and John peel back the veil, you don’t really develop the “transnational/global” aspect as fully as you might. As a result, we get a kind of thin trans-Atlantic culture when it comes to media, while questions about globalization emerge at the end of your chapter as really just a recent phenomenon. Do you see what I’m getting at, or am I reading something into these ties I shouldn’t be?

I also wonder if part of the reason for this outcome might be the snug coupling between the media and the nation-state that has typified our field. Your chapter draws this out in the section on “the national frame” (pp. 407–409). And you do so by pointing to the work of Benedict Anderson, who sees the “imagined community” of the nation-state as emerging co-terminously with the rise of the “big five” modern media since the 15th century: newspapers, magazines, movies, radio and television. You also point to how broadcasting was, and in some developing areas of the world still is, tied to a project of fostering national integration. The media/nation-state coupling, as you and Peters note, was clearly advertised by the names of radio broadcasters since the 1920s: e.g. Canadian Broadcasting Corporation (CBC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), etc. Clearly the media/nation-state coupling exists, but I wonder if you obscure the more global, or transnational, dimensions of (a) media history proper and (b) the intellectual cross-fertilization that shaped the development of communication and media studies as a field of inquiry by over-inflating that aspect. Any thoughts?

Jeff: That is a huge and important question. Yes, the transnational gets short shrift in the chapter, both–as you say–in the history of media proper and in the field’s intellectual history. It is bracingly true that disciplinary histories have overwhelmingly focused on self-encapsulated national traditions. Dave Park and I, in a forthcoming survey of work on the field’s history, found this to be very much the case. (We also found stunning North-South imbalances in the published work.)

Some of this reflects the insularity of national traditions, but a great deal derives from the limits of the historians drafting the history–including, in this case, John and me. The only pre-World War II literature I know reasonably well involving trans-Atlantic intellectual exchange is on public opinion and communication, which you also allude to.

From the 1870s on, systematic study of public opinion and its relation to social order was underway in Germany (e.g., Schaffle), France (e.g., Tarde, Le Bon), Britain (e.g., Bryce, Wallas), and by the early 1900s American figures (like Wilson, but also Lowell, Ross, Lippmann) were part of a trans-Atlantic conversation of just the kind you describe. I wasn’t aware of Giddings’ role in this discussion–so great, though, that books like Democracy and Empire are now in the public domain–but rather knew of his fascinating journey through the social sciences as they were differentiating: an erstwhile economist in the late 19th century, then on to a primary identity as a highly influential sociologist at Columbia University through the 1920s. Robert Bannister’s Sociology and Scientism has the only extended treatment of Giddings that I know of–though this great recent article by Cristobal Young on the relationship of sociology and economics during this period uses Giddings as a case in point.

But you make a really good point about the chapter, that it over-emphasizes the national at the expense of cross-national intellectual exchange. No doubt you’re right too about media history proper. This stuff–the transnational–is so damn hard to do well. Plus the sociology of academic translation and exchange is fascinating in its own right. I’d love to hear more about the political science/IR trans-Atlantic ferment you’ve touched on above.

The Significant Impact of the Bell Astral Deal on Media & Internet Concentration in Canada

Today was a good day. An unbelievably frantic one, but a good day nonetheless. I’ve been pouring blood, sweat and tears into a submission to the CRTC’s hearings on Bell’s bid to buy Astral Media to be held in Montreal next month. Today was the deadline for submissions to the CRTC.

My submission is part of an intervention by the Public Interest Advocacy Centre, Consumers’ Association of Canada, Canada Without Poverty, and Council of Senior Citizens’ Organizations of British Columbia opposing the Bell/Astral deal. The documents were filed with the CRTC today.  All submissions to the CRTC can be found on its website here.  

Bell claims in its application to the CRTC that a combined Bell/Astral “will not exercise market dominance in any sector of the broadcasting industry” (emphasis added, Bell, Reply, A14c). My submission on behalf of PIAC et al. argues otherwise: the transaction deserves very close scrutiny, and key elements of it should be stopped dead in their tracks.

The key findings in the submission can be summarized as follows:

  1. a successful bid by Bell to acquire Astral would catapult it to the top of the ranks in radio, with revenues of $500 million, 106 radio stations, just under 29 percent of the market – twice the size of its nearest competitors: Rogers, CBC and Shaw (Corus). Notwithstanding such an outcome, this would not trigger regulatory intervention under the CRTC’s new ownership rules or its Common Ownership Policy. Consolidation in radio increased in the early 2000s before drifting downwards in recent years. Radio is unconcentrated by conventional measures. The Bell/Astral deal, however, would reverse the tide and result in the highest levels of concentration in the past twenty-five years.
  2. there would be no direct impact on traditional television broadcasting.
  3. in the specialty and pay television market, Bell’s market share would rise sharply from 28% in 2011 to over 42%. This gives the CRTC ample grounds to intervene.
  4. across the total television universe, Bell’s position would be reinforced, rising sharply from 27% in 2011 to 35%. This, too, provides grounds for intervention.
  5. television markets worldwide tend to be more concentrated than often assumed. Canada is, at best, a middle-of-the-road performer on this measure, and often at the high-end of the scale. While concentration is slowly declining elsewhere, in Canada it is rising sharply; the Bell – Astral deal will compound the trend.
  6. Canada currently has the second highest level of cross media ownership and vertical integration among thirty-two countries studied by researchers in the International Media Concentration Research Project (Columbia University). It will be the highest amongst these countries if the CRTC does not pull the plug on the Bell — Astral deal.

The following figure shows the story.

Crossmedia Ownership/Vertical Integration Ratios — Canada # 1 amongst 32 Countries Surveyed Worldwide

Source: International Media Concentration Research Project with updates for 2011-2012 for Canada by author

Conclusions Drawn

Ultimately, the submission concludes:

  1. The CRTC probably has no choice but to give a pass to Bell with respect to its take-over of Astral’s radio assets. Bell meets the Commission’s requirements under the Common Ownership Policy, or at least will once it divests itself of ten stations in Vancouver, Calgary, Winnipeg, Toronto and Ottawa-Gatineau. This is unfortunate because, until now, radio has been one of the least concentrated and most diverse media in the country. The Bell-Astral deal will increase concentration significantly, whereas in most countries covered by the IMCR study, it is declining.
  2. Television is a different matter. There will be no direct effects on broadcast television. There will, however, be large and significant effects on the specialty and pay television and “total television” markets. Concentration levels in all of these areas are already very high by the CRTC’s own standards, historical norms, global standards and by CR and HHI standards used to measure media concentration in this submission.
  3. The impact will be most extreme in the specialty and pay TV market, where Bell will increase its share of the market from 26.6% to 42.2% — well in excess of every other major player in the market: Shaw (32.3%), Rogers (10.7%), CBC (4.1%) and QMI (3.2%). Together, these five companies will control 92.5% of this market. Out of the eighteen countries for which adequate data are available, Canada is currently the 11th most concentrated market. If the Bell – Astral deal is approved, we’ll fall down another notch to 12th place.
  4. The trend is similar with respect to the “total television” market, but not quite as pronounced. On the basis of the CR, it is already more concentrated than it has ever been in the last twenty-five years. In terms of the HHI, things could soon be right back where they were in 1984, when the HHI score was 2307.5 and the VCR all the rage. By my calculation, the HHI score is presently 1918, up significantly from three years earlier when it was 1,481. Should the Bell deal go through, Bell will have 35% of the market and the HHI score will be higher still at 2308.8 – a point more than twenty-five years ago (a minimal sketch of how the CR and HHI are calculated follows this list). The CRTC’s own concentration rules permit it to intervene actively in the face of such levels, and it should.
  5. Lastly, Canada already has the second highest levels of cross-media ownership consolidation and vertical integration in the 32 countries examined by the IMCR project. We don’t need to be first. The CRTC ought to oppose this venture on this ground alone, although it is unclear whether it even has the power, let alone the will, to do so. Concentration within and across the network media industries – demonstrably and empirically – has been extremely high, and is set to get higher yet.
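
For readers less familiar with the two measures used above, here is a minimal sketch of how a concentration ratio (CR) and the Herfindahl-Hirschman Index (HHI) are computed from a set of market shares. The share figures in the example are illustrative placeholders only, not the actual numbers from the submission.

# Minimal sketch (Python) of the CR and HHI calculations; illustrative shares only.

def concentration_ratio(shares, n=4):
    """Combined market share (in %) of the n largest firms (e.g. the CR4)."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of the squared percentage shares."""
    return sum(s ** 2 for s in shares)

# Hypothetical market shares (in %) for eight firms; they sum to 100.
shares = [35.0, 30.0, 12.0, 8.0, 5.0, 4.0, 3.0, 3.0]
print(concentration_ratio(shares))  # 85.0
print(hhi(shares))                  # 2392.0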

It is time to reverse the tide.

Weak Links and Wikileaks: How Control of Critical Internet Resources and Social Media Companies’ Business Models Undermine the Networked Free Press

I’ve written several times on Wikileaks over the past year-and-a-half. In this piece I draw together and update my thoughts on Wikileaks in light of recent developments, with a focus on how concentration of ownership and control over critical internet resources (internet access, domain name registries, webhosting sites, payment services, etc.) and the business models of social media companies such as Twitter compromise freedom of expression and the press on the Internet, with Wikileaks serving to illustrate the point.

What follows is a first draft of a chapter that I have written for a forthcoming book edited by Benedetta Brevini, Arne Hintz and Patrick McCurdy, Beyond Wikileaks: Implications for the Future of Communications, Journalism and Society. I would be delighted to hear any constructive comments and criticisms you might have.

In his seminal piece on Wikileaks, Yochai Benkler (2011) makes a compelling case for why Wikileaks is a vital element of the networked fourth estate, and why we should view its harsh treatment by the U.S. government as a threat to the free press. As he says, the case embodies a struggle for the soul of the internet, a battle that is being waged through both legal and extralegal means, with major corporate actors – Apple, Amazon, eBay (Paypal), Bank of America (Visa), Mastercard, etc. – using their control over critical internet resources to lean in heavily on the side of the state and against Wikileaks.

This piece reviews Benkler’s case for seeing Wikileaks as a crucial element of the networked free press, adds a few details to it, then presents an important new element to the story: the role that Twitter, the social media site, has played in what I will call the Twitter – Wikileaks cases. In contrast to the pliant commercial interests that Benkler discusses, Twitter fought hard in a series of legal cases during the last year-and-a-half to avoid having to turn over subscriber account information for several people of interest to the U.S. Department of Justice’s ongoing Wikileaks investigation: Birgitta Jónsdóttir, an Icelandic MP and co-producer of the Collateral Murder video whose distribution over the internet by Wikileaks put it, and her, on a collision course with the U.S. to begin with; Wikileaks volunteer and Tor developer Jacob Applebaum; and the Dutch hacktivist Rop Gongrijp.

The DoJ’s “secret orders” raise urgent questions about state secrets and transparency, the rule of law, internet users’ communication rights, and the role of commercial entities that control critical internet resources. The Twitter – Wikileaks cases also cut to the heart of journalism in light of how journalists routinely use social media such as Twitter and Facebook, but also search engines and internet access services, to access sources, share information, and generally to create and circulate the news.

Wikileaks and the Emergence of Next Generation Internet Controls

Information filtering, blocking and censorship have been the hallmark of China’s model of the internet since the 1990s. Now, however, we are at a critical juncture in the evolution of the Internet, with the United States government’s anti-Wikileaks campaign showcasing how such methods are being augmented by a wide range of legal and extra-legal methods in capitalist democracies. Indeed, governments the world over now rely on multidimensional approaches that use technical tools to filter and block access to certain kinds of content while normalizing internet control through legislation and by out-sourcing or privatizing such controls to commercial internet companies (Deibert & Rohozinski, 2011, pp. 4-7). Among other things, the Wikileaks case shows that such actors are often all-too-willing to serve the state on bended knee, albeit with some important exceptions to the rule, as the Twitter – Wikileaks cases discussed later in this chapter illustrate.

Three intertwined tendencies are stoking the shift to a more controlled and regulable internet. First, the concentration of ownership and control over critical internet resources is increasing: incumbent cable and telecom firms dominate internet access, while a few internet giants do the same with respect to search (Google), social media platforms (Facebook, Twitter), over-the-top services (Apple, Netflix), webhosting and data storage sites (Amazon) and payment services (Visa, MasterCard, Paypal), among others. Simply put, more concentrated media are more easily regulable than many players operating in a more heterogeneous environment. Second, the media and entertainment industries have scored victories in Australia, the UK, NZ, the US, Taiwan, South Korea, France and a handful of other countries for three-strikes rules that require Internet Service Providers (ISPs) to cut off internet users who repeatedly run afoul of copyright laws. A 2011 UN report condemned these measures as disproportionate and at odds with the internet’s status under the right to communication set out in Article 19 of the Universal Declaration of Human Rights (1948), but they remain operative nonetheless (La Rue, 2011). Lastly, the internet is being steadily integrated into national security and military doctrines, with thirty or so countries, notably the US, Russia and China, leading the push (U.S. Congressional Research Service, 2004). The U.S. Department of Defense’s revised “information operations” doctrine in 2003, for instance, defines the internet (cyberspace) as the fifth frontier of warfare, after land, sea, air and space (United States, Department of Defense, 2003). National security and law enforcement interests are also central in new laws currently being considered in the US (CISPA), Canada (Bill C-30) and the UK (Communications Data Bill).

These trends are increasing the pressure to turn Internet Service Providers (ISPs) and digital intermediaries into gate-keepers working on behalf of other interests, whether of the copyright industries or law enforcement and national security. This drift of events is already bending the relatively open internet, with its decentralized architecture pushing control to the ends of the network and into users’ hands, into a more closed and controlled model. Such trends are not new, but they are becoming more intense and firmly entrenched in authoritarian countries and liberal capitalist democracies alike. This is the big context within which the anti-Wikileaks campaign led by the U.S. government has unfolded.

Wikileaks and the Networked Free Press

There are counter-currents to these trends as well, and one of those is the rise of Wikileaks in the heart of the networked free press, just at a time when the press is struggling to find a sturdy footing in the internet-centric media ecology. While it is common to bemoan the crisis of journalism, Benkler (2011) strikes a cautiously optimistic note, laying the blame for the ongoing turmoil among traditional news outlets on their own self-inflicted wounds that have festered since the 1980s. The rise of the internet and the changing technological and economic basis of the media magnify these problems, he argues, but the internet is not responsible for them. In fact, nascent forms of non-profit, crowd-sourced and investigative journalism may be improving the quality of journalism.

Wikileaks is part and parcel of these trends. In the events that put it on a collision course with the U.S. government, the whistle-blowing site burnished its journalistic credentials by working hand-in-glove, at least after the “collateral murder” video, with The Guardian, the New York Times, Der Spiegel, Le Monde and El Pais to select, edit and publish the Afghan and Iraq war logs and embassy cables. By cooperating with respected journalistic organizations, Wikileaks ensured that its material was selected, edited and published according to professional news values rather than being driven solely by the logic of hacktivism or amounting to an indiscriminate and irresponsible dump of sensitive state secrets into the public domain. The collaboration between traditional news outlets and Wikileaks also demonstrated that gaining access to large audiences in a cluttered media environment still requires ‘big media’. Altogether, these efforts set the global news agenda four times in 2010. For its efforts, Wikileaks chalked up a bevy of prestigious awards for its significant contributions to access to information, transparency and journalism, adding to the long list of honours that it had already won from press and human rights organizations, including the British-based Index on Censorship, Amnesty International and Time Magazine, among many others, since its inception (see Wikileaks Press, nd).

Interestingly, while Wikileaks had been offering journalists free access to the war logs and embassy cables for some time, it was only after it offered exclusive national rights to The Guardian, New York Times, and other major newspapers around the world that journalists showed much of an interest. Rights, money, and market power are still important lures, and are cornerstones of market-based media, with or without the internet – although it is important to note that Wikileaks certainly does not follow the conventional commercial model, and offers an alternative to it.

The more important point for now, however, is that investigative journalism is not the exclusive preserve of the traditional press, but it is the signature feature of what Wikileaks does. That the interjection of Wikileaks into the journalistic process led to outcomes that are probably better than the ‘good ole days’ is underscored by the fact that while the New York Times consulted with the Obama Administration before publishing the war logs and diplomatic cables, it did not withhold the material for a year. Indeed, this is a big and important difference from its behaviour in 2005 when, at the behest of the Bush Administration, the New York Times delayed James Risen and Eric Lichtblau’s (2006) exposé of unauthorized, secret wiretaps conducted by the National Security Agency in cooperation with AT&T, Verizon and almost all of the other major telecom-ISPs in the U.S. (Calame, 2006). The war logs and embassy cables stories likely became headline news in 2010 faster than would otherwise have been the case because of Wikileaks’ role in these events, and its strategy of playing news organizations’ competitive commercial interests off of one another. Moreover, with little need to maintain good standing with the centres of political, military and corporate power, Wikileaks never assumed levels of deference similar to those of the New York Times and other established news sources.

All in all, Wikileaks is emblematic of a broader set of changes that, once the dust settles, will likely stabilize around a new model of the networked fourth estate, an assemblage of elements consisting of (1) a core group of strong traditional media companies; (2) many small commercial media (Huffington Post, the Tyee, Drudge Report, Global Journalist, etc.); (3) non-profit media (WikiLeaks, Wikipedia); (4) partisan media outlets (Rabble.ca, Daily Kos, TalkingPointsMemo); (5) hybrids that mix features of all the others; and (6) networked individuals (Benkler, 2009). The fact that WikiLeaks is so central to these developments, and so solidly at one with journalistic and free press traditions, helps to explain why neither it nor any of the newspaper organizations it partnered with has faced direct efforts by the U.S. to suppress the publication of WikiLeaks’ documents (Benkler, 2011). If the story ended here, it would be a happy one, a triumph of a plucky, determined watchdog willing to take on the powers-that-be, without fear or favour, a testimony to the power of the internet to contribute to freedom of expression, the free press and the public’s right to know – in other words, democracy.

Using Ownership and Control of Critical Internet Resources to Cripple Wikileaks

Unfortunately, however, the story does not end there. The problem, as Benkler (2011) states, is that what the U.S. was not able to obtain by legal measures, it gained with remarkable ease from private corporations and market forces. Thus, buckling under the slightest of pressure, Amazon banished Wikileaks’ content from its servers the same day (December 1, 2010) that Senator and Senate Committee on Homeland Security and Governmental Affairs Chair, Joe Lieberman (2010), called on any “company or organization that is hosting Wikileaks to immediately terminate its relationship with them”. Wikileaks quickly found a new home at webserver firm OVH in France but lost access to those resources after France’s Industry Minister warned companies on December 4 that there would be “consequences” for helping keep Wikileaks online. A day later, the Swedish-based Pirate Party stepped in to host the “cablegate” directory after it was taken offline in France and the US.

Yet, Wikileaks’ troubles didn’t end there because, just a day before it was kicked out of France, the U.S. company everyDNS dropped it from its domain name service. As a result, Internet users who typed wikileaks.org into their browser or clicked on links pointing to that domain came up with a page indicating that the site was no longer available (Benkler, 2011; Arthur, 2011). The Swiss DNS provider, Switch, faced similar pressure, but refused to buckle. It continues to maintain the WikiLeaks.ch address that Internet users still use to access the site, but has faced a barrage of Distributed Denial of Service (DDoS) attacks for doing so.
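
To make the mechanics of delisting concrete, here is a minimal, purely illustrative sketch (in Python; it assumes nothing about the registries or providers actually involved) of how one can check whether a hostname still resolves to an IP address:

import socket

def resolves(hostname):
    """Return True if the hostname still resolves to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # Resolution failed: the name may have been delisted, expired,
        # or blocked by the resolver being queried.
        return False

# Illustrative check; the result depends entirely on the resolver queried
# and the state of the DNS at the moment the script is run.
for name in ("wikileaks.org", "wikileaks.ch"):
    print(name, "resolves" if resolves(name) else "does not resolve")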

As Amazon, OVH and everyDNS took out part of WikiLeaks’ technical infrastructure, several other companies moved in to disable its financial underpinnings. Over the course of four days, Paypal (eBay) (December 4), MasterCard and the Swiss Postal Office’s PostFinance (December 6), and Visa (December 7) suspended payment services for donors to the site. Two weeks later, Apple removed a Wikileaks app from the iTunes store (Apple removes Wikileaks, 2010). Thus, within a remarkably short period of time, a range of private actors cut off Wikileaks’ access to critical internet resources. The actions did not kill the organization, but the financial blockade did contribute mightily to the fact that Wikileaks’ funding plummeted by an estimated 95 percent (Wikileaks, 2011).

Privacy Rights Online and Internet Companies’ Business Models: Weak Foundations for the Networked Fourth Estate and Communication Rights

One important entity has stood outside this state-corporate tryst on the outskirts of the law: Twitter. Indeed, it has stood alone among big American corporate internet media brands in refusing to assist the United States’ anti-Wikileaks campaign. Faced with a court order to secretly disclose subscriber information for three of its users, it said no.

In December 2010, at the same time as Wikileaks was being cut off from critical internet resources, the US Department of Justice demanded that Twitter turn over subscriber account information for Birgitta Jónsdóttir, Jacob Applebaum and Rop Gongrijp as part of its ongoing Wikileaks investigation. The information sought was not innocuous and general, but intimate and extensive: subscriber registration pages, connection records, length of service, Internet device identification number, source and destination Internet protocol addresses, and more (United States, 2011a, pp. 7-8). Twitter was also told not to disclose the request to the people concerned, and to stay quiet about the whole thing. It did none of this.

Instead, the company mounted a serious legal challenge to the Justice Department’s “secret orders” and pushed the envelope in interpreting what it could do to protect its subscribers’ information (McCullagh, 2011). In Twitter – Wikileaks Case #1, the social media site won a small victory by gaining the right to at least tell Jónsdóttir, Applebaum and Gongrijp that the DoJ was seeking information about their accounts (United States, 2010). They were given 10 days to respond before Twitter was compelled to comply with the DoJ order. Twitter also took the extra step of recommending that they seek legal help from the Electronic Frontier Foundation (EFF), a public interest law group and watchdog on all matters of digital and internet/cyberspace governance (a copy of Twitter’s letter to Gongrijp is available at Gongrijp, 2011).

The EFF has represented Jónsdóttir on the matter since, while Twitter’s lead counsel, Alex MacGillvray, has acted for the company. Interestingly, Iceland has also weighed in by strongly criticizing the US over Jónsdóttir, while a group of 85 European Union Parliamentarians condemned the United States’ pursuit of Wikileaks. They were especially critical of how the US was harnessing internet giants to its campaign. They “failed to see” how, among other things, the Twitter Order could be squared with Article 19 of the Universal Declaration of Human Rights. More to the point, they worried the United States’ actions were contributing to the rise of a

“national and international legal framework concerning the use of . . . social media . . . [that] does not appear to provide sufficient . . . respect for freedom of expression, access to information and the right to privacy” (Inter-Parliamentary Union, 2011).

The first Twitter – Wikileaks case, or “Twitter Order”, was a shallow victory. It allowed the company to inform Jónsdóttir, Applebaum and Gongrijp that they were of interest in the DoJ’s ongoing Wikileaks investigation, but did not prevent the disclosure. Yet, even this shallow victory looks positive relative to how easily Amazon, Apple, eBay (Paypal), Mastercard, Bank of America (Visa), everyDNS, etc. enlisted in the United States’ campaign against Wikileaks. Twitter staked out a decidedly different position that insisted upon the rule of law, speaking out in public and going beyond what was necessary to help its subscribers ensure that their rights, and personal information, were respected.

The full perversity of these circumstances only came to light in Twitter – Wikileaks Case #2, when Jónsdóttir, Applebaum and Gongrijp appealed part of the first case to overturn, and thus prevent, the requirement that Twitter hand over their account details to the DoJ (United States, 2011a). The U.S. District Court’s decision in the case in November 2011 had direct results and some potentially far-reaching implications.

The first direct result, as we have seen, is that Twitter had to hand over Jónsdóttir, Applebaum and Gongrijp’s subscriber information. Another, however, is that they have no right to know whether the DoJ has approached Facebook, Google or other Internet companies with secret orders, and if so, for what kinds of information, and with what results (p. 52). The courts seem to believe that neither they nor the public-at-large have the right to know the answers to these questions. For their part, Google, Facebook and Microsoft (Skype) have stayed silent on the affair despite their frequent pontification about internet freedom in a generic sense and mostly in relation to ‘axis of internet evil’ countries, such as Saudi Arabia, China, Russia and Iran, among a rotating cast of others.

If these results are not discouraging enough, more sweeping implications flow from two other directions in the second Twitter – Wikileaks ruling. The first is the poor analogy the court draws between the internet and banks to ground its decision as to why companies of the former type must hand over subscribers’ information just as readily as the latter do when served with a court order. There is a lot of potential discussion in this point alone, but for now it suffices to say that thinking about social media in terms of banking, insurance and clients is a long way from comprehending the internet as a public communications space.

Of more interest here is the mind-boggling claim that internet users forfeit any expectation of privacy – and hence, privacy rights – once they click to accept internet companies’ terms of service policies. As the court put it, Jónsdóttir, Applebaum and Gongrijp “voluntarily relinquished any reasonable expectation of privacy” as soon as they clicked on Twitter’s terms of service (United States, 2011a, p. 28). Thus, instead of constitutional values, law or social norms governing the situation, the court ruled that privacy rights are creatures of social media companies’ business models. Social media users, according to the court, would have to be woefully naive to expect that privacy is a priority value for advertising-driven online media, given that almost the entire business model of major Internet companies is about collecting and selling as much information about audiences as possible.

But this is ridiculous because Twitter, Facebook and Google’s terms of service policies are about maximizing the collection, retention, use and commodification of personal data, not privacy. It is as if the ruling is intentionally out of whack with the political economy of the internet so as to give the state carte blanche to do with digital intermediaries as it pleases. Christopher Soghoian (2011) captures the crux of the issue in relation to Google, but his comments apply to Internet companies in general:

Google’s services are not secure by default, and, because the company’s business model depends upon the monetization of user data, the company keeps as much data as possible about the activities of its users. These detailed records are not just useful to Google’s engineers and advertising teams, but are also a juicy target for law enforcement agencies.

Conclusions and Implications: Wikileaks, the Networked Fourth Estate and the Internet on Imperiled Ground

Things don’t have to be this way. The idea that privacy rights turn on the terms of service policies of commercial internet companies rests upon a peculiarly squinty-eyed view of things and leverages the mass production and storage of personal data enabled by Twitter, Facebook, Google and so forth for the advantage of the state. But even if we took corporate behaviour as our moral compass, Twitter has occasionally distinguished itself, as it did during the London riots/uprising in August 2011 by refusing to comply with the UK government’s requests to shut down its service and hand over users’ information, while Facebook complied. Thus, even by the standards of corporate conduct, Twitter’s behaviour could cultivate a higher sense of privacy amongst its users.

Concentrated Internet Markets and Small Details: Changing the business model of internet companies to minimize the collection, retention and disclosure of personal information, as the EFF recommends and as some non-commercial sites such as IndyMedia do, would be helpful. Sonic.net – a small ISP with 45,000 internet subscribers in the San Francisco area, and one also implicated in the Wikileaks case because Jacob Applebaum, a key figure in the Twitter – Wikileaks cases discussed above, has been one of its subscribers – does just this. Most ISPs, in contrast, take the opposite view, as a cursory review of the terms of service policies from AT&T, Comcast, Verizon and Time Warner – the big four ISPs in the U.S. that account for just over 60% of internet access revenues (Noam, 2012) – illustrates. While Sonic.net may offer a model of a free and open internet that maximizes its users’ privacy by minimizing data collection and retention, the fact of the matter is that, with less than 0.05 percent of the US internet subscriber base, it is easily ignored.

Ultimately, the relevant measuring rod of communication rights is not corporate behaviour or the market, but legal and international norms. Social norms govern how we disclose personal information in complex, negotiated and contingent ways, as well (Nissenbaum, 2011). Internet companies’ terms of service policies and the Twitter – Wikileaks cases largely ignore these realities, and thus are out of touch. These issues as well as the fact that the vast majority of people do not even read online terms of service policies — and those who do more often than not do not fully understand them — were brought to the court’s attention, but brushed aside. The decision at least makes it clear that the hyper-commercialized ‘free lunch’ model of the Internet comes with a steep price: our privacy rights and an entire industrial arrangement poised to serve as the handmaiden of the national security state.

The Virtues and Vices of Twitter: It is against this backdrop that the significance of what Twitter has done is clear. It has not flouted the law, but has been hoisted upon its own petard on account of its “business model”. But this is an irreconcilable contradiction of capitalism, to use a Marxist formulation, and won’t be solved by simply changing Twitter’s business model. Nonetheless, Twitter went beyond just complying with the law to afford as much respect for users’ rights as circumstances allowed. We might also ask whether Twitter’s recent adoption of a “transparency report” chronicling government requests for user information and for content take-downs reflects lessons learned by the company in the midst of the anti-Wikileaks campaign.

There is no need to pretend that Twitter is the epitome of virtue, because it is not. While Google, WordPress and others have all signed on to the broad statements of principles regarding privacy and online free speech rights in the Global Network Initiative, for example, Twitter, along with Facebook, has not. But in this area, pontificating is rife, and while Google preaches transparency and open information absolutism, it has said nothing direct or substantial about the U.S.’s treatment of Wikileaks, or even about whether it has been implicated in the campaign.

Deepening National Security Imperatives: The U.S. government’s campaign against Wikileaks further entrenches the post-9/11 securitization of the telecom-internet infrastructure, in the U.S. and globally, given the reach of the most well-known US telecom and internet giants (Risen & Lichtblau, 2006; Calame, 2006). Some courts have condemned expansive claims of state secrets and unbounded executive powers when it comes to national security matters, but others seem to grant the state a blank cheque (United States, 2006; United States, 2011b). When the law has not proved serviceable, as the earlier discussion suggests, important U.S. government figures have tip-toed around its edges, compliant private companies in tow, to get what they want. Congress has also stepped in occasionally to make legal what before was not, as in the passage of the much revised and expanded Foreign Intelligence Surveillance Act (FISA) in 2008, which is now up for renewal again and set to pass with little opposition in Congress (United States, 2008).

The Global Dimension: The campaign against Wikileaks cannot be kept to narrow confines and readily spills over into wide-ranging areas, including diplomatic and global internet policy angles. Nation-states, and the US in particular, are flexing their muscles and attempting to assert their sovereignty over cyberspace – a point that defines the Wikileaks case. Scholars such as Lawrence Lessig, Ronald Deibert, Jonathan Zittrain, Jack Goldsmith and Timothy Wu, among many others, have long shown that cyberspace is no more immune to government intervention than you or I are immune to the laws of gravity. Struggles over the Internet Corporation for Assigned Names and Numbers (ICANN), the rift between Google and China, and the United States’ campaign against Wikileaks clearly expose that fallacy for what it is. Legitimate criticisms of U.S. dominance of critical internet resources have been a staple of global internet politics since the ITU’s tussles with ICANN in the late 1990s, through WSIS I & II (2001 – 2005), to the creation of the Internet Governance Forum (2005), and back again to the ITU in 2012 (Mueller, 2010). The Wikileaks case offers a rational basis for such concerns. Criticisms of the U.S. in the Wikileaks case by EU parliamentarians, for instance, are of this kind. The Guardian newspaper in the UK made the same point, too, by choosing Jónsdóttir, Assange, Applebaum and Twitter’s chief legal counsel, Alex MacGillvray, for its list of twenty “champions of the open internet” in April 2012 (Ball, 2012). Many of the awards bestowed upon Wikileaks by respectable human rights and free press organizations before and after the organization’s Collateral Murder video, war logs and Embassy Cables trilogy in 2010 are of a similar kind.

The problem, however, is that legitimate criticisms are often mangled when mixed with attempts by strong states in authoritarian countries to use them as a Trojan Horse to smuggle in even less appealing bids to dominate their own sovereign slices of the internet. A balkanized collection of Web 3.0, nationally-integrated internet media spaces is the result. To the extent that the anti-Wikileaks campaign feeds such a pretext and fuels the ‘clash of sovereigns’ on the internet, it is unhelpful.

At the opposite end of the spectrum, the Twitter-Wikileaks rulings may serve the U.S. government’s bid to drive Wikileaks out of business well, but they have also lit a fire in the belly of hacktivist groups like Anonymous and LulzSec, for whom such things are a raison d’être. It may not be too much to suggest that the whiff of the anti-Wikileaks campaign fresh in the air helped to bring about the demise of recent attempts to strengthen national and international copyright laws – e.g. SOPA, PIPA and ACTA — given that, like the campaign against Wikileaks, each sought to leverage critical internet resources to control content and further restrict what people can do with their internet connections. If that, in fact, is the case, perhaps the battering of Wikileaks may have unintentionally served a noble cause.

Perhaps we can take solace in that and the fact that the distributed nature of the Internet means complete copies of Wikileaks files have been scattered across the planet, beyond the reach of any single state, no matter how powerful: the ultimate free speech trump card in a way. Yet, the fact that Wikileaks is now floundering, one of its founding figures on the lam, and its funding down to a tenth of what it once was means that we ought not to be so sanguine in our views. Happy stories about digital democracy should not distract us from the harsh reality that important open media principles have already been badly compromised, and more are at stake yet. Indeed, the deep ecology of the Internet is at stake, and so too is how we will conduct our lives in this highly contested place.

The ITU and the Real Threats to the Internet, Part IV: the Triumph of State Security and Proposed Changes to the ITRs

This is the fourth in a series of posts on the potential implications of proposed changes and additions to the ITU’s international telecommunications regulations (ITRs) for the internet (earlier posts are here, here and here).

As we assess these potential implications, it is necessary to sort out charges that are, in my view, overblown and alarmist versus those that have merit based on a close reading of the relevant ITU texts. I want to be clear that while I think that many of the charges being leveled at the ITU are trumped-up baloney, there are actually many reasons to be concerned. I’ll briefly reprise what I see as the overblown claims (OBCs), then set out the most important real areas of concern.

Overblown Claims (OBCs)

(OBC1) The ITU & the Net: The claim that new rules being proposed for the WCIT this December could give the ITU authority over the internet, when it currently has none, is one OBC (see here, here and here), as I laid out in blog post two.

(OBC2) The Global Internet Tax: This is the claim that some countries want to meter internet traffic at their borders, a kind of tax that Facebook, Google, Apple, Netflix and other internet content companies would supposedly be forced to pay to reach users on the other side of the toll – simultaneously serving to fund broadband internet upgrades in foreign countries, constricting the free flow of info, and keeping people sealed off behind the closed and controlled Web 3.0 national internet spaces that are being built in Russia, China, Saudi Arabia, Iran and other repressive states (see here and here).

The kernel of truth in this matter is that European telecom operators have proposed to establish a “fee-for-carriage” model – like cable TV – that would allow them to charge big internet content companies according to the volume of traffic they generate. I don’t like it at all. It is a full-scale assault on network neutrality. Google hates it too (Ryan & Glick; Cerf NYT; Cerf Congress). Net neutrality folks should be up in arms, and some are.

The problem at the root of the critics’ assertions, however, is that the proposal by ETNO is not unusual but embodies the same “fee-for-carriage” model that telecom carriers such as AT&T, Comcast, Bell, Telecom NZ, and others have pursued for the past decade (see post 3). It is wrong to construe the demand to make internet companies pay for carriage as a tax, let alone a diabolical scheme by authoritarian governments to take over the internet.

In addition, the idea of an internet metered at the border overlooks possible additions to Art. 3.7 of the ITRs that, as discussed in the last post, “enabl[e] direct international internet connections” between countries. The “Special Arrangements” set out in Art. 9 of the constitution also mean that telecom and internet companies can strike whatever deals they want to create end-to-end connectivity, so long as the countries on either end agree. Again, markets and contracts rule, not some kind of cyber-wall of Berlin.

(OBC3) Spam, Spam, Spam: The third, mostly bogus claim is that proposals to add references to spam in several places in the ITRs are the thin edge of a wedge that could lead to internet content regulation (Article 2.13; Art. 4.3a; and proposed new Art. 8A.5 and 8B). The proposal, however, urges countries to adopt “national legislation” covering spam – as many already do – and “to cooperate to take actions to counter spam” and “to exchange information on national findings/actions to counter spam”. This hardly seems like the thin edge of a wedge and, moreover, Article 2.13 explicitly excludes content as well as “meaningful . . . information of any type”.

Still, the U.S. is strongly opposed to such measures on the grounds that technological solutions are better suited to the problem than international law. Overkill, it says, and at odds with technological neutrality. Australia calls it too broad, Canada doesn’t like it either, and Portugal is still looking to see how it meshes with EU law. This is hardly an endorsement for the ‘global regulation of spam’ by the supposed axis of internet evil offering it, but the proposal is hardly tantamount to Armageddon, either (for annotated notes outlining countries’ views of proposed changes and additions, see here).

State Security, Splinternet and the Pending Death of the Open Global Internet: the Real Threats to the Internet

Now if you think I’m simply lining up as an apologist for the ITU, you’d be wrong, as the rest of this post makes clear. Several proposals now on the table (see below) would deal a devastating blow to the internet by blessing the efforts of individual countries to build their own closed and controlled national Web 3.0 internet spaces today. In fact, many countries, including Anglo-European countries, are doing just that, although to a degree and of a kind that is demonstrably different from what is being built in the list of ‘rogue states’ that are often identified with such projects: Russia, China, Saudi Arabia, Iran, etc.

Indeed, several sections of the ITU’s current framework already allow these kinds of projects, before any changes are made. Proposals to change or add new elements to the ITRs could make matters even worse, however.

Intercepting, Suspending and Blocking the Flow of Information since the 1850s: the Dark Side of the ITU

To see how, we need only realize that nation-states have always claimed unbridled power to control national communication spaces, and to intercept, suspend and block the cross-border flow of information. The authority to inspect, suspend and cut off communications that “appear dangerous to the security of the State or contrary to its laws, to public order or to decency” was first asserted by European governments in the 1850s during their drive to squelch popular rebellions. That authority was acknowledged by the Austro-German Telegraph Union and Western European Telegraph Union at the time, before being folded into the ITU when these organizations merged in 1865 (see Constitution, Article 34). That legacy hangs over the current WCIT talks like a dark cloud.

The supremacy of national security has been retained ever since and forms the basis of Articles 34, 35 and 37 in the ITU’s current Constitution, as the extracts below illustrate:

“Member States reserve the right to stop . . . the transmission of any private telegram which may appear dangerous to the security of the State or contrary to its laws, to public order or to decency” (Art. 34(1), Stoppage of Telecommunications, emphasis added).

“Member States also reserve the right to cut off, in accordance with their national law, any other private telecommunications which may appear dangerous to the security of the State or contrary to its laws, to public order or to decency” (Art. 34(2) Stoppage of Telecommunications, emphasis added).

“Each Member State reserves the right to suspend the international telecommunication service, either generally or only for certain relations and/or for certain kinds of correspondence” (Art. 35, Suspension of Services, emphasis added).

“Member States agree to take all possible measures . . . to ensur[e] the secrecy of international correspondence[, but] . . . reserve the right to communicate such corre­spondence to the competent authorities in order to ensure the application of their national laws or the execution of international conventions to which they are parties” (Art. 37, Secrecy of Telecommunications).

One proposal by the United Arab Emirates aims to replicate these measures in three new clauses to be added to the ITRs (Art. 7.3, 7.5 and 7.6, respectively), allowing such norms to do double-duty as high-level principles and day-to-day regulatory guidelines. The U.S. opposes the move, not because it sees telecoms and internet as a kind of global commons beyond the reach of harsh geopolitical concerns, but likely because the ITU already reflects the fact that national security concerns trump everything, and because it would not be unduly constrained by global norms anyway.  The US response to the UAE proposal is clear on the point: “We support retaining these provisions in the CS [constitution] and do not agree with . . . duplicating them in the ITRs”.

Cyberwar and the Fifth Domain of Battle: Militarization of the Internet versus Global Commons

The U.S. also refuses to be drawn into the proposals bandied about by Russia (mostly), China and a few other powerful military states over the past decade, this time to add a sprawling new section to the ITRs covering cybercrime, national security and cyberwar issues (Article 8A). The U.S. has rebuffed these moves for the same reasons mentioned above and, more to the point, because behind the veil of its global-internet-freedom-as-foreign-policy rhetoric is its more pressing conviction that the internet is now the fifth domain of war, alongside land, sea, air and space, a terrain where it grandiosely seeks to assert total infosphere dominance.

Seen in this context, overtures to “network defense and response to cyberattacks” (Article 8A.1) have no chance of adoption, even if setting aside the internet as a global commons under ITU protection, outside the field of war, might be a good idea. In any event, that Rubicon has already been crossed, with Russia believed to have been behind the cyber-attacks against Georgia in 2008 and the Obama Administration’s recent admission that it played a role in the Stuxnet attacks against Iranian nuclear facilities.

Bearing those points in mind, Russian proposals to carve out new rules of cyberwar are hypocritical, while the acknowledged facts of U.S. military policy mean that it will dismiss such notions out of hand. Based on this, worries that additions to the ITRs intended to deal with such matters could serve as a Trojan horse for repressive controls over the internet can probably be safely tossed aside. It is worth noting, however, that amidst all the hand-wringing over the ITU threat to the internet, no one, as far as I know, touches upon how the hard realities of military power shape global telecom and internet policy, instead settling into numbing nostrums that pit the state against the individual.

A Laundry List of Many Items with Potentially Really Big Implications

Beyond the stance of the U.S. on the above matters, and questions of network defense and cyberwar, Article 8A starts off innocently enough, but quickly opens into a chamber of horrors. It blandly refers to “confidence and security” in the title and the need to garner trust in online spaces (true enough), followed by a list of technical-sounding proposals about network security, data retention, data protection, fraud, spam, and so on.

Some of these principles are worthy of discussion, but the way they have been teed up for WCIT utterly fails to inspire confidence or hope. The measures are spearheaded by Russia and supported by China, with the latter telling us in the notes accompanying the proposals that new tools and rules are needed to:

“. . . protect the security of ICT infrastructure, misuse of ICTs, respect and protection of user information, build a fair, secure and trustworthy cyberspace . . . [with] new articles on network security in the ITRs”.

There is also a sundry list of other items included in the proposed new Article 8A, as well as others drawn from recommendations at past conferences, that deal with child online protection, fraud, user identity, etc. Taken one by one, most of these measures are reasonable, and most countries are already dealing with all of them, on their own and in cooperation with one another.

Looking across all these proposals, however, reveals a raft of threats that, taken together, would lay the foundation for controlled and closed national internet spaces subordinate to the unbounded power of the state in every way:

  • Anonymity and Online Identity are implicated in repeated references to the need for users to have a recognized identity. This comports well with laws in countries such as China that require internet users to tie their online identity to their ‘real-name’ identity, but if identifiability is the first step to regulability, as Lawrence Lessig claimed a decade ago, then this raft of references insisting on the need for online identity is a problem (e.g. proposed new Art. 3.6, 6.10, 8A.7, 8A.8). As ISOC states, such moves entail a “very active and inappropriate role in patrolling newly defined standards of behaviour on telecommunication and internet networks and in services”. I agree;
  • Privacy as well as Data Collection, Retention and Disclosure are mentioned as critically important values several times (Articles 3.6, 8A.1, 8A.3, 8A.4), but are hemmed in by the repressive national security norms described above. The wave of telecom and web monitoring bills currently under consideration in the US (CISPA), Canada (Bill C-30) and the UK (Communications Data Bill) alone suggests that there is a need to rein in governments’ strong inclination to apply new surveillance and security measures to the internet; the proposed changes to the ITRs would likely pressure telecom providers and ISPs to maximize rather than minimize the amount of personal data they collect, retain and disclose to state authorities.
  • Internet content regulation is seen as a threat lurking in many proposed changes to the ITRs, but I think most of these claims are, as noted above, overblown. The threat does loom large, however, and is mostly concentrated in the proposal to add the new Article 8A to the ITRs. Focusing our attention there, I agree with ISOC that the new rules could speed along and legitimate the development of national internet content regulation.

The worst examples of this come in two places in the new Article 8A.4 put forward by Russia. The first appears in a passage that reaffirms people’s “unrestricted” right to use international telecom services but immediately clips such rights with the caveat: “except in cases where . . . telecommunication services are used [to] . . . interfer[e] in the internal affairs or undermin[e] the sovereignty, national security, territorial integrity and public safety of other states”. One can only imagine how such measures might steel the hand of governments intent on interrupting the flow of tweets, Facebook updates, and other social media interactions that have played an important role, for example, in the Arab Spring, the Occupy Wall Street protests, Wikileaks, etc. This is an effort to replicate in the ITRs the national security values already found in the Constitution, similar to the UAE proposals outlined above, and it should be opposed for the same reasons.

Things are made worse yet by the inclusion of what we might call the ‘anti-Wikileaks’ clause immediately afterwards, a provision that would trump people’s right to communicate when telecom-internet facilities are used “to divulge information of a sensitive nature” (Art. 8A.4). Leaking ‘sensitive information’, however, is not a crime, and the idea that the “sensitive nature” of information should serve as a standard has no basis in free speech and free press law or ideals. It also assumes an unbounded conception of the state’s security interests, and gives the state carte blanche to do as it pleases.

It is impossible to reconcile such prohibitions against disclosing, publishing or leaking information with the goal of furthering the development of a global and open internet, or with the right to communicate and a free press. Accepting such a standard would be a much more potent Wikileaks killer than the heavy-handed measures the U.S. has already used, because it would give a legal sheen to what the U.S. has so far had to do by skirting around the edges of its own laws. Through such a clause, states would have free rein to crack down on whistle-blowers with impunity and without limits.

A Few Final Thoughts

This exercise has forced me to change my views. The proposed additions and changes to the ITRs are worse than I thought. It is important that the proposals now on the table for discussion at the upcoming WCIT get as much critical scrutiny as possible, and seen in that light, the WCITleaks site created by the folks at the Technology Liberation Front is a very useful tool.

That said, the analysis of the ITU and the proposed changes afoot has been largely strained through the prism of ideology, indiscriminately jumbling together overblown claims with real insights. As far as I can see, it is not the myriad of small changes to one section of the ITRs after another that constitutes the major problem, but rather a set of issues mostly clustered in the proposals by Russia, supported by China, to add new sections to Article 8. The damage such proposals could do to unsettled internet policy issues related to anonymity and online identity, privacy and personal data protection, as well as internet content regulation is enormous and can hardly be exaggerated.

On a more modest note, I understand that a battle over language will occur in other sections, notably in Articles 1 and 2 over the definition of telecoms, with those who believe that the ITU does not cover the internet rejecting at every turn proposals by those who do to pepper the ITRs with explicit references to the internet. I believe the ITU’s authority already covers the internet, but understand that the politics of language will play a big role as countries stake out their turf on the matter.

I see no new global internet tax on the horizon and do not believe that references to spam are the thin edge of the wedge that will lead to national internet content regulations being imposed in one country after another. The truly awesome power of the state over communications, including the internet, however, comes into view as soon as we realize how heavily the existing ITU framework is tilted in favour of national security imperatives.

Indeed, national security appears to trump everything, including the right to communicate and the free press. The fact that such norms are derived from a history of suppressing popular uprisings in Europe ought to make us think long and hard about their continued role amidst the political uprisings and revolts sweeping the world. Attempts by the UAE and Russia (with the support of China) to replicate repressive national security values in the ITRs through additions to Articles 7 and 8, respectively, do pose a threat to an open internet and political protest the world over.

This is important, too, because while I doubt that such measures have much chance of succeeding, they mesh with certain trends that define our times, with moves aplenty to impose comprehensive telecom and web monitoring plans in one country after another, as well as the copyright maximalist agenda that is turning telecom-ISPs across the world into internet cops on behalf of the media and entertainment industries. Such initiatives will continue with or without changes to the ITRs, which also highlights the reality that the ITU’s influence in these affairs is limited and not omnipotent.

Even if the most repressive aspects of the proposed changes and additions to the ITRs were approved, this would not bind the whole world to implementing a single internet model. It would, however, bless the national Web 3.0 spaces that are already being built on the basis of three layers of control: (1) the systematic use of filtering and blocking to deny access to restricted websites, and the recognition of such measures in national law; (2) dominance of national internet-media spaces by national champions (Baidu, Tencent, Yandex, Vkontakte, Facebook, Google, Apple, etc.); and (3) the active use of government-driven internet-media-communication campaigns (propaganda) to shape the total information environment (see Deibert & Rohozinski, ch. 2). The changes to the ITRs being sought by some countries, notably Russia and China, would add a fourth layer – international norms steeped in 19th-century models of state security – that would further entrench the Web 3.0 model and further lay waste to more important international norms associated with the right to communicate and a free press.
