the medium is the message

I found it interesting that, in the past week, two companies better known for distributing content took separate steps into the area of content creation. First up was YouTube's announcement of its NextUp program:

Today we’re announcing YouTube Next’s second initiative, designed exclusively for up-and-coming YouTube Partners: YouTube NextUp. YouTube NextUp is about accelerating the growth of the next big YouTube stars. Up to 25 Partners from around the United States will be selected for the development program, which offers:

  • $35,000 in funding to produce a new project, purchase new tools or advance their overall YouTube careers
  • A spot at a four-day YouTube Creator Camp in which they’ll benefit from 1:1 mentoring and learn an array of production techniques from leading industry and YouTube experts
  • Promotion of their final work and channel
  • The opportunity to become better connected with a special community of aspiring and talented content creators from around the world

Second was an announcement from Netflix that they are, in a way, “funding” the production of an original TV show. Here’s how they describe it:

In all of these cases, the shows are produced before we bring them to Netflix. “House of Cards” represents a slightly more risky approach; while we aren’t producing the show and don’t own it, we are agreeing to license it before it is successfully produced.

TechCrunch has a piece on the Netflix announcement which is, shall we say, rather enthusiastic (one of the hints being the title – “Netflix Original Content Is Much More Than A Strategy Shift — It Could Shift An Industry”):

But with House of Cards, the game changes. For the first time, they’re going to get people signing up to Netflix to get first access to content. And if it’s as good as the talent behind it suggests, they might get a lot of people signing up for that very reason.

And if that’s the case, they’ll be doing a lot more of these deals. And that would effectively make them a premium cable television channel — like HBO or Showtime. But they’ll be one with thousands more pieces of content for a lower monthly price. And they’ll be one not burdened by any artificial show times. Most importantly, they’ll be one not burdened by the cable television model — at all.

If Netflix’s new gamble here works, this is absolutely the future. In three years, we won’t be paying $75 a month to a giant cable conglomerate. We’ll be paying $8 to Netflix and other players that pop up — like HBO (by themselves), perhaps. Sure, there will still be the monthly fee for Internet. But most of us are already paying that. We’d just be removing the ridiculous $75 cable television fee that gives us thousands of channels with content only on at a certain time — and most of which we don’t want.

It’s interesting that TC mentions HBO. If memory serves they also used to be purely a distribution channel until they started getting into content production.

To some extent, I agree with the TC piece (though perhaps not as enthusiastically). I think watching TV on cable (especially with a DVR) is a rather horrific nightmare compared to watching streamed, on-demand content through the internet. Why flip through pages and pages of schedules or programming grids on digital tuners to figure out what you want to watch and when you can watch it, rather than simply looking for the show you want, when you want to watch it, and clicking? I really do think the latter form of delivery will become more and more prevalent over time.

On the other hand, I don’t think either Netflix or YouTube venturing into somewhat more direct participation in content creation (and to be clear, in neither case are they actually producing the content) is all that much of a sea change in itself. Vertical integration, whether in the entertainment industry or elsewhere, isn’t all that new, nor has it necessarily changed the experience of end users. Would I care whether I could watch the show on demand whenever I wanted through the internet, wherever there is a browser, rather than having to figure out when it would be broadcast and either buying a DVR or making sure I’m home? Yes, definitely. Would I care if, instead of getting first crack at the show through Netflix, I paid David Fincher’s production company (or anyone else for that matter) directly for the privilege of watching it first, streamed through his website? Not so much.

Will this really shake up the cable industry and/or kill it? I guess that depends in part on what pipe you’re using to connect to the internet to view your content.

 

web 2.0 principles

Interesting news (by way of an alert from Winston & Strawn) on a series of principles agreed upon by various internet and media companies (though it seems primarily the latter). To wit:

Several of the world’s leading Internet and media companies today announced their joint support for a set of collaborative principles that enable the continued growth and development of user-generated content online and respect the intellectual property of content owners.

The principles’ self-stated goals are “(1) the elimination of infringing content on UGC Services, (2) the encouragement of uploads of wholly original and authorized user-generated audio and video content, (3) the accommodation of fair use of copyrighted content on UGC Services, and (4) the protection of legitimate interests of user privacy”, though the emphasis appears to be primarily on obligations imposed on operators of user-generated content services to prevent the misuse of copyrighted materials.

As noted by the Winston article, the principles “…are not a legally binding agreement, and compliance with these principles by a user-generated content service provider does not preclude a copyright owner from filing a complaint for copyright infringement.”

It will be interesting to see the extent to which they’ll be adopted by the marketplace. I have a feeling they may not prove all that popular, due in part to the notable absence of some of the more prominent user-generated content sites, such as the 900 lb video-sharing gorilla now under the auspices of the googleplex.

silly lawsuit of the week

OK. Short version of the story in InformationWeek: Woman puts up a website. She puts a “webwrap” agreement at the bottom – i.e. basically a contract that says if you use the site then you agree to its terms. There is still some question as to whether such a mechanism is binding, but anyway…

So the Internet Archive of course comes along and indexes her site. Which, apparently, is a violation of the webwrap. So she sues, representing herself, I believe. The court throws out everything on a preliminary motion by IA except for the breach of contract claim.

InformationWeek observes that “Her suit asserts that the Internet Archive’s programmatic visitation of her site constitutes acceptance of her terms, despite the obvious inability of a Web crawler to understand those terms and the absence of a robots.txt file to warn crawlers away.” (my emphasis). They then conclude with this statement:

If a notice such as Shell’s is ultimately construed to represent just such a “meaningful opportunity” to an illiterate computer, the opt-out era on the Net may have to change. Sites that rely on automated content gathering like the Internet Archive, not to mention Google, will have to convince publishers to opt in before indexing or otherwise capturing their content. Either that or they’ll have to teach their Web spiders how to read contracts.

(my emphasis).

They already have – sort of. It’s called robots.txt – the thing referred to above. For those of you who haven’t heard of it, it’s a little file that you put at the top level of your site, and it’s the equivalent of a “no solicitation” sign on your door. It’s been around for at least a decade (probably longer), and most (if not all) search engines respect it.
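
For the curious, here is a minimal sketch of how a well-behaved crawler consults robots.txt before fetching anything. It uses Python’s standard urllib.robotparser module; the rules and the “ExampleBot” user-agent below are hypothetical, purely for illustration.

    import urllib.robotparser

    # Rules a site owner might publish at http://example.com/robots.txt:
    # everyone is asked to stay out of /private/, and "ExampleBot" to stay out entirely.
    robots_lines = [
        "User-agent: *",
        "Disallow: /private/",
        "",
        "User-agent: ExampleBot",
        "Disallow: /",
    ]

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_lines)

    # A generic crawler may fetch public pages but must skip /private/
    print(parser.can_fetch("SomeOtherBot", "http://example.com/index.html"))      # True
    print(parser.can_fetch("SomeOtherBot", "http://example.com/private/a.html"))  # False

    # ExampleBot has been asked to keep out of the site altogether
    print(parser.can_fetch("ExampleBot", "http://example.com/index.html"))        # False

A crawler that honors the convention simply declines to fetch anything for which the check comes back negative – no contract-reading required.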

From the Internet Archive’s FAQ:

How can I remove my site’s pages from the Wayback Machine?

The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection. By placing a simple robots.txt file on your Web server, you can exclude your site from being crawled as well as exclude any historical pages from the Wayback Machine.

Internet Archive uses the exclusion policy intended for use by both academic and non-academic digital repositories and archivists. See our exclusion policy.

You can find exclusion directions at exclude.php. If you cannot place the robots.txt file, opt not to, or have further questions, email us at info at archive dot org.

Robots.txt is one of a number of standardized methods of communication – privacy policies, etc. – and there will no doubt be more. The question is, will people be required to use them, or will they simply disregard them and act dumb?

Fair Use and the DMCA

An article in Wired News with the dramatic title of “Lawmakers Tout DMCA Killer” describes the most recent attempt to either: (a) water down the protections afforded to content owners by the DMCA; or (b) ensure the preservation of fair use rights on the part of users. As usual, each side has its own rhetoric to describe what is happening, so in fairness I took the liberty of offering readers of this blog the two alternative descriptions above. The nub:

The Boucher and Doolittle bill (.pdf), called the Fair Use Act of 2007, would free consumers to circumvent digital locks on media under six special circumstances.

Librarians would be allowed to bypass DRM technology to update or preserve their collections. Journalists, researchers and educators could do the same in pursuit of their work. Everyday consumers would get to “transmit work over a home or personal network” so long as movies, music and other personal media didn’t find their way on to the internet for distribution.

And then of course on the other side:

“The suggestion that fair use and technological innovation is endangered is ignoring reality,” said MPAA spokeswoman Gayle Osterberg. “This is addressing a problem that doesn’t exist.”

Osterberg pointed to a study the U.S. Copyright Office conducts every three years to determine whether fair use is being adversely affected. “The balance that Congress built into the DMCA is working.” The danger, Osterberg said, is in attempting to “enshrine exemptions” to copyright law.

To suggest that content owners have the right to be paid for their work is, for me, a no-brainer. That being said, I wonder whether the DMCA and increasingly complex and invasive DRM schemes will ultimately backfire – sure, they protect the content, but they sure as heck are a pain in the ass (just my personal take on it). For example, I’d love to buy digital music, but having experienced the controls that iTunes imposes and suddenly having all my tracks disappear, I just don’t bother with it now. Not to mention the incredible hoops one needs to jump through to display, say, Blu-ray on a computer – at least in its original, non-downgraded resolution. Why bother with any of it?

I wonder whether this is, in a way, history repeating itself. I am old enough to remember the early days of software copy protection – virtually every high-end game or application used fairly sophisticated techniques (like writing non-standard tracks on floppies in between standard tracks) in attempting to prevent piracy. Granted, these have never gone away altogether, particularly for super high-end software that needs dongles and the like, and of course recently there has been a resurgence in the levels of protection layered onto Windows, but after that initial, almost universal lockdown of software long ago, there came a period when many (if not most) software developers seemed to just stop using such measures. I’m not quite sure why, but I wonder if the same pattern will repeat with content rather than software. I suspect not. But hey, you never know.

In the meantime, off I go, reluctantly, in the cold, cold winter, to the nearest record shop to buy music the old fashioned way…


Belgian Court Slaps Google News

The short story: a Belgian court has ruled that Google must remove headlines and links posted on its news site for which it did not obtain permission to post, based on copyright law.

Rather unfortunate, I think. Sure, there are cases where some links and even partial reproduction should be prohibited, but in the context of what Google was doing it’s difficult to see the harm. In fact, I’m a bit surprised that the content owner would have pursued the claim. Google’s take:

“We believe that Google News is entirely legal,” the company said in a statement. “We only ever show the headlines and a few snippets of text and small thumbnail images. If people want to read the entire story they have to click through to the newspaper’s Web site.”

Google said its service actually does newspapers a favor by driving traffic to their sites.

But the court said Google’s innovations don’t get exemptions from Belgian data storage law.

“We confirm that the activities of Google News, the reproduction and publication of headlines as well as short extracts, and the use of Google’s cache, the publicly available data storage of articles and documents, violate the law on authors’ rights,” the ruling said.

If Google News violates authors’ rights, there will be a lot more that does as well. Tons. It will be interesting to see what happens on appeal as it could have rather far-reaching implications – at least in Belgium.