the medium is the message

In the past week I found it interesting that two companies better known for distributing content took separate initiatives in the area of content creation. First up was the announcement by YouTube of its NextUp program:

Today we’re announcing YouTube Next’s second initiative, designed exclusively for up-and-coming YouTube Partners: YouTube NextUp. YouTube NextUp is about accelerating the growth of the next big YouTube stars. Up to 25 Partners from around the United States will be selected for the development program, which offers:

  • $35,000 in funding to produce a new project, purchase new tools or advance their overall YouTube careers
  • A spot at a four-day YouTube Creator Camp in which they’ll benefit from 1:1 mentoring and learn an array of production techniques from leading industry and YouTube experts
  • Promotion of their final work and channel
  • The opportunity to become better connected with a special community of aspiring and talented content creators from around the world

Second was an announcement from Netflix that they are, in a way, “funding” the production of an original TV show. Here’s how they describe it:

In all of these cases, the shows are produced before we bring them to Netflix. “House of Cards” represents a slightly more risky approach; while we aren’t producing the show and don’t own it, we are agreeing to license it before it is successfully produced.

TechCrunch has a piece on the Netflix announcement which is, shall we say, rather enthusiastic (one of the hints being the title – “Netflix Original Content Is Much More Than A Strategy Shift — It Could Shift An Industry”):

But with House of Cards, the game changes. For the first time, they’re going to get people signing up to Netflix to get first access to content. And if it’s as good as the talent behind it suggests, they might get a lot of people signing up for that very reason.

And if that’s the case, they’ll be doing a lot more of these deals. And that would effectively make them a premium cable television channel — like HBO or Showtime. But they’ll be one with thousands more pieces of content for a lower monthly price. And they’ll be one not burdened by any artificial show times. Most importantly, they’ll be one not burdened by the cable television model — at all.

If Netflix’s new gamble here works, this is absolutely the future. In three years, we won’t be paying $75 a month to a giant cable conglomerate. We’ll be paying $8 to Netflix and other players that pop up — like HBO (by themselves), perhaps. Sure, there will still be the monthly fee for Internet. But most of us are already paying that. We’d just be removing the ridiculous $75 cable television fee that gives us thousands of channels with content only on at a certain time — and most of which we don’t want.

It’s interesting that TC mentions HBO. If memory serves, HBO also used to be purely a distribution channel until it started getting into content production.

To some extent, I agree with the TC piece (though perhaps not as enthusiastically). I think watching TV on cable (especially with a DVR) is a rather horrific nightmare compared to watching streamed, on-demand content through the internet. Why flip through pages and pages of schedules or programming grids on digital tuners to figure out what you want to watch and when you can watch it, rather than looking for the show you want to watch, when you want to watch it, and clicking? I really do think the latter form of delivery will become more and more prevalent over time.

On the other hand, I don’t think either Netflix or YouTube venturing into somewhat more direct participation in content creation (and to be clear, in neither case are they actually producing the content) is all that much of a sea change in itself. Vertical integration, whether in the entertainment industry or elsewhere, isn’t all that new, nor has it necessarily changed the experience of end users. Would I care whether I could watch the show on demand whenever I wanted through the internet, wherever there is a browser, rather than having to figure out when it would be broadcast and either buying a DVR or making sure I’m home? Yes, definitely. Would I care if, instead of getting first crack at the show through Netflix, I paid David Fincher’s production company (or anyone else for that matter) directly for the privilege of watching it first, streamed through his website? Not so much.

Will this really shake up the cable industry and/or kill it? I guess that depends in part on what pipe you’re using to connect to the internet to view your content.

 

canadian hacker puts judge in prison

Odd where you find stuff and don’t find stuff. Noticed this story in The Inquirer. The nub:

The case was all started when a Canadian hacker Brad Willman broke into the judge’s Irvine home computer and discovered sexually explicit images of young boys and a diary that revealed Kline’s fantasies involving young boys. A subsequent police search of the Judge’s court computer revealed more images and more dodgy Web sites.

Kline is the judge in question, in Orange County. Apart from the irony of the situation, I thought it somewhat interesting that the story didn’t (apparently) see much coverage in Canada, notwithstanding the nationality of the hacker in question.

Fair Use and the DMCA

An article in Wired News with the dramatic title of “Lawmakers Tout DMCA Killer” describes the most recent attempt to (a) water down the protections afforded to content owners by the DMCA, or (b) ensure the preservation of fair use rights on the part of users. As usual, each side has its own rhetoric to describe what is happening, so in fairness I took the liberty of offering readers of this blog the two alternative descriptions above. The nub:

The Boucher and Doolittle bill (.pdf), called the Fair Use Act of 2007, would free consumers to circumvent digital locks on media under six special circumstances.

Librarians would be allowed to bypass DRM technology to update or preserve their collections. Journalists, researchers and educators could do the same in pursuit of their work. Everyday consumers would get to “transmit work over a home or personal network” so long as movies, music and other personal media didn’t find their way on to the internet for distribution.

And then of course on the other side:

“The suggestion that fair use and technological innovation is endangered is ignoring reality,” said MPAA spokeswoman Gayle Osterberg. “This is addressing a problem that doesn’t exist.”

Osterberg pointed to a study the U.S. Copyright Office conducts every three years to determine whether fair use is being adversely affected. “The balance that Congress built into the DMCA is working.” The danger, Osterberg said, is in attempting to “enshrine exemptions” to copyright law.

To suggest that content owners have the right to be paid for their work is, for me, a no-brainer. That being said, I wonder whether the DMCA and increasingly complex and invasive DRM schemes will ultimately backfire – sure, they protect the content, but they sure as heck are a pain in the ass – just my personal take on it. For example, I’d love to buy digital music, but having experienced the controls that iTunes imposes and suddenly having all my tracks disappear, I just don’t bother with it now. Not to mention the incredible hoops one needs to jump through to display, say, Blu-ray on a computer – at least in its original, non-downgraded resolution. Why bother with all of that?

I wonder whether this is, in a way, history repeating itself. I am old enough to remember the early days of software protection – virtually every high-end game or application used fairly sophisticated techniques (like writing non-standard tracks on floppies in between standard tracks) to attempt to prevent piracy. Granted, these have never gone away altogether, particularly for super high-end software that needs dongles and the like, and of course there has recently been a resurgence in the levels of protection layered onto Windows. But after the initial, almost universal lockdown of software long ago, there came a period where it seemed many (if not most) software developers simply stopped using such measures. At least that’s what seemed to happen. I’m not quite sure why, but I wonder if this same pattern will repeat with content rather than software. I suspect not. But hey, you never know.

In the meantime, off I go, reluctantly, in the cold, cold winter, to the nearest record shop to buy music the old-fashioned way…


Wikiality – Part III

A bit of an elaboration on a previous post about the use of Wikipedia in judgements. I cited part of a New York Times article, which had in turn quoted from a letter to the editor by Professor Kenneth Ryesky. The portion cited by the NYT article suggested that Ryesky was quite opposed to the idea, which wasn’t really the case. He was kind enough to exchange some thoughts via e-mail:

In his New York Times article of 29 January 2007, Noam Cohen quoted a sentence (the last sentence) from my Letter to the Editor published in the New York Law Journal on 18 January 2007. You obviously read Mr. Cohen’s article, but it is not clear whether you read the original Letter to the Editor from which the sentence was quoted.

Which exemplifies the point that Wikipedia, for all of its usefulness, is not a primary source of information, and therefore should be used with great care in the judicial process, just as Mr. Cohen’s article was not a primary source of information.

Contrary to the impression you may have gotten from Mr. Cohen’s New York Times article of 29 January, I am not per se against the use of Wikipedia. For the record, I myself have occasion to make use of it in my research (though I almost always go and find the primary sources to which Wikipedia directs me), and find it to be a valuable tool. But in research, as in any other activity, one must use the appropriate tool for the job; using a sledge hammer to tighten a little screw on the motherboard of my computer just won’t work.

Wikipedia and its equivalents present challenges to the legal system. I am quite confident that, after some trial and error, the legal system will acclimate itself to Wikipedia, just as it has to other text and information media innovations over the past quarter-century.

Needless to say, quite a different tone than the excerpt in the NYT article. Thanks for the clarification, Professor Ryesky.

Wikiality

Interesting post on the Wellington Financial Blog about “Wikiality” – the practice of taking stuff in Wikipedia as the truth, or, to quote: “a reality where, if enough people agree with a notion, it becomes the truth.”

JN notes that Wikipedia has been cited by the courts, and this is reason for concern. A snippet:

The practice poses two problems:

  1. The references may be inaccurate; and
  2. Even if accurate, the references are subject to change at any point in the future, making it difficult for any future decisions to refer back to the original or understand the context in which it was made.

Given recent reports of Microsoft offering to pay individuals to make changes to certain Wikipedia articles in which they have a vested interest, the credibility of the site as a definitive reference source again comes into question.

A few of my colleagues at the firm expressed bemusement when a recent case in Ontario (don’t have the citation, sorry) also cited Wikipedia.

I am quite a big fan of Wikipedia. It is, I think, a rather useful and handy tool to refer to from time to time. Do I take it as gospel? No. Would I use it if I were trying to concoct an antidote for a poison that was about to kill me? Probably not. Would I cite it in a legal research paper? Possibly. In fact, quite likely.

Although Wikipedia is by no means without its weaknesses, it also has its strengths. Sure, there is a possibility of inaccuracy. But then again, isn’t something less likely to have inaccuracies if it is reviewed (and edited) by more eyes (and more minds)? Isn’t it more likely that if there is a dispute about what is and isn’t correct, it will come to light, just like the Microsoft incident?

And what source, can it be said, is free of inaccuracies? Certainly not The New York Times. Although the Gray Lady is quick to point out that it was “deceived” by an errant reporter, it is less quick to reflect on the fact that it published fabricated stories. That, of course, is the clearest example, but history is rife with examples of inaccurate or misleading stories in the press. Less clear, of course, is media bias – one need only refer to Manufacturing Consent. I don’t necessarily agree with all that book has to offer, but it certainly provides some food for thought.

What about scientific publications? Hmmm. Well. Again, truth is quite often relative. The clearest examples are, of course, outright fabrications. Dr. Hwang Woo-suk’s paper on producing the first cloned stem cell line, for instance, was considered the truth for several years, until he was discredited. And more generally speaking, is it not true that, in the world of science, what is considered to be the truth is what most scientists believe to be true? Is that not the system of peer review? A great read on this topic is The Structure of Scientific Revolutions (as an aside, it’s also the book that introduced the phrase “paradigm shift” into popular parlance). I won’t bore you with details, but suffice it to say that, at the end of the day, science, at least in concept, may not be that far from wikiality.

My point isn’t necessarily to skewer existing sources of “truth” but rather to point out that such sources aren’t necessarily more reliable or accurate, or less fallible, than something like Wikipedia.

And as for things changing? Make a copy.
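
For instance, if the worry is that a cited article will have changed by the time anyone looks it up, one practical answer is to cite and keep the specific revision. A minimal sketch of that idea, assuming Python with the requests library and Wikipedia’s permanent-link URL format (where the oldid parameter pins a single revision); the article title and revision id below are purely illustrative:

    import requests

    # Illustrative values only: take the real ones from the article's "Permanent link".
    TITLE = "Wikipedia"
    OLDID = "123456789"

    # Wikipedia's permanent-link format: /w/index.php?title=<title>&oldid=<revision id>
    resp = requests.get(
        "https://en.wikipedia.org/w/index.php",
        params={"title": TITLE, "oldid": OLDID},
        timeout=30,
    )
    resp.raise_for_status()

    # Save a local copy of exactly the version being cited.
    with open(f"{TITLE}_rev{OLDID}.html", "w", encoding="utf-8") as f:
        f.write(resp.text)

That way a later reader can check the same text a decision (or paper) actually relied on, even if the live article has since been edited.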


Hmmm…. interesting…. oops, might have just violated a patent…

Well, not quite. I’m being a bit tongue in cheek. But continuing on the theme of interesting patents, a story on Boing Boing refers to Flickr filing a patent application for “interestingness” and how some feel they shouldn’t be able to get one:

I read the Flickr patent this morning and FWIW I don’t think Flickr should be able to get a broad patent on “interestingness”. There’s a very large number of papers in the image processing and collaborative filtering areas that all define various notions of relevance, interestingness, salience, or novelty. A specific innovative technique might be patentable, but not the general idea of computing how interesting an image or media object is to a person or set of people.

Of course, to Flickr’s credit, I’m not sure whether Flickr’s patent is so broad as to be too broad. It does, after all, enumerate certain steps in its method that don’t necessarily need to be steps in other methods of determining “interestingness”, so I don’t think it really goes so far as to patent the general idea of computing or figuring out how interesting a media object is. If it were that broad, well, you might be in violation right now. That is, if you find this entry interesting. Or at least more or less interesting than some other entry…