willy wonka’s ip policy

Ran across the article “What’s Good for Willy Wonka is Good for America” while reading about the sad demise of a company called Miller & Kreisel, which I like (liked?) quite a bit. Ken Kreisel referred to this article in an interview, so I thought it would be worth reading.

Not that I necessarily agree with what the article alludes to when it comes to policy for safeguarding one’s IP, but it’s nonetheless an interesting take on IP lessons to be learned from the world’s most famous (albeit fictional) chocolatier:

When it came to internal IP theft, Willy Wonka did not mess around, and others can learn from his success.

Oh, BTW, in case you’re curious, M&K were the folks who invented the concept of the subwoofer and who, if they didn’t invent surround sound, at least helped pioneer it. They’re also the folks whose speakers Lucasfilm used (until recently, I guess) in all of their studios. The story of their demise can be found relatively easily – suffice it to say, think twice before bringing your key IP over to China.

canadians – as bad as the chinese (almost)

Well, this story certainly has gotten a lot of coverage. I was quite surprised to read in Wired that quite a bit of IP is stolen in Canada. To wit:

But — surprise, surprise — IIPA also wants Canada added to the list of the most egregious violators. That’s right. Canada. According to the IIPA, Canada was responsible for $551 million in lost revenue in 2006, all of it in the business software sector (numbers from other industries were not available). That makes Canada the fourth-worst offender. See the chart here.

I was also at a very interesting speech that Graham Henderson of CRIA gave on the proliferation of counterfeit goods in Canada. Again, though I knew of some counterfeiting of goods going on here, I was a bit surprised at the numbers that were presented and also the types of counterfeiting – everything from extension cords to batteries to pharmaceuticals.

Of course, that’s one side of it. And like everything else, there are always two sides to a story. Michael Geist is quoted in the story as asserting that the IIPA is out of touch with the rest of the world in criticizing countries that have less stringent measures in place than US legislation, which he asserts to be the world’s toughest.

It’s interesting to compare this with the MPAA’s position on fair use proposals in the US, which I mentioned a bit earlier. Perhaps best described like this:

Geist on IP infringement issues in Canada: “Problem? What problem?”

The MPAA on fair use issues in the US: “Problem? What problem?”

And so it goes. <sigh>

Fair Use and the DMCA

An article in Wired News with the dramatic title of “Lawmakers Tout DMCA Killer” describes the most recent attempt to: (a) water down the protections afforded to content owners by the DMCA; or (b) ensure the preservation of fair use rights on the part of users. As usual, each side has its own rhetoric to describe what is happening, so in fairness I took the liberty of offering readers of this blog the two alternative descriptions above. The nub:

The Boucher and Doolittle bill (.pdf), called the Fair Use Act of 2007, would free consumers to circumvent digital locks on media under six special circumstances.

Librarians would be allowed to bypass DRM technology to update or preserve their collections. Journalists, researchers and educators could do the same in pursuit of their work. Everyday consumers would get to “transmit work over a home or personal network” so long as movies, music and other personal media didn’t find their way on to the internet for distribution.

And then of course on the other side:

“The suggestion that fair use and technological innovation is endangered is ignoring reality,” said MPAA spokeswoman Gayle Osterberg. “This is addressing a problem that doesn’t exist.”

Osterberg pointed to a study the U.S. Copyright Office conducts every three years to determine whether fair use is being adversely affected. “The balance that Congress built into the DMCA is working.” The danger, Osterberg said, is in attempting to “enshrine exemptions” to copyright law.

To suggest that content owners have the right to be paid for their work is, for me, a no-brainer. That being said, I wonder whether the DMCA and increasingly complex and invasive DRM schemes will ultimately backfire – sure, they protect the content, but they sure as heck are a pain in the ass – just my personal take on it. For example, I’d love to buy digital music, but having experienced the controls that iTunes imposes and suddenly having all my tracks disappear, I just don’t bother with it now. Not to mention the incredible hoops one needs to jump through to display, say, Blu-ray on a computer – at least in its original, non-downgraded resolution. Why bother with any of that at all?

I wonder whether this is, in a way, history repeating itself. I am old enough to remember the early days of software copy protection – virtually every high-end game or application used fairly sophisticated techniques (like writing non-standard tracks on floppies in between the standard tracks) in an attempt to prevent piracy. Granted, these measures have never gone away altogether, particularly for high-end software that needs dongles and the like, and recently there has been a resurgence in the levels of protection layered onto Windows. But after that initial, almost universal lockdown of software long ago, there came a period where it seemed many (if not most) software developers just stopped using such measures. At least that’s what seemed to happen. I’m not quite sure why, but I wonder if this same pattern will repeat with content rather than software. I suspect not. But hey, you never know.

In the meantime, off I go, reluctantly, in the cold, cold winter, to the nearest record shop to buy music the old-fashioned way…


Wikiality – Part III

A bit of an elaboration on a previous post on the use of Wikipedia in judgments. I cited part of a New York Times article, which had in turn quoted from a letter to the editor by Professor Kenneth Ryesky. The portion cited by the NYT article suggested that Ryesky was quite opposed to the idea, which wasn’t really the case. He was kind enough to exchange some thoughts via e-mail:

In his New York Times article of 29 January 2007, Noam Cohen quoted a sentence (the last sentence) from my Letter to the Editor published in the New York Law Journal on 18 January 2007. You obviously read Mr. Cohen’s article, but it is not clear whether you read the original Letter to the Editor from which the sentence was quoted.

Which exemplifies the point that Wikipedia, for all of its usefulness, is not a primary source of information, and therefore should be used with great care in the judicial process, just as Mr. Cohen’s article was not a primary source of information.

Contrary to the impression you may have gotten from Mr. Cohen’s New York Times article of 29 January, I am not per se against the use of Wikipedia. For the record, I myself have occasion to make use of it in my research (though I almost always go and find the primary sources to which Wikipedia directs me), and find it to be a valuable tool. But in research, as in any other activity, one must use the appropriate tool for the job; using a sledge hammer to tighten a little screw on the motherboard of my computer just won’t work.

Wikipedia and its equivalents present challenges to the legal system. I am quite confident that, after some trial and error, the legal system will acclimate itself to Wikipedia, just as it has to other text and information media innovations over the past quarter-century.

Needless to say, quite a different tone than the excerpt in the NYT article. Thanks for the clarification, Professor Ryesky.

ITAC – First Canadian Municipal Wireless Conference and Exhibition

Wow – lots happening the last week of May. Also forgot to mention previously the First Canadian Municipal Wireless Conference and Exhibition being organized by ITAC at the Direct Energy Conference Centre at the Canadian National Exhibition in Toronto, May 28-30, 2007:

Whether you live or work in a large urban municipality, a small rural town or village, the impact of wireless applications has already or will soon impact the quality of your life and the services you offer your community. If your organization engages in digital electronic services to customers, e.g., taxpayers, suppliers, emergency service providers, other levels of government, non-profit organizations and associations, you need to learn about the latest proven strategies to ensure the success of your wireless programs.

ITAC’s 1st Canadian Municipal Wireless Applications Conference and Exhibition will not only update you on the latest initiatives of Canadian Municipalities, but will provide you with real case study insights, proven strategies, commentary from leading wireless experts and techniques for deploying wireless applications in your communities. If you are currently engaged, or plan to be engaged, in a municipal wireless project, your attendance at this event is essential.

Thoughts on Quantum Computing

Interesting article in Wired News in which they interview David Deutsch, whom they refer to as the Father of Quantum Computing. He has a kind of low-key but interesting take on the recent demonstration of a real, live 16-qubit quantum computer by D-Wave, a Canadian company based out of Vancouver.

Low-key insofar as he doesn’t seem particularly enthused about the potential of quantum computers, other than perhaps their ability to simulate quantum systems and, of course, encryption:

Deutsch: It’s not anywhere near as big a revolution as, say, the internet, or the introduction of computers in the first place. The practical applications, from an ordinary consumer’s point of view, are just quantitative.

One field that will be revolutionized is cryptography. All, or nearly all, existing cryptographic systems will be rendered insecure, and even retrospectively insecure, in that messages sent today, if somebody keeps them, will be possible to decipher … with a quantum computer as soon as one is built.

Most fields won’t be revolutionized in that way.

Fortunately, the already existing technology of quantum cryptography is not only more secure than any existing classical system, but it’s invulnerable to attack by a quantum computer. Anyone who cares sufficiently much about security ought to be instituting quantum cryptography wherever it’s technically feasible.

Apart from that, as I said, mathematical operations will become easier. Algorithmic search is the most important one, I think. Computers will become a little bit faster, especially in certain applications. Simulating quantum systems will become important because quantum technology will become important generally, in the form of nanotechnology.

(my emphasis). Interesting thought about being retrospectively insecure, particularly given that spy agencies have, in the past, been bold enough to transmit encoded messages on easily accessible shortwave frequencies.
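Just to make the “retrospectively insecure” idea concrete, here is a toy sketch (in Python, with deliberately tiny, made-up numbers – nothing resembling real cryptography): a ciphertext recorded today can sit in an archive until the day the underlying key can be broken, at which point the old message simply falls out. A quantum computer running Shor’s algorithm is what would make the key-breaking step feasible for real key sizes; the trial-division factoring below just stands in for it.

```python
# Toy sketch of "retrospectively insecure" (illustrative numbers only).
# A message encrypted with textbook RSA is recorded today; years later,
# once the modulus can be factored (Shor's algorithm on a quantum computer,
# for real key sizes), the archived ciphertext can be read.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g.
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# --- "Today": a message is encrypted, and an eavesdropper quietly records it ---
p, q = 1009, 1013            # toy primes; real keys use primes hundreds of digits long
n, e = p * q, 65537
message = 424242             # the secret, encoded as an integer < n
ciphertext = pow(message, e, n)

# --- "Years later": the modulus is factored and the archive is decrypted ---
def factor(n):
    f = 2
    while f * f <= n:        # trial division stands in for Shor's algorithm
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("no factor found")

p2, q2 = factor(n)
d = modinv(e, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d, n))  # prints 424242 -- the old secret, recovered
```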

I imagine the spook shops already have their purchase orders in for quantum crypto gear (or have developed it internally already). I was a bit surprised by the statement above regarding the already existing technology of quantum cryptography. I had heard of some demos a while back, but didn’t realize that there are actually several companies offering quantum cryptography products.

Virtual Diplomacy

Short one, as it’s getting late. Interesting piece on how Sweden is setting up an embassy in Second Life. As most of you know, Second Life is an MMORPG of sorts – a virtual world where people control computer-generated representations of themselves.

That being said, it’s somewhat less exciting than it sounds at first blush, as the new virtual Swedish embassy will only provide information on visas, immigration and the like. Perhaps not surprising – I mean, it’s not like you should be able to get a real-world passport through your virtual character. Nor, God forbid, do I hope they’re introducing the bureaucracy of passports for travel through virtual countries…

Wikiality – Part II

There was some traffic on the ULC E-Comm Listserv (on which I surreptitiously lurk – if you don’t know what it is and are interested in e-commerce law, it’s highly recommended) about courts citing Wikipedia, with a couple of links to other material, including an article on Slaw as well as an article in the New York Times about the concerns raised by some regarding court decisions that cite Wikipedia. Some excerpts and notes to expand on my previous post:

From the con side:

In a recent letter to The New York Law Journal, Kenneth H. Ryesky, a tax lawyer who teaches at Queens College and Yeshiva University, took exception to the practice, writing that “citation of an inherently unstable source such as Wikipedia can undermine the foundation not only of the judicial opinion in which Wikipedia is cited, but of the future briefs and judicial opinions which in turn use that judicial opinion as authority.”

This raises a good point that I didn’t mention in my previous post. I certainly think Wikipedia is fine for noting certain things, but I really, definitely, positively do not think it should be cited as judicial authority. In my previous post I thought this was so self-evident that I didn’t bother mentioning it, but the quote above illustrates that it might not be all that clear.

Court decisions, as most of you know, are written by judges who apply the law to the facts of the case, along with any other facts and information that may have a bearing on it. The sources of law include statutes and, of course, previously decided cases, which enunciate rules or principles that a court either applies, distinguishes on the facts as being inapplicable or, in some cases, overturns (for any number of reasons). Court decisions are not, of course, published on Wikipedia and are not subject to its collective editing process, nor should they be. Rather, references to Wikipedia in court cases are there to provide additional or ancillary context or facts; they do not and should not derogate from the principles of law set forth in court decisions. So, contrary to what Mr. Ryesky, Esq., indicates above, I don’t think referring to Wikipedia for context or facts will suddenly undermine the foundations of law, since the legal reasoning itself still will, and must, be based on sources of law – not facts and not context.

Hence the following end to the NYT article:

Stephen Gillers, a professor at New York University Law School, saw this as crucial: “The most critical fact is public acceptance, including the litigants,” he said. “A judge should not use Wikipedia when the public is not prepared to accept it as authority.”

For now, Professor Gillers said, Wikipedia is best used for “soft facts” that are not central to the reasoning of a decision. All of which leads to the question, if a fact isn’t central to a judge’s ruling, why include it?

“Because you want your opinion to be readable,” said Professor Gillers. “You want to apply context. Judges will try to set the stage. There are background facts. You don’t have to include them. They are not determinative. But they help the reader appreciate the context.”

He added, “The higher the court the more you want to do it. Why do judges cite Shakespeare or Kafka?”

Exactly.

The Virtues and Evils of Open Source

Yes, I know, I’ve been behind lately. A ton of very interesting things to catch up on. But I’d like to put in one quick note about open source code. I recently came across an article, written last year by a lawyer, generally advising development companies not to use open source. I don’t quite recall where it was (if I did I’d link to it), but I do remember it being quite clear that using open source is A Bad Thing, to be avoided altogether – not just handled with care, but treated as one would radioactive waste.

With respect, I don’t quite agree. I certainly advise my clients to exercise a great deal of caution in using open source code, particularly the GPL variety, and very particularly if they want to keep some or all of their own secret, proprietary code secret and proprietary. That being said, I do have many, many clients who have used open source code to great advantage in various ways. Some have simply used existing open source code to avoid reinventing the wheel (and to save on costs), while taking care to keep viral elements out of their proprietary code. Others have been more aggressive and have intentionally built their business model around open source, making their very own code, or parts of it, either open source or subject to a dual-licensing model. As the Red Hats, JBosses, Sleepycats, MySQLs, etc. of the world have demonstrated, you can go open source and still have a pretty viable business. And, of course, there are the “old world” companies like IBM that have decided to go open source in some limited ways – e.g. IBM’s DB2 Express-C offering.

Of course, this is not to suggest that anyone should throw caution to the wind and just start pulling stuff down from SourceForge and whacking it into their product. Using open source definitely requires some planning ahead and consideration of what the business model and value proposition of your business will be. Optimally, enlist the help of a lawyer who’s familiar with open source licenses to discuss what you plan to do and the packages you plan to use. Or, if that’s not feasible, at least try to read the applicable licenses yourself and make sure you comply with them. If you think no one will notice, or that no one will actually sue you, pay a visit to the GPL Violations site and reconsider – and keep in mind the questions that will be asked of you when the due diligence starts on your next round of financing or, even worse, your (aborted) exit event. Can badly managed open source usage (and I emphasize badly managed, not simply open source usage) kill a deal? Definitely.

In short – I don’t think open source is necessarily a bad thing. In fact, it can be a pretty good thing, not just in the social-good sense and all that, but also as a business. But it needs to be used with its terms of use in mind, and in a manner consistent with the strategy you plan to take.

If there’s one thing I’d recommend, it would be for shops to make absolutely sure they have a disciplined approach to tracking where code comes from, the terms under which it’s being used, and why it’s being used. That applies not only to open source stuff, but also, for example, to your programmers taking neat snippets of code from Dr. Dobb’s or elsewhere, or coming across a nice little script somewhere on the Web and saying “Gee, that’s neat, let’s use it in our product”. A rough sketch of what I mean follows below.
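By way of illustration only, here is a minimal sketch of the kind of inventory I have in mind – the component names, fields and script are entirely made up, and any real shop would want something tailored to its own process:

```python
# Minimal sketch of a third-party code inventory (illustrative only).
# Every entry records where the code came from, its license, where it is
# used, and why -- the questions that come up in due diligence.
third_party_code = [
    {
        "component": "example-config-parser",   # hypothetical library name
        "source": "downloaded from SourceForge",
        "license": "GPL v2",
        "where_used": "internal build tools only; not linked into the shipped product",
        "why": "avoids reinventing the config-file parser",
    },
    {
        "component": "sorting snippet from a magazine article",
        "source": "Dr. Dobb's; issue and page noted in the file header",
        "license": "unclear -- flagged for review",
        "where_used": "src/util/fast_sort",
        "why": "in-place sort for large record sets",
    },
]

# Quick sanity check before a release or a round of financing:
for entry in third_party_code:
    if "unclear" in entry["license"].lower():
        print("Needs license review:", entry["component"], "->", entry["where_used"])
```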

Anyway, if I remember where the article was I’ll update this to include a link.

Wikiality

Interesting post on the Wellington Financial Blog about “Wikiality” – the practice of taking stuff in Wikipedia as the truth, or, to quote: “a reality where, if enough people agree with a notion, it becomes the truth.”

JN notes that Wikipedia has been cited by the courts, and this is reason for concern. A snippet:

The practice poses two problems:

  1. The references may be inaccurate; and
  2. Even if accurate, the references are subject to change at any point in the future, making it difficult for any future decisions to refer back to the original or understand the context in which it was made.

Given recent reports of Microsoft offering to pay individuals to make changes to certain Wikipedia articles in which they have a vested interest, the credibility of the site as a definitive reference source again comes into question.

A few of my colleagues at the firm also expressed bemusement when a recent case in Ontario (don’t have the citation, sorry) cited Wikipedia.

I am quite a big fan of Wikipedia. It is, I think, a rather useful and handy tool to refer to from time to time. Do I take it as gospel? No. Would I use it if I were trying to concoct an antidote for a poison that was about to kill me? Probably not. Would I cite it in a legal research paper? Possibly. In fact, quite likely.

Although Wikipedia is by no means without its weaknesses, it also has its strengths. Sure, there is a possibility of inaccuracy. But then again, isn’t something less likely to have inaccuracies if it is reviewed (and edited) by more eyes (and more minds)? Isn’t it more likely that if there is a dispute about what is and isn’t correct, it will come to light, just like the Microsoft incident?

And what source, can it be said, is free of inaccuracies? Certainly not The New York Times. Although the Gray Lady is quick to point out that it was “deceived” by an errant reporter, it is less quick to reflect on the fact that it published fabricated stories. That of course is the clearest example, but history is rife with examples of inaccurate or misleading stories in the press. Less clear, of course, is media bias. And one only needs to refer to Manufacturing Consent. I don’t necessarily agree with all that book has to offer, but it certainly provides some food for thought.

What about scientific publications? Hmmm. Well. Again, truth is quite often relative. The clearest examples are, of course, outright fabrications. Dr. Hwang Woo-suk’s paper on producing the first cloned stem cell line was considered the truth for several years, until he was discredited. And more generally speaking, is it not true that, in the world of science, what is considered to be the truth is what most scientists believe to be true? Is that not the system of peer review? A great read on this topic is The Structure of Scientific Revolutions (as an aside, it’s also the book that introduced the phrase “paradigm shift” into popular parlance). I won’t bore you with the details, but suffice it to say that, at the end of the day, science, at least in concept, may not be that far from wikiality.

My point isn’t necessarily to skewer existing sources of “truth” but rather to point out that such sources aren’t necessarily more reliable or accurate, or less fallible, than something like Wikipedia.

And as for things changing? Make a copy.