the internet: how not to learn to commit crimes

A story in the Daily Record. The phrase “the thing speaks for itself” (which is one of those handy Latin phrases I learned in law school but almost never use, except of course in blog posts – res ipsa loquitur, for you Latinphiles out there…) seems to be appropriate for this:

At exactly 5:45:34 on April 18, 2004 a computer taken from the office of the attorney of Melanie McGuire, did a search on the words “How To Commit Murder.”

That same day searches on Google and MSN search engines, were conducted on such topics as ‘instant poisons,’ ‘undetectable poisons,’ ‘fatal digoxin doses,’ and gun laws in New Jersey and Pennsylvania.

Ten days later, according to allegations by the state of New Jersey, McGuire murdered her husband, William T. McGuire, at their Woodbridge apartment, using a gun obtained in Pennsylvania, one day after obtaining a prescription for a sedative known as the “date rape” drug.

As a married man, I also have to wonder what exactly it is about divorce that is really so bad that people resort to the apparently preferable alternative of brutally murdering their spouses (as I delicately knock on wood…).

Via Slashdot.

silly lawsuit of the week

OK. Short version of the story in InformationWeek: Woman puts up a website. She puts a “webwrap” agreement at the bottom – basically a contract saying that if you use the site, you agree to its terms. There is still some question as to whether such a mechanism is binding, but anyway…

So the Internet Archive of course comes along and indexes her site. Which apparently is a violation of the webwrap. So she sues, representing herself, I believe. The court throws out everything on a preliminary motion by IA except for the breach of contract claim.

InformationWeek observes that “Her suit asserts that the Internet Archive’s programmatic visitation of her site constitutes acceptance of her terms, despite the obvious inability of a Web crawler to understand those terms and the absence of a robots.txt file to warn crawlers away.” (my emphasis). They then conclude with this statement:

If a notice such as Shell’s is ultimately construed to represent just such a “meaningful opportunity” to an illiterate computer, the opt-out era on the Net may have to change. Sites that rely on automated content gathering like the Internet Archive, not to mention Google, will have to convince publishers to opt in before indexing or otherwise capturing their content. Either that or they’ll have to teach their Web spiders how to read contracts.

(my emphasis).

They already have – sort of. It’s called robots.txt – the thing referred to above. For those of you who haven’t heard of it, it’s a little file that you put at the top level of your site and which is the equivalent of a “no solicitation” sign on your door. It’s been around for at least a decade (probably longer), and most (if not all) search engine crawlers respect it.
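To give a purely illustrative example of what such a file looks like (the user-agent names are simply whatever each crawler chooses to call itself – ia_archiver is the name the Internet Archive’s crawler has historically gone by):

User-agent: ia_archiver
Disallow: /

User-agent: *
Disallow: /private/

The first two lines tell the Internet Archive’s crawler to stay out of the entire site; the last two tell every other crawler to keep out of the /private/ directory only.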

From the Internet Archive’s FAQ:

How can I remove my site’s pages from the Wayback Machine?

The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection. By placing a simple robots.txt file on your Web server, you can exclude your site from being crawled as well as exclude any historical pages from the Wayback Machine.

Internet Archive uses the exclusion policy intended for use by both academic and non-academic digital repositories and archivists. See our exclusion policy.

You can find exclusion directions at exclude.php. If you cannot place the robots.txt file, opt not to, or have further questions, email us at info at archive dot org.

My guess is that we’ll see these kinds of standardized, machine-readable methods of communication – for privacy policies and the like – used more and more. The question is, will people be required to respect them, or will they simply disregard them and act dumb?

Fair Use and the DMCA

An article in Wired News with the dramatic title of “Lawmakers Tout DMCA Killer” describes the most recent attempt to either (a) water down the protections afforded to content owners by the DMCA, or (b) ensure the preservation of fair use rights on the part of users. As usual, each side has its own rhetoric to describe what is happening, so in fairness I took the liberty of offering readers of this blog the two alternative descriptions above. The nub:

The Boucher and Doolittle bill (.pdf), called the Fair Use Act of 2007, would free consumers to circumvent digital locks on media under six special circumstances.

Librarians would be allowed to bypass DRM technology to update or preserve their collections. Journalists, researchers and educators could do the same in pursuit of their work. Everyday consumers would get to “transmit work over a home or personal network” so long as movies, music and other personal media didn’t find their way on to the internet for distribution.

And then of course on the other side:

“The suggestion that fair use and technological innovation is endangered is ignoring reality,” said MPAA spokeswoman Gayle Osterberg. “This is addressing a problem that doesn’t exist.”

Osterberg pointed to a study the U.S. Copyright Office conducts every three years to determine whether fair use is being adversely affected. “The balance that Congress built into the DMCA is working.” The danger, Osterberg said, is in attempting to “enshrine exemptions” to copyright law.

To suggest that content owners have the right to be paid for their work is, for me, a no-brainer. That being said, I wonder whether the DMCA and increasingly complex and invasive DRM schemes will ultimately backfire – sure, they protect the content, but they sure as heck are a pain in the ass – just my personal take on it. For example, I’d love to buy digital music, but having experienced the controls that iTunes imposes and suddenly having all my tracks disappear, I just don’t bother with it now. Not to mention the incredible hoops one needs to jump through to display, say, Blu-ray on a computer – at least in its original, non-downgraded resolution. Why bother with it at all?

I wonder whether this is, in a way, history repeating itself. I am old enough to remember the early days of software protection – virtually every high-end game or application used fairly sophisticated techniques (like writing non-standard tracks on floppies in between standard tracks) to try to prevent piracy. Granted, such measures have never gone away altogether, particularly for super high-end software that needs dongles and the like, and of course recently there has been a resurgence in the levels of protection layered onto Windows. But after that initial, almost universal lockdown of software long ago, there came a period where it seemed many (if not most) software developers simply stopped using such measures. At least that’s what seemed to happen. I’m not quite sure why, but I wonder if the same pattern will repeat itself with content rather than software. I suspect not. But hey, you never know.

In the meantime, off I go, reluctantly, into the cold, cold winter, to the nearest record shop to buy music the old-fashioned way…


ALPR is….

short for Automatic License Plate Recognition. Sometimes I find mention of the most interesting things in the most unexpected places – like this brief article I saw on bookofjoe about how police in British Columbia are currently using a system that can easily and quickly scan license plate numbers as they drive along. I’m surprised I didn’t see it anywhere else, oddly enough, particularly given the implications for privacy, etc. Not that there necessarily are any – after all, license plates are there precisely so that they can be seen by the public at large and by police officers. That being said, I find it interesting how the application of new technology (optical recognition) to old technology (license plates) significantly alters how the old technology is perceived.

Sure, it’s one thing to have police on the lookout for a particular license plate on the car of a known felon who is escaping, but it seems to be quite another for a police car to scan and process thousands upon thousands of license plates while driving around the city.

Wikiality – Part III

A bit of an elaboration on a previous post on the use of Wikipedia in judgements. I cited part of a New York Times article, which had in turn quoted from a letter to the editor by Professor Kenneth Ryesky. The portion cited by the NYT article suggested that Ryesky was quite opposed to the idea, which wasn’t really the case. He was kind enough to exchange some thoughts via e-mail:

In his New York Times article of 29 January 2007, Noam Cohen quoted a sentence (the last sentence) from my Letter to the Editor published in the New York Law Journal on 18 January 2007. You obviously read Mr. Cohen’s article, but it is not clear whether you read the original Letter to the Editor from which the sentence was quoted.

Which exemplifies the point that Wikipedia, for all of its usefulness, is not a primary source of information, and therefore should be used with great care in the judicial process, just as Mr. Cohen’s article was not a primary source of information.

Contrary to the impression you may have gotten from Mr. Cohen’s New York Times article of 29 January, I am not per se against the use of Wikipedia. For the record, I myself have occasion to make use of it in my research (though I almost always go and find the primary sources to which Wikipedia directs me), and find it to be a valuable tool. But in research, as in any other activity, one must use the appropriate tool for the job; using a sledge hammer to tighten a little screw on the motherboard of my computer just won’t work.

Wikipedia and its equivalents present challenges to the legal system. I am quite confident that, after some trial and error, the legal system will acclimate itself to Wikipedia, just as it has to other text and information media innovations over the past quarter-century.

Needless to say, quite a different tone than the excerpt in the NYT article. Thanks for the clarification, Professor Ryesky.

ITAC – First Canadian Municipal Wireless Conference and Exhibition

Wow – lots happening the last week of May. I also forgot to mention the First Canadian Municipal Wireless Conference and Exhibition, organized by ITAC at the Direct Energy Conference Centre at the Canadian National Exhibition in Toronto, May 28-30, 2007:

Whether you live or work in a large urban municipality, a small rural town or village, the impact of wireless applications has already or will soon impact the quality of your life and the services you offer your community. If your organization engages in digital electronic services to customers, e.g., taxpayers, suppliers, emergency service providers, other levels of government, non-profit organizations and associations, you need to learn about the latest proven strategies to ensure the success of your wireless programs.

ITAC’s 1st Canadian Municipal Wireless Applications Conference and Exhibition will not only update you on the latest initiatives of Canadian Municipalities, but will provide you with real case study insights, proven strategies, commentary from leading wireless experts and techniques for deploying wireless applications in your communities. If you are currently engaged, or plan to be engaged, in a municipal wireless project, your attendance at this event is essential.

Thoughts on Quantum Computing

Interesting article in Wired News interviewing David Deutsch, whom they refer to as the Father of Quantum Computing. He has a kind of low-key but interesting take on the recent demonstration of a real, live 16-qubit quantum computer by D-Wave, a Canadian company based out of Vancouver.

Low-key insofar as he doesn’t seem particularly enthused about the potential of quantum computers, other than perhaps their ability to simulate quantum systems and, of course, their implications for encryption:

Deutsch: It’s not anywhere near as big a revolution as, say, the internet, or the introduction of computers in the first place. The practical application, from a ordinary consumer’s point of view, are just quantitative.

One field that will be revolutionized is cryptography. All, or nearly all, existing cryptographic systems will be rendered insecure, and even retrospectively insecure, in that messages sent today, if somebody keeps them, will be possible to decipher … with a quantum computer as soon as one is built.

Most fields won’t be revolutionized in that way.

Fortunately, the already existing technology of quantum cryptography is not only more secure than any existing classical system, but it’s invulnerable to attack by a quantum computer. Anyone who cares sufficiently much about security ought to be instituting quantum cryptography wherever it’s technically feasible.

Apart from that, as I said, mathematical operations will become easier. Algorithmic search is the most important one, I think. Computers will become a little bit faster, especially in certain applications. Simulating quantum systems will become important because quantum technology will become important generally, in the form of nanotechnology.

(my emphasis). Interesting thought about being retrospectively insecure, particularly given that spy agencies have, in the past, been bold enough to transmit encoded messages on easily accessible shortwave frequencies.

I imagine the spook shops already have their purchase orders in for quantum crypto gear (or have already developed it internally). I was a bit surprised by the statement above regarding the already existing technology of quantum cryptography. I had heard of some demos a while back, but didn’t realize that there are actually several companies offering quantum cryptography products.

Virtual Diplomacy

A short one, as it’s getting late. Interesting piece on how Sweden is setting up an embassy in Second Life. As most of you know, Second Life is an MMORPG – a virtual world of sorts where people control computer-generated representations of themselves.

That being said, it’s somewhat less exciting than it appears at first blush, as the new virtual Swedish embassy will only provide information on visas, immigration, etc. Perhaps not surprising – I mean, it’s not like you should be able to get a real-world passport through your virtual character. Nor, God forbid, do I hope they’re introducing the bureaucracy of passports for travel through virtual countries…

Wikiality – Part II

There was some traffic on the ULC E-Comm Listserv (on which I surreptitiously lurk – and which, if you don’t know what it is and are interested in e-commerce law, I highly recommend) about courts citing Wikipedia, with a couple of links to some other stuff, including an article on Slaw as well as an article in the New York Times about the concerns raised by some regarding court decisions citing Wikipedia. Some excerpts and notes to expand on my previous post:

From the con side:

In a recent letter to The New York Law Journal, Kenneth H. Ryesky, a tax lawyer who teaches at Queens College and Yeshiva University, took exception to the practice, writing that “citation of an inherently unstable source such as Wikipedia can undermine the foundation not only of the judicial opinion in which Wikipedia is cited, but of the future briefs and judicial opinions which in turn use that judicial opinion as authority.”

This raises a good point that I didn’t mention in my previous post. I certainly think Wikipedia is fine for noting certain things, but I really, definitely, positively do not think it should be cited as judicial authority. In my previous article I thought this was so self-evident that I didn’t bother mentioning it, but the quote above illustrates that it might not be all that clear. Court decisions, as most of you know, are written by judges who take into account the facts and apply the law to those facts, along with other facts and information that may have a bearing on the case. The sources of law include statutes and, of course, previously decided cases, which enunciate rules or principles that the court either applies, distinguishes on the facts as being inapplicable, or, in some cases, overturns (for any number of reasons). Court decisions are not, of course, published on Wikipedia and are not subject to its collective editing process, nor should they be. Rather, references to Wikipedia in court cases serve to provide additional or ancillary context or facts. They do not and should not derogate from principles of law set forth in court decisions. So, contrary to what Mr. Ryesky, Esq., suggests above, I don’t think referring to Wikipedia for context or facts will suddenly undermine the foundations of law, since the legal reasoning itself still will and must be based on sources of law, not facts and not context.

Hence the following end to the NYT article:

Stephen Gillers, a professor at New York University Law School, saw this as crucial: “The most critical fact is public acceptance, including the litigants,” he said. “A judge should not use Wikipedia when the public is not prepared to accept it as authority.”

For now, Professor Gillers said, Wikipedia is best used for “soft facts” that are not central to the reasoning of a decision. All of which leads to the question, if a fact isn’t central to a judge’s ruling, why include it?

“Because you want your opinion to be readable,” said Professor Gillers. “You want to apply context. Judges will try to set the stage. There are background facts. You don’t have to include them. They are not determinative. But they help the reader appreciate the context.”

He added, “The higher the court the more you want to do it. Why do judges cite Shakespeare or Kafka?”

Exactly.

The Virtues and Evils of Open Source

Yes, I know, I’ve been behind lately. A ton of very interesting things to catch up on. But I’d like to put in one quick note about open source code. I recently came across an article, written last year by a lawyer, generally advising development companies not to use open source. I don’t quite recall where it was (if I did, I’d link to it), but I do remember it being quite clear in stating that using open source is A Bad Thing and should be avoided altogether – not just used with care, but treated as one would treat radioactive waste.

With respect, I don’t quite agree. I certainly advise my clients to take a great deal of care in using open source code, particularly the GPL variety, and very particularly if they want to keep some or all of their own proprietary code secret and proprietary. That being said, I have many, many clients who have used open source code to great advantage in various ways. Some have simply used existing open source code to avoid reinventing the wheel (and to save on costs), while taking care to keep viral elements out of their proprietary code. Others have been more aggressive and have intentionally made open source the basis of their business model, making their own code, or parts of it, either open source or subject to a dual-licensing model. As the Red Hats, JBosses, Sleepycats and MySQLs of the world have demonstrated, you can go open source and still have a pretty viable business. And, of course, there are the “old world” companies like IBM that have decided to go open source in some limited ways – e.g. IBM’s DB2 Express-C offering.

Of course, this is not to suggest that anyone should throw caution to the wind and just start pulling down stuff from SourceForge and whacking it into their product. Use of open source definitely requires some planning ahead and consideration of what the business model and value proposition of your business will be. Optimally, enlist the help of a lawyer who is familiar with open source licenses to discuss what you plan to do and the packages you plan to use. Or, if that’s not feasible, at least try to read the applicable licenses yourself and ensure you comply with them. If you think that no one will notice, or that no one will actually sue you, you may want to pay a visit to the GPL Violations site and reconsider – quite apart from the questions that will be asked of you when the due diligence starts on your next round of financing or, even worse, your (aborted) exit event. Can badly managed open source usage (and I emphasize badly managed, not simply open source usage) kill a deal? Definitely.

In short – I don’t think open source is necessarily a bad thing. In fact, it can be a pretty good thing, not just in the social-good sense and all that, but also as a business. But it needs to be used in a way that takes its license terms into account and is consistent with the strategy you plan to take.

If there’s one thing I’d recommend, it would be for shops to make absolutely sure they have a disciplined approach to tracking where code comes from, the terms under which it’s being used and why it’s being used. That applies not only to open source code, but also, for example, to your programmers taking neat snippets of code from Dr. Dobb’s or elsewhere, or coming across a nice little script somewhere on the Web and saying “Gee, that’s neat, let’s use it in our product.”
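By way of illustration only – the fields and entries below are entirely hypothetical, not any particular standard – even a dead-simple manifest kept alongside the code base goes a long way:

Component: libsomething 1.2.3
Obtained from: http://example.org/libsomething (hypothetical)
License: GPL v2
Where used: internal reporting tool, not distributed to customers
Why: avoids reinventing the wheel on report generation
Reviewed by: J. Developer, February 2007

Keep something like that current for every piece of third-party code, and the answers to the due diligence questions mentioned above pretty much write themselves.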

Anyway, if I remember where the article was I’ll update this to include a link.