the internet: how not to learn to commit crimes

A story in the Daily Record. The phrase “the thing speaks for itself” (which is one of those handy Latin phrases I learned in law school but almost never use, except of course in blog posts – res ipsa loquitur, for you Latinphiles out there…) seems to be appropriate for this:

At exactly 5:45:34 on April 18, 2004, a computer taken from the office of the attorney of Melanie McGuire did a search on the words “How To Commit Murder.”

That same day, searches on Google and MSN search engines were conducted on such topics as “instant poisons,” “undetectable poisons,” “fatal digoxin doses,” and gun laws in New Jersey and Pennsylvania.

Ten days later, according to allegations by the state of New Jersey, McGuire murdered her husband, William T. McGuire, at their Woodbridge apartment, using a gun obtained in Pennsylvania, one day after obtaining a prescription for a sedative known as the “date rape” drug.

As a married man, it makes me wonder what exactly it is about divorce that is really so bad that people resort to the apparently preferable alternative of brutally murdering their spouses (as I delicately knock on wood…).

Via Slashdot.

silly lawsuit of the week

OK. Short version of the story in InformationWeek: Woman puts up a website. She puts a “webwrap” agreement at the bottom – i.e. basically a contract that says if you use the site then you agree to the contract. Still some question as to whether such a mechanism is binding, but anyway…

So the Internet Archive of course comes along and indexes her site – which apparently is a violation of the webwrap. So she sues, representing herself, I believe. The court throws out everything on a preliminary motion by IA except for the breach of contract claim.

InformationWeek observes that “Her suit asserts that the Internet Archive’s programmatic visitation of her site constitutes acceptance of her terms, despite the obvious inability of a Web crawler to understand those terms and the absence of a robots.txt file to warn crawlers away.” (my emphasis). They then conclude with this statement:

If a notice such as Shell’s is ultimately construed to represent just such a “meaningful opportunity” to an illiterate computer, the opt-out era on the Net may have to change. Sites that rely on automated content gathering like the Internet Archive, not to mention Google, will have to convince publishers to opt in before indexing or otherwise capturing their content. Either that or they’ll have to teach their Web spiders how to read contracts.

(my emphasis).

They already have – sort of. It’s called robots.txt – the thing referred to above. For those of you who haven’t heard of it, it’s a little file that you put at the top level of your site, and it’s the equivalent of a “no solicitation” sign on your door. It’s been around for at least a decade (probably longer), and most (if not all) search engines respect it.
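
To give a rough idea, here is a minimal sketch of such a file. (Two assumptions on my part: “ia_archiver” is the user-agent token the Internet Archive’s crawler has historically honored, and /private/ is just an illustrative directory.)

    # robots.txt – placed at the top level of the site
    # Tell the Internet Archive's crawler to stay out entirely:
    User-agent: ia_archiver
    Disallow: /

    # Tell all other well-behaved crawlers to skip one directory:
    User-agent: *
    Disallow: /private/

A few lines, and any crawler that follows the convention knows to stay away – no contract-reading required.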

From the Internet Archive’s FAQ:

How can I remove my site’s pages from the Wayback Machine?

The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection. By placing a simple robots.txt file on your Web server, you can exclude your site from being crawled as well as exclude any historical pages from the Wayback Machine.

Internet Archive uses the exclusion policy intended for use by both academic and non-academic digital repositories and archivists. See our exclusion policy.

You can find exclusion directions at exclude.php. If you cannot place the robots.txt file, opt not to, or have further questions, email us at info at archive dot org.
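
As for teaching Web spiders to “read” these signs, that part is nearly trivial – a polite crawler checks robots.txt before fetching anything. A hedged sketch using Python’s standard-library robot-parsing module (example.com is a placeholder, not anyone’s actual site):

    # Minimal sketch of a well-behaved crawler consulting robots.txt
    # before fetching a page. Standard library only; example.com is
    # a placeholder domain.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    # May the crawler identifying itself as "ia_archiver" fetch this page?
    if rp.can_fetch("ia_archiver", "http://example.com/some/page.html"):
        print("allowed - go ahead and index")
    else:
        print("disallowed - skip this site")

Contrast that with parsing – let alone understanding – free-form legal terms at the bottom of a page, which is precisely why a standard convention exists.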

One could imagine standardized, machine-readable methods of communication like this for more things – privacy policies, etc. The question is, will people be required to respect them, or will they simply disregard them and act dumb?

Fair Use and the DMCA

An article in Wired News with the dramatic title of “Lawmakers Tout DMCA Killer” describes the most recent attempt to either: (a) water down the protections afforded to content owners by the DMCA; or (b) ensure the preservation of fair use rights on the part of users. As usual, each side has its own rhetoric to describe what is happening, so in fairness I took the liberty of offering readers of this blog the two alternative descriptions above. The nub:

The Boucher and Doolittle bill (.pdf), called the Fair Use Act of 2007, would free consumers to circumvent digital locks on media under six special circumstances.

Librarians would be allowed to bypass DRM technology to update or preserve their collections. Journalists, researchers and educators could do the same in pursuit of their work. Everyday consumers would get to “transmit work over a home or personal network” so long as movies, music and other personal media didn’t find their way on to the internet for distribution.

And then of course on the other side:

“The suggestion that fair use and technological innovation is endangered is ignoring reality,” said MPAA spokeswoman Gayle Osterberg. “This is addressing a problem that doesn’t exist.”

Osterberg pointed to a study the U.S. Copyright Office conducts every three years to determine whether fair use is being adversely affected. “The balance that Congress built into the DMCA is working.” The danger, Osterberg said, is in attempting to “enshrine exemptions” to copyright law.

To suggest that content owners have the right to be paid for their work is, for me, a no-brainer. That being said, I wonder whether the DMCA and increasingly complex and invasive DRM schemes will ultimately backfire – sure, they protect the content, but they sure as heck are a pain in the ass. Just my personal take on it. For example, I’d love to buy digital music, but having experienced the controls that iTunes imposes and suddenly having all my tracks disappear, I just don’t bother with it now. Not to mention the incredible hoops one needs to jump through to display, say, Blu-ray on a computer – at least in its original, non-downgraded resolution. Why bother with any of it?

I wonder whether this is, in a way, history repeating itself. I am old enough to remember the early days of software protection – virtually every high-end game or application used fairly sophisticated techniques (like writing non-standard tracks on floppies in between standard tracks) to try to prevent piracy. Granted, these measures never went away altogether, particularly for high-end software that needs dongles and the like, and recently there has been a resurgence in the levels of protection layered into Windows. But after the initial, almost universal lockdown of software long ago, there came a period where it seemed many (if not most) software developers simply stopped using such measures. At least that’s what seemed to happen. I’m not quite sure why, but I wonder whether the same pattern will repeat with content rather than software. I suspect not. But hey, you never know.

In the meantime, off I go, reluctantly, in the cold, cold winter, to the nearest record shop to buy music the old-fashioned way…


Thoughts on Quantum Computing

Interesting article in Wired News interviewing David Deutsch, whom they refer to as the Father of Quantum Computing. He has a low-key but interesting take on the recent demonstration of a real, live 16-qubit quantum computer by D-Wave, a Canadian company based in Vancouver.

Low-key insofar as he doesn’t seem particularly enthused about the potential of quantum computers, other than perhaps their ability to simulate quantum systems and, of course, their implications for encryption:

Deutsch: It’s not anywhere near as big a revolution as, say, the internet, or the introduction of computers in the first place. The practical applications, from an ordinary consumer’s point of view, are just quantitative.

One field that will be revolutionized is cryptography. All, or nearly all, existing cryptographic systems will be rendered insecure, and even retrospectively insecure, in that messages sent today, if somebody keeps them, will be possible to decipher … with a quantum computer as soon as one is built.

Most fields won’t be revolutionized in that way.

Fortunately, the already existing technology of quantum cryptography is not only more secure than any existing classical system, but it’s invulnerable to attack by a quantum computer. Anyone who cares sufficiently much about security ought to be instituting quantum cryptography wherever it’s technically feasible.

Apart from that, as I said, mathematical operations will become easier. Algorithmic search is the most important one, I think. Computers will become a little bit faster, especially in certain applications. Simulating quantum systems will become important because quantum technology will become important generally, in the form of nanotechnology.

(my emphasis). An interesting thought, that messages could be retrospectively insecure – particularly given that spy agencies have, in the past, been sufficiently bold to transmit encoded messages on easily accessible shortwave frequencies.

I imagine the spook shops already have their purchase orders in for quantum crypto gear (or have developed it internally already). I was a bit surprised by the statement above regarding the already existing technology of quantum cryptography. I had heard of some demos a while back, but didn’t realize that there are actually several companies offering quantum cryptography products.

Rapleaf

Interesting article on Techcrunch about a company called Rapleaf. The nub:

Rapleaf will allow anyone to leave feedback for anyone they’ve transacted with. Others can use this feedback to help them determine if they are doing business with someone who’s likely to engage in fraud. Rapleaf is eBay feedback for the rest of the web, and the offline world.

Very interesting idea. Of course, people have tried various solutions to address the curse (and perhaps sometimes blessing) that, on the internet, no one knows if you’re a dog. I always thought encryption and the whole public key infrastructure thing would go somewhere – you know, with PGP and all being widely used, the various bodies around the world setting up certification authorities, related legislation, and so on. That could have solved a lot of problems, including, amongst others, spam. And of course fraud. Surprisingly enough, it never got off the ground all that well, and in its stead we find reputational markers such as this.

Interesting how the internet has enabled the scaling of these sorts of reputational mechanisms. Where it was once a couple of neighbours chatting about the best butcher, it’s now millions of folks spread across dozens of countries offering their opinions on thousands (or more) of vendors. Talk about network effects.
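
Mechanically, of course, the eBay-style marker is simple. As a purely hypothetical sketch (this is not Rapleaf’s actual algorithm – just the classic “percent positive” computation over feedback records):

    # Hypothetical sketch of an eBay-style reputation score.
    # Not Rapleaf's actual algorithm - just the classic
    # "percent positive" computation over feedback records.
    from dataclasses import dataclass

    @dataclass
    class Feedback:
        rater: str
        rating: int  # +1 positive, 0 neutral, -1 negative

    def percent_positive(history: list[Feedback]) -> float:
        """Share of non-neutral feedback that is positive."""
        pos = sum(1 for f in history if f.rating > 0)
        neg = sum(1 for f in history if f.rating < 0)
        if pos + neg == 0:
            return 0.0  # no meaningful feedback yet
        return 100.0 * pos / (pos + neg)

    history = [Feedback("alice", +1), Feedback("bob", +1), Feedback("carol", -1)]
    print(f"{percent_positive(history):.1f}% positive")  # prints: 66.7% positive

The hard part was never the arithmetic – it’s getting enough people to leave honest feedback in the first place, which is exactly where the network effects come in.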