when not to use technology

I came across a link to a story where a South African company was using homing pigeons to transport data because it was faster than their broadband connection:

Workers will attach a memory card containing the data to the bird’s leg and let nature take its course.

Experts believe the specially-trained 11-month-old pigeon will complete the flight in just 45 minutes – and at a fraction of the cost.

To send four gigabytes of encrypted information takes around six hours on a good day. If we get bad weather and the service goes down then it can take up to two days to get through.

If you’re curious, doing the math on that works out to roughly 1.5 Mbps for the broadband connection and, assuming a 4GB card is used, just under 12 Mbps for the pigeon.
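For the curious, here’s a quick back-of-the-envelope sketch of that calculation in Python (my assumptions: decimal gigabytes, the six-hour broadband transfer and the 45-minute flight quoted above):

# Back-of-the-envelope throughput comparison (assumptions: a 4 GB payload
# in decimal gigabytes, 6 hours over broadband, 45 minutes by pigeon).
payload_bits = 4 * 10**9 * 8        # 4 GB expressed in bits

broadband_seconds = 6 * 60 * 60     # "around six hours on a good day"
pigeon_seconds = 45 * 60            # Winston's expected flight time

broadband_mbps = payload_bits / broadband_seconds / 10**6
pigeon_mbps = payload_bits / pigeon_seconds / 10**6

print(f"broadband: {broadband_mbps:.1f} Mbps")   # ~1.5 Mbps
print(f"pigeon:    {pigeon_mbps:.1f} Mbps")      # ~11.9 Mbps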

Of course, such a solution isn’t without risk:

‘With modern computer hacking, we’re confident well-encrypted data attached to a pigeon is as secure as information sent down a phone line anyway.

‘There are other problems, of course. Winston [the pigeon] is vulnerable to the weather and predators such as hawks. Obviously he will have to take his chances, but we’re confident this system can work for us.’

Though the story is amusing, the point it reinforces is, I think, a helpful one – namely, that a particular technology might not be the best solution to a business problem. It may just be due to the area I work in, but I have seen instances where organizations are so focused on the use of technology (or in some cases a particular type of technology) that they don’t consider alternatives that may achieve their goals better, cheaper or faster.

I’m certainly not advocating the widespread use of PigeonNets, but the story is an amusing example of someone overcoming the law of the golden hammer.

mozilla prism

Prism is a very interesting little development that the Mozilla folks are working on. I don’t recall where I read about it – probably Slashdot. The nub:

Prism is an application that lets users split web applications out of their browser and run them directly on their desktop.

It comes with an illustration that neatly captures the reason for the name and the functionality.

I haven’t yet tried it myself, but I find the concept of further blurring the distinction between the network or server and the local machine quite intriguing.

Fair Use and the DMCA

An article in Wired News with the dramatic title of “Lawmakers Tout DMCA Killer” describes the most recent attempt to either: (a) water down the protections afforded to content owners by the DMCA; or (b) ensure the preservation of fair use rights on the part of users. As usual, each side has its own rhetoric to describe what is happening, so in fairness I took the liberty of offering readers of this blog the two alternative descriptions above. The nub:

The Boucher and Doolittle bill (.pdf), called the Fair Use Act of 2007, would free consumers to circumvent digital locks on media under six special circumstances.

Librarians would be allowed to bypass DRM technology to update or preserve their collections. Journalists, researchers and educators could do the same in pursuit of their work. Everyday consumers would get to “transmit work over a home or personal network” so long as movies, music and other personal media didn’t find their way on to the internet for distribution.

And then of course on the other side:

“The suggestion that fair use and technological innovation is endangered is ignoring reality,” said MPAA spokeswoman Gayle Osterberg. “This is addressing a problem that doesn’t exist.”

Osterberg pointed to a study the U.S. Copyright Office conducts every three years to determine whether fair use is being adversely affected. “The balance that Congress built into the DMCA is working.” The danger, Osterberg said, is in attempting to “enshrine exemptions” to copyright law.

To suggest that content owners have the right to be paid for their work is, for me, a no-brainer. That said, I wonder whether the DMCA and increasingly complex and invasive DRM schemes will ultimately backfire – sure, they protect the content, but they are also a pain in the ass – just my personal take on it. For example, I’d love to buy digital music, but having experienced the controls that iTunes imposes and suddenly having all my tracks disappear, I just don’t bother with it now. Not to mention the incredible hoops one needs to go through to display, say, Blu-ray on a computer – at least in its original, non-downgraded resolution. Why bother with all of that?

I wonder whether this is, in a way, history repeating itself. I am old enough to remember the early days of software copy protection – virtually every high-end game or application used fairly sophisticated techniques (like writing non-standard tracks on floppies in between standard tracks) in an attempt to prevent piracy. Granted, these have never gone away altogether, particularly for very high-end software that needs dongles and the like, and of course there has recently been a resurgence in the levels of protection layered onto Windows, but after that initial, almost universal lockdown of software long ago, there came a period where it seemed many (if not most) software developers simply stopped using such measures. At least that’s what seemed to happen. I’m not quite sure why, but I wonder if the same pattern will repeat with content rather than software. I suspect not. But hey, you never know.

In the meantime, off I go, reluctantly, in the cold, cold winter, to the nearest record shop to buy music the old-fashioned way…