Friday 22 August 2008

Windows XP SP3: avoid possible problems - AMD processors, HP etc

If you're on Windows XP and haven't yet taken the plunge and upgraded to XP Service Pack 3, despite the urgings of Microsoft's automatic Windows updates, it's probably worth doing (e.g. it may improve battery life on laptops - see Microsoft's list of other SP3 fixes). But first, there's one gotcha with XP SP3 that I want to highlight.

If your PC has an AMD processor (or possibly even another non-Intel processor), you ought to check things out further before upgrading: some people with AMD-based computers (notably certain HP desktops) have had problems with continual rebooting / restarting of their computers, or a STOP: 0x0000007E error message, after trying to update their systems to SP3.

Here's a tip on that front: Jesper Johansson has provided a very useful free tool - download it and double-click it on your AMD computer to check whether your particular PC suffers from this problem; if it does, the tool offers to help you fix it.

Alternatively, install the KB953356 update directly from Microsoft before you try to install XP SP3.

If it's too late - if you've already tried to install XP SP3 and been hit by what appears to be this problem - don't despair yet; you may still be able to fix it after the event. As part of your troubleshooting, you could try the workarounds suggested by Microsoft involving disabling the intelppm driver.
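
For the curious, here's what that workaround amounts to at the registry level. This is a minimal sketch in Python (my own illustration, not Microsoft's code - run it as Administrator, at your own risk, on a machine that still boots, and check Microsoft's own instructions for your system first):

    # Hedged sketch: disable the intelppm service by setting its registry
    # Start value to 4 (SERVICE_DISABLED). Note the old value so you can revert.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Intelppm"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_QUERY_VALUE | winreg.KEY_SET_VALUE) as key:
        old_value, _ = winreg.QueryValueEx(key, "Start")
        print("Old Start value:", old_value)  # keep this so you can undo the change
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)  # 4 = disabled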

For how to prevent or fix other potential issues with AMD motherboards, and to avoid other possible problems with upgrading to Service Pack 3, see Jesper's excellent post.

Thursday 21 August 2008

Internet dating: better meeting online than in real life?

New Scientist reports: "You're now more likely to find your true love on the internet than at work or at a party, in the US at least - especially if you're over 45."

That comment was made in an article about an online survey of over 10,000 people aged between 20 and 54 who got married in the USA between 1 April 2006 and 31 March 2007, conducted by Harris Interactive for online matchmaking service eHarmony, which claims to use scientific methods to work out compatibility (eHarmony press release).

The survey was of course meant to highlight how many couples who got married met via eHarmony, but the New Scientist article mentions more general statistics (presumably based on the full report, which hasn't been made available publicly) showing that, of the couples surveyed:
  • 19% met online (compared with 14% in a previous similar survey of marriages between 2004 and 2005)
  • 17% met at work (down from 20%)
  • 17% met through friends (unchanged).

31% of the married couples aged 45-54 met online, compared with 18% of the 20-44 year olds - suggesting (which I think makes sense) that younger people have more ways than older people to meet potential partners, e.g. through college.

But still, that's a decent percentage of marrying couples of any age who met online. So, Ian, quite right - there's absolutely no shame in being on dating sites or looking for relationships online!

(For anyone interested: how to write the perfect internet dating profile?; the secret to successful chat-up lines.)

Vista: hotkeys to shut down Vista PC

An annoyance with Windows Vista for keyboard shortcut fans like me is that the old XP hotkeys don't work to shut down or power off a Vista computer.

Here are tips for how to power off or shut down a Vista PC via keyboard shortcuts; there are a few ways (plus a scripted alternative below):
  1. Using Ctrl Alt Delete
    1. Ctrl-Alt-Delete (i.e. hold down the Ctrl, Alt and Del keys at the same time)
    2. Alt-s (i.e. hold down Alt key then tap the s key) [or Shift-tab then Enter]
    3. s key twice then Enter [or Up arrow (↑) cursor key then Enter].

  2. Via the start menu
    1. Ctrl-Esc or the Win key (to call up the start menu)
    2. Right arrow (→) cursor key three times
    3. Enter / Return or the u key.
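
(And if you'd rather script it than memorise key sequences - my own aside, not part of the original tip - Windows' built-in shutdown.exe can do the job, e.g. called from Python:

    import subprocess

    # /s = shut down, /t 0 = no delay; use /r instead of /s to reboot
    subprocess.run(["shutdown", "/s", "/t", "0"])

Run it and the PC powers off immediately, so save your work first.)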


Thursday 14 August 2008

UK corporate insolvency jargon: guide / briefing for journalists etc

Administration, winding up, receivership... they're not the same thing. In the current "credit crunch" climate there's lots in the media about companies going bust, sadly. But there's going bust - and there's going bust.

Some journalists seem to constantly mix things up, use the wrong terms, etc, sometimes even in the same sentence - which irritates me almost as much as people pronouncing "Beijing" (and other Chinese words) as if it were French. In this case it's probably not their fault, the poor dears; I suspect many of them have never been briefed properly.

So for a change, instead of trying to demystify technology, I'm going to try to demystify bits of insolvency jargon - as it applies to UK companies only, which is a big enough task! (No cross-border stuff, no non-companies, just corporate insolvency.) If you know any mainstream business journalists who might be interested, please point them to this post.

Bankruptcy

Bankruptcy is only for humans - individuals, people. When an individual officially becomes bankrupt, a trustee in bankruptcy is appointed to realise (sell off) their assets and pay off their debts. In the UK companies can't go "bankrupt", though of course we do informally say "This is gonna bankrupt company X", or "Company X has gone bankrupt".

Outside the UK, companies can officially go bankrupt, depending on where - e.g. in the USA.

And the rules on insolvencies can be different for different types of entities depending on whether they're individuals, companies, partnerships, limited partnerships, limited liability partnerships (LLPs) etc. And that's just for entities set up in the UK, never mind elsewhere! I'm not even going to try to mention anything except companies.

Liquidation

Winding up = liquidation. Those terms you can interchange, at least in the UK.

Winding up is pretty much the end of the line for a company, whether or not it's bust (note that a perfectly solvent company could be wound up e.g. because it's served its purpose). The company winds down, or gets wound up, same difference - it stops trading, its assets are collected in and sold, and its creditors are paid - maybe in full, or more likely they'll only get a percentage of what they were owed if the assets weren't enough to cover all the company's debts in full.

All this is done by a liquidator appointed especially for the task, who's entitled to charge fees for their work and recover expenses from the asset sale proceeds before the creditors are paid their shares. Creditors put in their claims or "proofs of debt" to the liquidator, who decides whether or not to admit the claims as valid and eligible for a share.

The "pari passu" principle means that unsecured creditors get paid in proportion to the size of the debts owed to them - so if there's a payment or "dividend" of 1p in the pound, then someone who's owed £1 gets 1p, someone owed £8 gets 8p, someone owed £10 gets 10p and so on.

(In the UK only licensed "insolvency practitioners" or "IPs" can be appointed as liquidators. Yes, that means taking exams, having suitable experience, no convictions or discreditable behaviour etc. Most IPs in the UK are accountants, though there are a few IPs who are lawyers.)
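
To make the pari passu arithmetic concrete, here's a toy sketch in Python (my own illustration, not from any official source) of a pro-rata distribution:

    def pari_passu(assets_available, claims):
        """Distribute what's available pro rata among admitted unsecured claims."""
        total_owed = sum(claims.values())
        dividend = min(assets_available / total_owed, 1.0)  # capped at 100p in the pound
        return {creditor: round(owed * dividend, 2) for creditor, owed in claims.items()}

    # The 1p-in-the-pound example: £1 of assets against £100 of admitted claims
    print(pari_passu(1.00, {"A": 1.00, "B": 8.00, "C": 10.00, "D": 81.00}))
    # {'A': 0.01, 'B': 0.08, 'C': 0.1, 'D': 0.81} - i.e. 1p, 8p, 10p and 81p respectively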

Notice that I said unsecured creditors only get a share, pro rata to their proved and admitted claims, of what's available for distribution to creditors generally. Secured creditors should do much better. If a creditor has taken security from the debtor over a particular asset (e.g. a building owned by the company) - the most commonly known form being a mortgage or charge - then the creditor basically has first dibs on that property or asset. It can control its realisation: take "possession" of it, sell the asset as mortgagee, or appoint a "receiver" to collect income (e.g. rent payable by tenants of the building), manage it and/or help sell it, and use the income / proceeds to pay itself off and also meet the expenses of sale, the receiver's fees etc.

(And if the net sale proceeds are still not enough to meet the debt owed to the secured creditor, it can "prove" - i.e. claim against the debtor - for what's still owing, but on an unsecured basis only: it has to take its place in the queue along with the unsecured creditors for the balance.)

If there's anything left after all liquidation expenses are met and all the creditors are paid out in full, the surplus money will be divvied up amongst the company's shareholders, and the company is then dissolved, it is an ex-company, it is no more, it's a dead company, it ceases to exist, etc etc. (Unlike with people, who fortunately don't get dissolved or killed off after their bankruptcy creditors are paid off. Most of the time, anyway.)

There are two types of liquidation - voluntary and compulsory. A compulsory winding up is initiated by a winding up petition made to the court, e.g. by an unpaid creditor; the court hears the petition and decides whether or not to make a winding up order, and if it does, a liquidator is appointed. A voluntary liquidation is, wouldja believe, initiated voluntarily by the company itself (or its directors or shareholders).

What's a provisional liquidator? A liquidator who's appointed by the court before the winding up petition is heard - normally only in an emergency, e.g. massive fraud on the part of the company, danger of assets being spirited away etc - because normally there's a time gap between the petition and the actual hearing.

Receivership

Only a secured creditor can appoint a receiver. No secured creditor, no receivership. A company could theoretically be in liquidation and receivership at the same time - remember, the receiver only has control of the assets over which they were appointed (and over which the company gave security). A debtor doesn't need to give security over all its property - it can be selective about it - though some banks may not be willing to lend to some companies unless they give the bank security over everything, lock stock and barrel, so in practice, if they need to borrow money, some companies may have no choice about it.

Ah, you'll say, but what about "administrative receivers"? Who they? What's the diff? Administrative receivers are a particular kind of receiver, but they're now an increasingly rare breed. A creditor who has security over the whole (or substantially the whole) of a company's assets, including what's known as a floating charge (typically under what's called a "debenture" or "mortgage debenture"), is able to appoint an administrative receiver, who has wide powers to run the company's business and sell its assets and ultimately, hopefully, pay off the appointing creditor. (The Wikipedia article on administrative receivers isn't quite right on who can make the appointment / how it can be made.)

However, an admin receiver can now be appointed only in very few situations (cut down by the Enterprise Act 2002, which revamped administration - covered below), so you'll only hear of them in relation to companies which gave the appropriate kind of security before 15 September 2003, or in one of those exceptional cases (like the "capital markets exception" aka "capital markets exemption"). Again, only IPs can act as administrative receivers.

A "fixed charge receiver" or "LPA receiver" is just a receiver who's not an admin receiver - i.e. a receiver who's been appointed over limited specific assets of the company (rather than all the company's business and assets). They don't have to be licensed IPs, e.g. surveyors often act as receivers of land / buildings.

What are "official receivers" then? Just to confuse matters, they're nothing to do with secured creditors. They're civil servants. Think of them a bit like public defenders for people accused of crimes in the US - they're appointed to act as liquidator or trustee in bankruptcy for debtors who basically can't afford an insolvency practitioner of their own, i.e. for relatively low value estates.

Administration

Now on to administration. This insolvency procedure was introduced in the UK to try to promote a culture of rescuing companies to get them back on their feet and preserve jobs rather than, to put it at its most extreme, selling off companies' assets at very low fire sale prices and then just closing it all down (including letting employees go).

The key point is that when a company goes into administration it gains a breathing space - it can keep on trading, but there's what's known as a moratorium: creditors are banned from taking certain actions against it, e.g. petitioning for its winding up, repossessing assets (including assets on HP, i.e. hire purchase), selling mortgaged property, sending in bailiffs, etc (unlike in a liquidation, when mortgagees remain free to enforce their security). So companies can go into administration to get some protection from their creditors.

This gives the company time, with less firefighting of creditors etc to do, to work on a way for it to get out of its troubles with a clean slate, e.g. via a CVA (see below). Or maybe an administration, because of the ban on actions against the company, will enable its assets to be sold for better prices than if it went into liquidation, which would be better for its creditors. There's only a limited number of purposes for which a company can go into administration, in a certain pecking order - first and foremost of which is trying to rescue or salvage the company.

When a company goes into administration an administrator is appointed to run its business and to try to achieve the purposes of the administration and, again, the administrator has to be a qualified IP.

(The Wikipedia article on administration is by the way similarly wrong, or rather not 100% complete, on the appointment of an administrator.)

A secured creditor can block the appointment of an administrator by appointing an administrative receiver, but as mentioned above it's rare that an admin receiver can be appointed these days.

Company voluntary arrangements or CVAs

If a company enters into a CVA, it agrees some kind of arrangement or compromise with its creditors - typically one whereby they agree to accept less than the full amount due to them in settlement of the company's debts, with no further comeback against the company in future.

A couple of key points with CVAs. First, if at least 75% in value of eligible creditors vote in favour of the proposed CVA, all the creditors are stuck with it and have to take less than what's owed to them in full settlement (or whatever terms the CVA provides for) - even the creditors who voted against the proposals. This is sometimes known as a "cram down" of the creditors (see the toy illustration below).
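
By way of a toy illustration of the 75%-in-value maths (my own sketch in Python, simplifying the real voting rules):

    def cva_approved(votes):
        """votes: {creditor: (claim value, voted yes?)} for eligible creditors voting."""
        total = sum(value for value, _ in votes.values())
        in_favour = sum(value for value, yes in votes.values() if yes)
        return in_favour >= 0.75 * total

    # Creditors owed £60k and £20k vote yes; a creditor owed £20k votes no:
    print(cva_approved({"Bank": (60000, True),
                        "Supplier": (20000, True),
                        "Landlord": (20000, False)}))
    # True - 80% in value approved, so the dissenting landlord is "crammed down" too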

Second, CVAs are very rarely used unless they're e.g. preceded by an administration (which, you'll recall, temporarily gives the company a moratorium), because companies in CVAs don't get any protection from creditors and are often discriminated against by suppliers and others. (A company qualifying as a "small company" can get a moratorium when proposing a CVA, but I'm not going to go into the small companies moratorium - please see the link below.)

The IP who is appointed to oversee the implementation of a CVA, pay money out under it etc, is known as a "supervisor" (and before a proposed CVA is approved by the required 75% of creditors, the future supervisor is known as a "nominee").

(There's no such thing as a "corporate voluntary arrangement" by the way, officially - that's just a mistaken reference, or let's call it slang, or one kind of popular usage. The correct term is "company voluntary arrangement".)

Schemes of arrangement

Schemes of arrangement are another way for a company to come to an arrangement with creditors and shareholders, with different classes each voting separately.

A company doesn't have to be insolvent to enter into a scheme of arrangement; it may just use one as the most efficient and cost-effective way to restructure, reconstruct or reorganise itself and its business / affairs. This kind of scheme has been quite popular e.g. with insurance companies.

Schemes of arrangement used to be known as section 425 schemes, but should now strictly be called section 895 schemes (under section 895 of the Companies Act 2006, which in April 2008 replaced section 425 of the Companies Act 1985).

Workouts, restructurings, rescues etc

A company with only a few big creditors (e.g. banks) may be able to work something out with them without having to undergo a formal insolvency procedure: the banks may agree to give it more time to pay - e.g. a rescheduling of payments (typically postponing payment of principal, reducing interest payments for a period or suspending interest altogether for a time, etc) while the company tries to trade out of its difficulties - or they might agree to swap their debt for equity (shares in the company), or even lend more money to the company, etc.

The exact details of a workout or restructuring will vary with the situation of the company and its creditors - they will come to a private agreement amongst themselves. But the creditors may decide that agreeing to a workout to try and help rescue the company is better for them than letting the company go down completely, as they may ultimately have more chance of getting more of their money back via a workout than in a liquidation, administration or receivership.

Some workouts work out, some don't; there have been situations where a workout has failed and the company then has to go into liquidation or administration.

International and cross border issues

I emphasise again that I'm not even going to try to explain any cross border stuff. With non-UK companies, and especially if international groups of companies that operate globally are involved, things can get very complicated and messy indeed. F'rinstance:
  • It's possible for non-UK companies to be subject to UK insolvency proceedings, e.g. if they have assets or creditors here, or foreign courts ask the UK courts for assistance in a foreign insolvency.

  • There are special rules depending on whether companies do or don't have their COMI or "centre of main interests" in the UK or elsewhere in the European Union (which the UK is part of, of course) - again, even if they're not UK (or indeed EU) companies (that's under the EU Insolvency Regulation aka Insolvency Proceedings Regulation Council Regulation (EC) No.1346/2000 if you must know...).

  • Companies can go bust elsewhere, and if they're subject to insolvency procedures in a country outside the UK, the foreign IP equivalent may be able to apply for recognition in the UK and even get UK assets sent out of the UK to the other country (as the UK has adopted the UNCITRAL Model Law on Cross Border Insolvency - the USA has too, as Chapter 15, but don't assume it works similarly, nor that COMI under UNCITRAL is the same as COMI under the EUIR!).

And now...

Right, I hope that's all a bit clearer.

From now on, anyone who uses "administration", "receivership" and "liquidation" interchangeably should be reported to me for a very painful spanking (while being made to say "Beijing" with an English "j" one hundred times, just because)!

PS. Very funny but telling items on the origins of the current credit crunch, if you've not seen them yet: sub-prime primer (cartoons), and what caused the subprime crisis.

Further information

See the UK Insolvency Service's publications:
Also see Companies House's Liquidation & Insolvency FAQs.

Disclaimer

This post is only meant to provide very general information on the position in the UK, and it's just a canter through a very complex area - it's nowhere near what you'd call comprehensive.

Nothing in this post is intended to be insolvency, accounting, financial or legal advice. If you need specific advice, everyone's individual situation is different and you ought to consult a suitably-qualified insolvency practitioner or insolvency lawyer.


Tuesday 12 August 2008

Recovering deleted files from memory cards, flash drives / thumb drives with Restoration

If you accidentally delete files from a USB flash drive / USB key / USB stick / thumb drive or a memory stick / memory card like an SD card (which you can open and read on your PC using a card reader), unfortunately they don't go into a Recycle Bin from which you can restore them.

Fortunately, there is a way to get deleted files back from a removable disk, if you act fast - i.e. before you try to write anything else to that flash drive or memory card, or indeed ideally before you do anything else with it.

Plug the thumb drive into your PC, or insert the memory card into the card reader and plug that into your PC. Then try Brian Kato's excellent Restoration utility - a free little download that's been around for some years and works on all versions of Windows before Vista, i.e. XP back to Windows 95.

It can restore files deleted from hard drives too - but in that case you're better off downloading it to a USB key or flash drive (rather than the affected hard drive) and running it from there, so as to try to preserve the deleted files on your hard drive (which should still be there, just invisible). Better still, why not download Restoration in advance onto a thumb drive, and then in future if you need it you can just plug it in and run it from there.

NB do NOT download Restoration to the drive, stick or card containing the accidentally deleted files - download it to another drive!

It's easy to use, but here's a walk-through of how to use Restoration:

  1. After you've opened Restoration.exe from wherever you've downloaded it to, select the removable drive from which you want to recover deleted files.
  2. Click "Search Deleted Files".
  3. Normally I'd say "No" to "Do you want to scan vacant clusters?" (but if the search didn't turn up what you wanted, you might try Yes).

  4. The deleted files that it can detect are listed on the left. Simply select the files you want to restore by clicking on them, holding down the Ctrl key as you click if you want to select more than one file.
  5. Then click "Restore by Copying", and choose where on your computer (or external hard drive, or indeed the memory card or USB key) you want the selected deleted files to be restored to. That's it!

It's always worked perfectly whenever I've tried it. I've not needed to use it much as, obviously, I normally try to be careful not to delete files unless I'm very sure I don't need them. But whenever I've used it, it's done the job and saved my bacon. There's no guarantee it will always work, of course, but it's certainly worth a try.

Thanks for the lifesaver, Brian!

Monday 11 August 2008

"Interactive" training only works if properly planned






Increasingly, people seem to push interactive, participatory learning as a panacea: "To teach or train people properly you gotta make it interactive, man!"

Don't get me wrong, I think active learning is a good thing, and I can see the point of constructivist teaching methods based on constructivist learning theory. But even those techniques hold that "learning should build upon knowledge that a student already knows" and should promote "a student's free exploration within a given framework or structure", and they note that "While it makes sense to use these techniques as a 'follow up' exercise, it may not make sense to use them to introduce material" (my emphasis). In constructivism, the teacher's role as facilitator is vital. Furthermore, critics of these methods point out that "due to the emphasis on group work, the ideas of the more active students may dominate the group's conclusions" and that "instructors often design unguided instruction that relies on the learner to 'discover or construct essential information for themselves'".

If the statement I quoted rather irritably above is applied too simplistically, as a mindless mantra without proper work or, more disappointingly, thought behind it, it just results in people not being taught or trained anything useful at all.

Participation, schmarticipation

A story. When I started using the Web a few years back, I didn't even know what a link was. I went to a formal evening class because I thought it was the quickest way to find out about this InterWeb / World Wide Web thang. (Yes, I was born before there was a Web, I'm that old.)

The first thing the tutor did was tell us to go find X on the Web, without having explained what links are, what they do, or how to find them (i.e. blue underlining, the mouse cursor changes to a hand if you hover over a link), let alone explaining what search engines are, how you get to them, and how you use them.

Without giving us that foundation, without explaining those essential basic paradigms, he might as well have told us to go walk around in circles muttering. Which is pretty much what I did, in virtuality.

I could try doing what I liked, but "participation" in itself does absolutely no good unless you already have the base level of knowledge that's needed before you can "participate" properly, before you can try things out in order to cement something you've just learned. (To top it all, he tried to hit on me.)

"Interactive" groups, pah!

It's the same with "interaction". Interaction simply for its own sake - which seems to translate to "Yeah man, let's make it 'interactive' by splitting everyone into lots of discussion groups, woohoo!" - is a pointless waste of time, and insulting to the attendees at worst.

Even so-called problem based learning requires that the group be given a specific problem to discuss and solve, and to be effective it requires motivation, feedback and reflection. One reason it works is that it "taps into existing knowledge". My emphasis, again!

Many people worship the "learning pyramid" and cite it to say that in terms of teaching methods, "discussion group" (supposedly 50%) is much better than "lectures" (supposedly 5%) for retention of information. However, investigations into its origins show that in fact there is no hard research behind those frequently cited figures.

Undoubtedly, there's a grain of truth in it. Learning actively by doing, when you're given exactly the information you need, in context and at exactly the time that you need to use it, does help boost retention when compared with memorising things by rote in abstract isolation.

But in relation to discussion groups, it all boils down to one simple but critical thing, which many organisers / speakers ought to but don't always consider: What's the purpose of this session? What are you trying to achieve, exactly?

Do you just want the attendees to admire the speaker's wit and wisdom? To be mindlessly entertained? Do you just want them to be able to say they went to a session on X in trendy surroundings (in which case all you need do is entitle the session "On X" and leave them to it)?

Or do you actually want them to learn something, so that they'll come back when they want to learn more things or other things?

If you want them to learn and retain something, then you really need to think about what that is: what are the 3 key points (certainly no more than 5) that you want them to take away with them?

Another issue is, are you trying to teach them what they want to know, or what you think they need to know - which may be quite different! And in either case, again, what is that exactly?

The latter is particularly important in a business context. My previous day job involved a fair element of training, in the sense of trying to make sure certain people know the things the business needs them to know (whether or not they think they need to know it, and even if they think they already know it - more often than not, in fact, they don't).

So I have some practical experience of training and teaching. And that's number 1, to me: What's the purpose of the session? That has to drive everything - who you get to speak, the way you run it, everything.

Now back to "interactive". Interaction in the sense of a discussion group can definitely help people to learn and retain more. But just saying "Now go into groups and discuss this issue", even when coupled with "Then come back and report on what you think", is, without more, an utter waste of time - unless of course the purpose isn't for people to learn anything but just to get them to socialise or to listen to the witterings of other non-experts who nevertheless think they know everything.

If the point is for them to find out what other people's opinions are on a subject, fine.

But if the point is for them to learn something about the subject that they'll remember more than a few nanoseconds later, then the discussion needs to be:
  1. Carefully planned, in detail, in advance, and

  2. Facilitated or led by someone who:
    • understands the subject sufficiently
    • has a list of the "learning points" (pre-planned, and not too many of them: the famous 7 points +/- 2 principle (or myth?) relating to how much the human memory can hold, to ensure the cognitive load isn't too heavy - it may possibly be just 3 to 5 points) that need to be brought up during the discussion to ensure they are indeed raised and understood, and
    • is able to "pick on" people to get them to answer questions and participate (it's not a discussion if they won't volunteer to talk, is it?).
Herding people into random groupings without facilitators who know what they're doing, and expecting them to organise themselves into a group that can conduct a coherent discussion on a subject they know little about (otherwise why would they bother going to the "learn all about X!" session?), is as bad as sitting someone who's never used the Web before in front of a browser and telling them to go find X. Or worse. They're not likely to learn anything, and may even be put off.

Wouldn't you be fed up if you attended a session in order to learn about a subject you're interested in, only to find yourself dumped into a group of equally ignorant attendees and forced to listen to people who know as little about the subject as you do, or even less, dominating the proceedings by virtue of egocentricity, pushiness and loudness rather than actual expertise in the subject matter?

Yes, I'm talking about Dana Centre events, yet again. If they want to annoy people they're going about it the right way. Some friends I'd previously persuaded to go have never been back, and they haven't bothered to tell Dana why, they've just taken themselves off Dana's radar. I've been giving Dana another chance, and another chance, in the faint hope of another talk like Baroness Greenfield's, but so far nada.

For a single evening session where time is limited, by far the best format is talks by experts (or a channelled discussion amongst a panel of experts) before an interested audience in a single large room, followed by audience Q&A. The Royal Society of Arts clearly know this (and the format works well for an all-day session too, when properly facilitated, e.g. The Wealth of Networks 2008 conference on digital economies and the next generation internet) - but equally clearly, the Dana Centre don't.

There is a place for small participative groups, but only if they're properly planned and led (which takes days or weeks of preparation time, and requires enough properly-briefed experts to facilitate each group separately).

(As for making people trek between groups from room to room including up and down stairs while trying to balance plates of food, that's just a ridiculous waste of time as well as an unnecessary inconvenience, especially when the journey time (which may add up to 15 or 20 minutes total, or more) would be much better spent allowing (static) speakers to expound on their themes more fully to the (static) audience.)

Saturday 9 August 2008

Funny product name: got a Mopidick? Want one..?

Here y' go then.

Just apply to the affected area for instant relief. Or so they say.

All yours for less than $10 on eBay. Bargain innit?

(Actually, I can highly recommend the ointment equivalent, Mopiko, as the most soothing thing for itchy insect bites that I've ever encountered. Not tried the Mopidick lotion though. Not sure I dare. Wonder why they had to spell the last syllable in exactly that way??)

Wednesday 6 August 2008

The REAL Improbulus

I want to clear something up: I'm the original Improbulus, the real Improbulus, and nothing but the Improbulus. One word only.

When I started blogging I wanted a unique name, a handle that no one else at all had, so after much research I picked "Improbulus" (yes it's male, but the female form sounds odd(er)!). Absolutely no one else was using this name on the Internet at the time - "Improbulus" is a Latin adjective which I decided to use as a name.

A few months later, in 2005, I noticed that someone else had adopted the name I'd made up, added another word after it, and was going round posting rabidly right-wing rants all over the Net under the name "Improbulus Maximus".

I was slightly annoyed that someone else had appropriated my carefully chosen name but, apart from adding one comment on the Daily Kos to point out that he wasn't me, I left it alone. If you search just "Improbulus", my writings rank much higher than his anyway.

However, someone has begun posting comments on various blogs and sites asking if self-avowed liberal-hating Jim Adkisson, who recently opened fire at a children's performance in a Unitarian Universalist church in Tennessee killing 2 people and seriously injuring several others (more news reports), is Improbulus Maximus.

I've absolutely no idea if Adkisson is or is not this "Improbulus Maximus". But just in case there's any question about it, I want to make it crystal clear that I'm not "Improbulus Maximus" and "Improbulus Maximus" certainly isn't me.

Adkisson is a male American from Tennessee; "Improbulus Maximus" seems to be a pro-gun, racist homophobe (as does Adkisson). Anyone who's met me will confirm I'm a female Brit in London. Plus, I'm libertarian (more than liberal); I believe in people's right to defend themselves, but think guns and knives should be much harder to come by because they too easily lead to fatalities which could and should be avoided; and I'm very much in favour of racial equality and LGBT rights. And gender equality. So he's my absolute antithesis. Whoever he is. Which is most emphatically not me!

Monday 4 August 2008

Diginomics: why DRM promotes piracy, & why monetizing Web 2.0 is tough, etc

By applying principles of economics and management theory to analyse the nature and characteristics of digital goods and Web 2.0, economist Thierry Rayna (Thierry Rayna's papers / articles) and management scientist Ludmila Striukova (some papers) reach some pertinent conclusions and make some interesting and thought-provoking suggestions on the economics of the digital world, specifically on:
  • Piracy and DRM - why people pirate digital goods, why current technical implementations of digital rights management don't work (in fact are counter-productive) and are bad for society and consumers, and what kind of DRM might be effective while striking a fairer balance, and

  • Monetising Web 2.0 - why it's hard for content producers / providers / publishers to make money out of Web 2.0 under traditional business models, and what sort of new business model might work to monetise Web 2.0.
I first heard their ideas at "Diginomics" (the economics of digital technologies and Web 2.0), chaired by Thierry with a panel including Ludmila, at The Wealth of Networks conference 2008 (see that post for a summary and the MP3 recording of that session). It very much typified the "eureka scenario" for me, as the economics / management perspective was totally new to me, though no doubt not to others. There were lots of "lightbulb" moments when I was going, "Aha! That makes sense! That explains it!".

I later read their detailed papers with the same sense of excitement and dawning understanding. In this post I want to share, and raise some comments and queries on, their key theses - based on their session and the following papers written by them, which Thierry kindly made available to attendees:
Blogging. To whet bloggers' appetites, a couple of points on blogging struck me from the Web 2.0 paper (pgs. 5 & 12, my emphasis):
  • "Basically, in order to create a successful blog, the time spent on marketing is expected to be, at least, the same as time creating the blog." [Well there goes ACE then...]
  • Is a successful blogger one who understands technology and keeps up with change, or one who provides quality content? "The current incentive system, even for professional bloggers, is not so much about the quality of the content, but instead about the ability of the blogger to ‘play the rules of the game’ and make their blog more known than others." [Ditto!]
[In this context, they cited a 2006 book by blogger and internet marketing consultant Chris Garrett called Killer Flagship Content. However, I'd have liked to hear more thoughts from them personally on the economics / management principles behind successfully monetising blogs, and why, in economics terms, it's more important to increase fame and popularity than quality - but perhaps that will be the subject of a future paper.]

I'm doing this post in 3 parts:
  1. A summary of the nature of digital goods

  2. What that means in relation to DRM, and

  3. The nature and features of Web 2.0 and their implications for its monetization.

1. Digital goods - their nature and special features

Digital goods - e.g. music, movies, computer games, software or documents transformed into binary code, like MP3s, Flash movies, DVDs and PDFs - are unique because:
  • they're infinitely durable - perfect copies can be made easily and cheaply in all kinds of formats
  • they're "public goods" - different copies can be used by different consumers at the same time ("non-rival"), and producers can't stop non-paying consumers from getting and using their own copies ("non-excludable"), and
  • they're "experience goods" - you can't assess their value to you until you've experienced them, e.g. heard a song or seen a movie, so you're not willing to pay before trying them out via "sampling", free trials or tasters; yet suppliers rarely provide adequate samples.

Because digital goods are effectively public goods, piracy is in fact economically rational behaviour on the part of consumers - with the motivation often being to sample, not just to free ride - and it reduces demand for legitimate versions; on top of that, cheap perfect replicability drives the price of digital goods towards zero. All this hits the ability of creators / producers / providers to recover their initial production costs, never mind make profits on digital goods, so they're less willing to produce them - resulting in "under-provision" or under-supply, which is bad for society, unless there is appropriate public intervention or an adequate protection system.

The usual solution to this kind of problem is government intervention in the form of intellectual property rights (IPRs), in order to incentivise creators. Although IPRs in the form of patents may be sufficient in the case of inventions (or maybe not?), IPRs in the form of copyright don't have much effect in the case of digital goods in practice, because piracy is so widespread.

2. DRM - why it doesn't work and promotes piracy, & what DRM systems could work

DRM protection is the use of technical measures built into the product (e.g. a media file) from the get go, to enable content creators / publishers to control or restrict access to digital media and/or its distribution, sharing, copying or conversion into other formats, as well as to prevent its simultaneous use by more than one person. DRM is commonly implemented by encrypting the digital good and embedding in it DRM tags with information on the owner / device and their rights of usage, so that the good can't be consumed until it's activated: the player contacts the producer (or rather its server), which specifically identifies and authenticates the consumer (checks the owner / device info matches its records), checks the number of times the good has been used, etc, and, if all is as required, authorises the use by providing a decoding key. (The producer obviously won't activate or authorise copies it finds are pirated.) The most widespread DRM system currently is FairPlay, used by Apple to control and restrict use of both audio and video content in their products and services: iPods, iTunes and the iTunes Store.
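
To make that flow concrete, here's a minimal sketch in Python (entirely my own simplification - not FairPlay or any real DRM protocol) of the activation handshake just described:

    # Producer's server-side records: DRM tag -> registered device, usage allowance, key
    LICENCES = {
        "song-123": {"device": "dev-A", "uses_left": 3, "key": "decoding-key"},
    }

    def activate(tag, device):
        """Return the decoding key if the tag checks out against our records, else None."""
        record = LICENCES.get(tag)
        if record is None:                # unknown tag, e.g. a pirated copy: refuse
            return None
        if record["device"] != device:    # tag doesn't match the registered device
            return None
        if record["uses_left"] <= 0:      # usage allowance exhausted
            return None
        record["uses_left"] -= 1
        return record["key"]              # the player can now decrypt and play

    print(activate("song-123", "dev-A"))  # 'decoding-key'
    print(activate("song-123", "dev-B"))  # None - not the authorised device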

DRM aims to reduce piracy of digital goods by restricting who can access them and how, controlling how many times they can be used or consumed (played), and/or restricting their lifespan to a fixed period, which anyway can't extend beyond the life of the hardware device authorised to play them. (DRM could theoretically be used to provide better, more tailored sampling, but it isn't.)

In a way, DRM should be more effective than IPRs because you can only try to enforce copyright after it's been infringed (punitive) and have to bear the enforcement costs (litigation costs etc), whereas DRM aims to stop infringement from happening (preventative).

But in fact, DRM doesn't work, and is bad for consumers and society to boot.

DRM will work only if consumers can be persuaded to buy DRM-protected goods instead of getting them from pirates or filesharing sites. But people aren't switching to buying DRM'd goods. Why?

Because unprotected versions are still easily available (all it takes is one unprotected leaked copy for pirated copies to spread all over the Net), and DRM'd goods are less valuable to consumers than non-protected goods (legal or illegal), being inferior to them: you can't lend them, back them up or resell them secondhand; transferring them to other formats, media or devices is restricted or impossible; and if bought online they're often of lower quality (compressed) compared with, say, CDs or DVDs.

Rational consumers aren't likely to buy DRM-protected goods when they can get, with wider availability:
  • (for only a slightly higher price) legal, fully-featured unrestricted non-DRM goods (e.g. CDs), or
  • (for free or negligible cost) pirated, full-featured unrestricted non-DRM goods (ripped files) obtained through illegal cracking, burning iTunes downloads to CD then ripping from that, or through the analogue hole (if you can hear or see it you can copy it).
Even those willing to buy DRM'd goods won't pay as high a price for them as for unprotected goods. And, in fact, DRM protected goods are indeed usually priced more cheaply than the unprotected version - but generally (and deliberately) not cheaply enough to persuade people who buy legal unprotected versions like CDs to buy them instead. Suppliers feature-strip DRM'd versions so that the profitable class of consumer who is willing to buy more expensive unprotected versions (DVDs etc) won't want to switch to the "value-subtracted" DRM versions, and will keep on buying CDs and DVDs. So in practice DRM may induce some of the people who were pirating digital goods, or not consuming them at all, to buy DRM versions - and at least suppliers will get a bit of money from that class of consumer where they weren't before - but that's all it achieves.

As the authors put it (White Knight paper, pg.12):
"Thus, consumers are facing a dilemma. If they want to access digital goods online, they can either choose DRM protected files, which are legal, but have a low value due to the restrictions of DRM, and a comparatively high price; or they can download pirated digital goods, which are illegal, but have no restrictions, and are available at no cost. It can even be argued that law-abiding consumers are, in a way, “punished”: although they do pay for their digital goods, the digital goods they obtain have fewer features and involve tedious authorisation process. In contrast, consumers who decide to pirate obtain full featured digital goods, for free."

Consumers are generally more willing to risk buying "experience goods" (whose value is uncertain before they've consumed it) if the goods are durable and can be re-sold in the secondary market to recoup some of the initial purchase price should they not like it - e.g. printed books. But without the ability to on-sell purchased DRM-protected digital goods (because of the DRM), consumers will be even more reluctant to buy them without having tried them first. And, in fact, a major reason consumers download pirated digital goods from file sharing sites is in order to sample or try them. So unless producers come up with a better sampling strategy (at the moment it's very "one size fits all"), consumers will be even more likely to pirate. [Note: I'd be interested to see examples of exactly what kinds of alternative sampling strategies the authors have in mind as better options.]

The key point: consumers haven't been given enough incentives to buy DRM protected goods in preference to unprotected goods. On the contrary, paradoxically the restrictions and reduced features imposed by DRM, the increased risk to consumers due to the lack of a secondhand market, and the inadequacy of the samples or free trials currently provided, together all mean that DRM-protected goods are much less valuable to the consumer than unprotected digital goods (whether legal or illegal) - and the introduction of DRM has thereby increased piracy. DRM hasn't made it harder for consumers to access pirated digital goods; it's just made legal DRM-protected goods less attractive to consumers than pirated goods, and it's also made legitimate buyers more willing to share their purchased digital goods.

DRM is also bad for society because the above factors mean that consumers avoid buying DRM-protected digital goods, leading to their "under-utilisation" - which, like "under-provision", is bad for social welfare.

Furthermore, DRM systems are usually incompatible with each other and non-interoperable, partly because of the lack of common standards for DRM, so there's a risk of anti-competitive, even monopolistic, behaviour on the part of suppliers, and consumers have to bear switching costs (which are generally more important in networked than non-networked environments) if they move systems - which adds to their reluctance to buy DRM-protected goods, as they may not want to be "locked in" to one system. So from a public policy viewpoint, as a minimum pre-requisite for DRM to be considered socially beneficial there would need to be a universal DRM technology based on open standards.

However, even if DRM systems were standardised, the benefits of DRM for society are still questionable. It aims to prevent piracy, but piracy can never be prevented while non-protected digital goods are available - even a single unprotected copy is enough to start "a stream of piracy".

The authors conclude that current DRM systems are wasteful and socially undesirable, decreasing the welfare of society as a whole, because:
  • DRM doesn't encourage consumers to buy DRM-protected goods and doesn't stop piracy, so pirating consumers are in the same position as before the introduction of DRM while law-abiding consumers are worse off due to the lower value of DRM'd goods, and
  • DRM is costly - to produce deliberately value-subtracted goods involves additional costs (it would actually be cheaper for firms to distribute full featured legal goods than to strip them down), and there are also costs to develop and continually upgrade DRM systems to counter new cracks, which together outweigh the benefits to suppliers of the extra money they might get from persuading pirating consumers or non-consumers to buy DRM'd goods (recall that they're unlikely to get buyers of more expensive unprotected goods to switch, and indeed they don't want to).
Also, DRM systems make anonymity of consumption impossible (since they work by identifying users), so they have privacy implications, which is another social concern - the collection of information about consumers and their consumption activities, often without their knowledge. These concerns may further deter consumers from buying DRM-restricted products. (See the Privacy paper for definitions of privacy; the authors settled on: "Privacy generally guarantees that personal information, which is not in the public domain, is not released without authorisation.")

As the authors say (White Knight paper, pg.17):
"Instead of stripping digital goods of their distinctive positive features, firms using DRM should instead increase the value of protected digital goods. So far, law abiding consumers are punished for their honesty: the digital goods they pay for have less features than pirated digital goods. Such consumers should, on the contrary be rewarded. It is clear when examining the current DRM policies used by the firms that they do not use DRM to its full potential, but merely as a way to capture additional surplus from honest consumers, who end up paying for pirating consumers. DRM is a very powerful tool, and it could enable firms to achieve near-first degree price discrimination [i.e. charging individual consumers differently, selling at a higher price to someone who's willing to pay more]. But this would certainly require a complete rethinking of firms marketing and pricing strategies."

Is there a form of DRM which would reduce piracy while protecting privacy?

DRM could theoretically be designed to collect enough information to allow first degree price discrimination (charging different individual customers different prices depending on how much each is prepared to pay), to make more money for suppliers. If DRM tracks every consumption of a particular good (e.g. each time you play a song), suppliers can work out the value of the good (or type of good) to the individual consumer and charge them accordingly (Privacy paper pg. 6): "For example, consumers who listen to a certain group/artist on a regular basis could be charged a standard price when a new album is released, whereas those who are not familiar with this group/artist could be offered a discount to encourage the purchase."
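
Here's a toy illustration in Python of the kind of usage-based pricing that quote describes (the numbers and thresholds are entirely my own, purely hypothetical): established fans pay the standard price, unfamiliar listeners get a discount.

    STANDARD_PRICE = 8.00

    def album_price(plays_of_artist_last_month):
        """Price a new album from tracked listening history for that artist."""
        if plays_of_artist_last_month >= 20:
            return STANDARD_PRICE              # established fan: full price
        if plays_of_artist_last_month >= 5:
            return STANDARD_PRICE * 0.75       # casual listener: modest discount
        return STANDARD_PRICE * 0.5            # unfamiliar: half price to tempt them in

    print(album_price(42))  # 8.0 - a regular listener pays the standard price
    print(album_price(0))   # 4.0 - a newcomer is offered the discount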

However, current DRM systems are designed to collect relatively little user information. First degree price discrimination isn't commonly used for digital goods because it's unprofitable - as long as there are alternative sources for digital goods, the consumer would switch to a different supplier for goods they value more, buying only goods they value less (and which are therefore priced lower, e.g. discounted) from the discriminating supplier, so the discriminating seller would end up making less and less money. Also, of course, consumers are unlikely to be willing to disclose enough information to allow firms to charge them more for the products they like! If DRM attempted to track this information, it would further reduce the demand for DRM protected goods and increase demand for unprotected goods.

The authors suggest the possibility of what they call a "mutually advantageous disclosure" or "rewarded disclosure" DRM system: firms would pay (share with?) consumers a certain portion of their higher profits (higher due to being able to apply first degree discrimination), in return for consumers disclosing the extra information, and the result would be better for consumers, firms and society (including allowing firms to make enough money to recover their initial sunk costs i.e. fixed costs of production). For the maths behind this idea, which I won't even try to go into, see their model of the demand function on pages 7-8 of the Privacy paper and their graph - quantity on the x axis, price on the y axis.

The authors note that price discrimination would be easier for repeat consumption products (music, software, games) or products supplied in parts (TV show series / serials) than for films or books. Also, consumers who value privacy intrinsically may still not be willing to disclose more personal information unless the reward is still higher - even so, the authors think the potential gains would make it worthwhile for suppliers to offer the higher reward.

[Personally, especially as I'm no expert in economics, I would find it helpful to see concrete hypothetical examples, with figures, of precisely how all this might work. What extra personal information would be given, precisely? How would suppliers calculate exactly what amount should be "given back" to which individual? Would each person get the same proportion, or would different individuals be rewarded differently? I think a major issue will be consumer trust, and transparency on the part of providers. Can consumers trust that suppliers, having been given enough information to make even more money from them, will actually then pay the due reward over to consumers? How will an individual know if the amount of "reward" rebated to them is the correct fair amount? And how do they ensure they'll receive the right amount?]

Another interesting idea is for an alternative type of DRM. Current DRM systems work by authenticating users and controlling lifespan. The authors suggest it's possible to design what they call a "rivalness-based DRM system" that ensures each unit of the digital good (e.g. a media file) can only be used by one consumer at one time - i.e. by:
  • identifying each unit of the good (e.g. through a unique ID code or serial number for each unit), rather than identifying the individual customer, and
  • contacting a central server before each consumption to check that the unit is not already being used.
The key point: any number of copies of a unit can be made, but if someone is already consuming a copy of that unit (e.g. playing a music or movie file), the server won't let anyone else play any other copy of the same unit. Owners of other units of that product can however play their units independently.

Example: say I record a death metal version of Greensleeves. With a rivalness-based DRM system, unit 1 of my recording is given the unique serial / ID no. of 001. Unit 2 bears unique serial no. 002. And so on. My mum, who buys unit 1 (001), can make as many copies of unit 1 as she likes, and store copies on e.g. her home computer, work computer, portable MP3 player, car MP3 player etc - but only one of those copies can be played at a time. As long as it's still playing on her home computer, the copy on her iPod (or car etc) can't be played. However, playing unit 001 won't stop the owner of 002 from playing their unit, because it has a different ID number (even though it's the same recording of the same song). So my best friend, who bought unit 2 ID no. 002 of my recording (with only a little arm twisting), can play her unit 2 at the same time as my mum is playing her unit 1. Different units can be used at the same time; but different copies of the same unit can't be.
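
Here's a minimal sketch in Python (my own toy model of the mechanism just described) of the server-side check - note that it tracks units, not people:

    playing_now = set()  # serial numbers of units currently being consumed

    def start_playing(unit_id):
        """Ask the central server for permission to play a copy of this unit."""
        if unit_id in playing_now:
            return False              # another copy of this unit is already playing
        playing_now.add(unit_id)
        return True

    def stop_playing(unit_id):
        playing_now.discard(unit_id)

    print(start_playing("001"))  # True  - Mum plays unit 001 on her home computer
    print(start_playing("001"))  # False - her iPod copy of 001 is refused meanwhile
    print(start_playing("002"))  # True  - my friend's unit 002 plays independently
    stop_playing("001")
    print(start_playing("001"))  # True  - once 001 stops, another copy may play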

This method has advantages for both suppliers and buyers:
  • It allows copying to different devices and for backup; only one copy can be played at a time, but other copies can be made and used.
  • It identifies the unit, not the consumer, and tracks usage of the unit, not the individual consumer - thus preserving personal privacy and anonymity.
  • It reduces consumers' willingness to share copies and so reduces the dissemination of illegal copies - people are happy to share music and video files because sharing doesn't deprive them of the use of their own copy in future. But if only one person can play a copy of any one unit at a time, the original owner / buyer of that unit won't be so willing to let other people have copies: if someone else happens to be playing a copy of the unit at a time when the original owner wants to play it, the original owner won't be able to; and the more people who have a copy of a unit, the more likely it is that someone else will already be using it when the owner (or anyone else) wants to. So legitimate owners have good reasons not to share copies of their files (just as they'd be reluctant to lend someone their car, or garden fork, if they know they're going to be needing to use it). This also means there will be fewer illegal copies around, as most copies would remain in the hands of their original legal buyers - so if a crack is discovered, suppliers would have more time to update their DRM system to counter it before illegal copies became too widespread.
  • It re-aligns the interests of suppliers and consumers by transferring the burden of piracy on to consumers. Currently consumers don't suffer from piracy (except in the indirect, weak sense of fewer digital goods being created generally). In fact, they benefit from it, because piracy results in more goods they can get for free. With rivalness-based DRM, it's not just suppliers who suffer from piracy (due to decreased sales) - it's consumers too (due to inability to consume the goods they've bought if too many copies of their unit are in use).
  • It decreases the value of pirated digital goods and of piracy generally - there's little incentive for consumers to acquire or disseminate pirated copies, because they can't use their copy if someone else is already using it, and the more copies that are spread around, the more likely it is that someone else is already using it, so in time a widely pirated unit would become totally unusable. A pirating consumer currently bears a (small) risk of being sued for copyright breach; with a rivalness-based DRM system, the pirating consumer would face a (more likely, and much larger) risk of not being able to use the pirated good at all, a risk that would increase as more people pirated it.
There's one obvious difficulty. A rivalness-based DRM system requires collecting usage information in real time, for all copies of particular units - but how can it do that if the media player or computer isn't connected to the Net at the time? The authors suggest collecting and storing the usage information and uploading it to the server only as and when the device is connected to the internet (or to a connected computer), and if it's found that more than one copy has been used on more than one machine at the same time, it will trigger a "punishment mechanism" stopping all copies of that unit from being used for a certain period of time (or other units, in the case of goods consumed only once). [Is this a loophole? It may be possible to ensure certain machines are never connected to the Net or to a computer that's connected to the Net; if copies are played only on those machines, their use can never be detected and they can never be "punished". If the device is a multimedia phone, checking existing usage over the Net would also incur data charges for the phone owner, perhaps behind the scenes, which won't endear the system to the owner unless they have a flat rate unlimited data plan; and if it's an N95 smartphone like mine, the owner can just deny the player permission to access the Net! I suppose the system could be set up so that if a unit's usage is not checked at least once every say 3 or 5 plays on the same device then it stops working, in order to force the owner to connect it.]
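Again purely as an illustration (the authors don't specify an implementation; the overlap test and the punishment policy here are my assumptions), the deferred-checking variant might look something like this:

    # Hypothetical sketch of the "check on next connection" variant: the
    # device logs play intervals offline, uploads them when next online,
    # and the server punishes a unit if logs from two different devices
    # overlap in time (i.e. simultaneous use of copies of the same unit).

    from datetime import datetime, timedelta

    class UsageAuditServer:
        def __init__(self, penalty=timedelta(days=7)):
            self.intervals = {}      # unit_id -> list of (device, start, end)
            self.blocked_until = {}  # unit_id -> datetime
            self.penalty = penalty

        def upload_log(self, unit_id, device, start, end):
            for other_device, s, e in self.intervals.get(unit_id, []):
                # two intervals overlap iff each starts before the other ends
                if other_device != device and start < e and s < end:
                    self.blocked_until[unit_id] = datetime.now() + self.penalty
            self.intervals.setdefault(unit_id, []).append((device, start, end))

        def is_blocked(self, unit_id):
            until = self.blocked_until.get(unit_id)
            return until is not None and datetime.now() < until

    server = UsageAuditServer()
    t0 = datetime(2008, 8, 1, 20, 0)
    server.upload_log("001", "home_pc", t0, t0 + timedelta(hours=1))
    server.upload_log("001", "ipod", t0 + timedelta(minutes=30), t0 + timedelta(hours=2))
    print(server.is_blocked("001"))  # True - simultaneous use was detected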

Their suggested alternative system is certainly ingenious and it will be interesting to see if anyone can create such a system and get it to work; personally, being keen on my privacy, I like the sound of this option much better than the "Keep existing DRM but pay consumers to disclose more personal information" option.

Now, on to how to monetize Web 2.0.

3. Monetizing Web 2.0

The really major issues for society that the authors raise about Web 2.0 (apart from the old chestnut of copyright) are:
  • How do you fund the production of original content on Web 2.0? The most successful strategy so far seems to be to provide free content whose production is funded through selling ads.

  • How do you incentivise the production of quality content? On Web 2.0, more often than not the monetary rewards go, not to the producers or creators of that content, but to intermediaries (with the exception of ad-funded blogs and the like).
Incentives for Web 2.0 content producers are currently provided mainly not through price, as in a traditional free market, but through advertising income. The problem this creates is that the relationship between ad income and value is loose, so market distortions and inefficiencies are likely to result. [Here I'd have liked to see reference to studies on the links between ad income and actual market / social value, or other explanations of why they're not closely coupled enough. Or maybe that's a principle that's just obvious to economists? Especially as the authors acknowledged that advertising has still proved to be the most successful model so far, I'd have liked more on why they think it doesn't work, and on why and in what way advertising income and market value aren't tightly coupled enough.]

Furthermore, most Web 2.0 content creators are mainly incentivised by non-financial considerations (reputation, altruism etc) which don't tie in with the social value of what they produce.

Web 2.0 doesn't operate like a simple traditional market, because of two key factors: the economic nature of digital goods, explained above (which means the main issue with monetising Web 2.0 is the large volume of freeriding, i.e. piracy), and transaction costs coupled with the low value of most Web 2.0 content. So, what are "transaction costs"?

Transaction costs and search costs

In economics terms, transaction costs (the costs incurred in making an economic exchange) will include search costs (the costs of checking what's the best or most suitable product, where is it available, which seller offers the lowest price etc).

There are 2 types of search costs: external search costs (the monetary cost and the opportunity cost of the time taken searching), which depend on technology etc and are usually the same for everyone; and internal search costs (cognitive costs), which vary with the individual consumer (thinking time / load to formulate search queries, analyse results in order to make decisions, etc).
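In symbols (my shorthand, not the authors'), the search cost facing consumer i might be written as:

    C_{\mathrm{search}}(i) = C_{\mathrm{ext}} + C_{\mathrm{int}}(i)

where the external component C_ext is roughly the same for everyone and can be driven down by technology, while the internal (cognitive) component C_int(i) is personal and grows with the volume of information consumer i has to sift through.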

Search engines like Google have reduced external search costs effectively to zero, but cognitive costs remain, and indeed grow with the amount of information to be processed. Search costs are further increased by the availability of more and more content generally (because Web 2.0 has facilitated content creation by the masses), more and more of which is relatively invisible / inaccessible, i.e. "private" content not indexed by search engines (whose existence also increases external search costs), and more and more of which is multimedia content like videos and images - not accessible to search engines unless the content producer tags it.

Web 2.0 - the costs, incentives and inefficiencies

It costs content creators / publishers time / money to tag their content accurately, and since producers by and large don't directly benefit from their contributions, why should they expend even more time / effort on tagging? So, they don't.

Now if I upload a video without tagging it, it costs other people (lots of people, potentially the whole connected world) extra time to try to find my video via current search engines (or, perhaps more likely, to find other stuff that they're really looking for amongst the extra "noise" added by my content!). The publisher fails to tag, yet it's society, not the publisher, who has to bear the extra (retrieval) costs. It costs society more than it cost the creator. (That's what economists call a "negative externality").

The result, as with other negative externalities: too much untagged multimedia content is produced by publishers, increasing search costs more and more, which, as the authors put it, is socially inefficient. Less content, but fully tagged, would actually be more beneficial to society.

Most contributors don't benefit directly (or indeed financially) from their own produced content. So, why do people contribute to Web 2.0? Incentives are similar to those with open source software: the immediate satisfaction of bug fixing/producing content, delayed benefits (reputation, ego gratification, career improvement), altruism and community identification. These incentives, along with the desire to publish accurate facts particularly in areas of personal interest (Wikipedia contributors), and the attractions of receiving positive (or indeed any) feedback, are sufficient to motivate production of at least some content.

Also, although it's rare, some professional bloggers do make money, a few of them good money, through ads, affiliate commissions, product sales, donations etc. However, substantial investment is needed first to generate reputation (participation in forums, online communities, social networking sites) and traffic (marketing the blog: knowledge of blog publishing software, feed aggregators, blog carnivals, SEO, tagging etc). Indeed, to create a successful blog, as much time has to be spent on marketing as on creating content.

The main issue is that, presumably with the exception of professional blogs (which are rare), the (often subjective) incentives for Web 2.0 contributors - the private benefits perceived by the contributors - don't match up to the social value, the actual benefit to society, of their contributions: a valuable contribution benefits society more than it benefits the contributor, who can't capture that extra value. (In economics terms, there's a "positive externality".)

Although there are some incentives to produce valuable high quality Web 2.0 content, unfortunately there's no Web 2.0 mechanism to systematically ensure that the incentives for producers / creators match up closely enough with the social value of their contributions. (Despite the long tail concept, which might suggest that all content has some social value, research shows that on e.g. YouTube, 10% of videos account for 90% of the views.)

Web 2.0 is unusual in being the source of both negative and positive externalities at the same time. The production of extra low value content costs society more than it costs the creator; while high value content benefits society more than its creator can capture (Web 2.0 paper pg. 9): "Unfortunately, this means that low (social) value content is very likely to be over-produced, while high (social) value content is, probably, produced in insufficient quantity, thereby leading to an inefficient outcome".

The large supply of content reduces the market value of all online content generally, and thereby reduces the supply of good quality content. Too much free (or low cost) low quality content competing for consumers' time and attention crowds out the good quality content (in economics, the market for "lemons"). Where there are information asymmetries between buyers and sellers, e.g. regarding the quality of Web 2.0 content (producers know more about the content than consumers do; consumers won't know the quality of the content till after they've consumed it), what's known as "adverse selection" applies: "bad" products are more likely to be selected than good ones.

Where there's adverse selection, consumers' willingness to pay is usually a weighted average of the quality present in the market. So if there's proportionally a lot of low quality content, consumers' willingness to pay becomes close to zero - so much so that higher quality content may be driven out of the market altogether.
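A stylized worked example (the numbers are mine, just to illustrate the mechanism): suppose 90% of content is low quality, worth v_L = 0.1 to the consumer, and 10% is high quality, worth v_H = 1. If consumers can't tell which is which before consuming, their willingness to pay is the weighted average:

    \mathrm{WTP} = \pi_L v_L + \pi_H v_H = 0.9 \times 0.1 + 0.1 \times 1 = 0.19

If producing high quality content costs more than 0.19, its producers exit; the proportion of low quality content then rises, willingness to pay falls further, and the market can unravel entirely - the classic "lemons" spiral.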

To counter adverse selection, producers normally use "signalling" to indicate the quality of their product objectively and clearly. But signalling is unlikely to be effective in Web 2.0:
  • the usual signalling strategies (guarantees, money back etc) don't generally apply to Web 2.0 content;
  • the cost of signalling (especially for bloggers) may not be thought worthwhile;
  • the value of Web 2.0 content to the consumer is often subjective rather than objective, so it's hard to arrive at universal signalling criteria;
  • the multitude of amateur producers lack the market knowledge / experience to assess the market or social value of their product, so they may overestimate the quality of their own content and decide to signal, while producers of higher quality content may not signal; and
  • professional bloggers need to spend as much time promoting their blogs as writing them, as the current incentive system rewards the blogger's ability to play the rules of the game and make their blog better known than others, more than it rewards the quality of their content.
[Is the point here that signalling doesn't work with blogs because it's fame rather than quality which matters there? Here I'd have liked to see more evidence / research and economics explanations as to why that's the case.]

In summary, Web 2.0 is largely non-monetary and suffers from economic inefficiencies like search costs, crowding out and adverse selection: the costs and benefits of producing extra content, to the creator and to society, don't match up, so too much low quality content is produced, which drowns out / drives out the good content.

Possible business models for Web 2.0

The authors, as previously mentioned, feel that the "free, funded by advertisements" business model often used in Web 2.0 is inefficient, in terms of the market and society, because the link between ad income and the market / social value of the Web 2.0 content produced isn't close enough. What alternative models might be used?

Pay per use. Monetizing Web 2.0 by switching from a free model to a "pay per use" or "pay per access" model would involve large transaction costs, as there would be an enormous number of transactions owing to the huge amount of content and the large number of producers.

Micropayments are the most likely route, given the relatively low value of most content. But micropayments won't be worthwhile, and so won't be adopted, unless the transaction costs are low enough - and that means not just the monetary costs (e.g. payment system fees), but also opportunity costs (time spent on and leading up to the purchase) and cognitive costs (again, on and leading up to that particular purchase).
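A back-of-envelope check, with entirely made-up numbers, shows why. A micro-purchase only happens if the content's value to the buyer covers the price plus the buyer's transaction costs, and the price net of fees covers the seller's costs:

    v \geq p + c_{\mathrm{buyer}} \quad\text{and}\quad p - f \geq c_{\mathrm{seller}}

Say a blog post is worth v = 10¢ to a reader, the payment system charges f = 2¢ per transaction, and the reader's time and attention around the purchase are worth c_buyer = 10¢. Then no price works: even at p = 0 the reader's total cost already equals the content's value, and any price high enough to clear the fee only makes things worse. The monetary spend is trivial; the hassle is what kills the deal.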

For consumers, "pay per use" costs them in terms not only of money but also time and cognitive costs, e.g. checking out options fully to ensure they're forking out their hard-earned (even micro) cash on the best value product for them. And with experience goods, even extensive research may not help - only actual consumption enables proper evaluation. Also, there are the costs of coordinating with the supplier, entering into a contractual relationship, and the potential costs of dealing with any problems that might arise (which people would shrug off if it was free). For relatively low value goods, would the extra "hassle factors" (including search costs) be worth it to consumers, even if the actual monetary spend is very small?

For producers / providers, even reducing transaction costs might not make pay per use profitable enough - with information goods the best route to profit has normally been bundling the goods and/or their use, so much so that different independent producers get together to package their goods into one bundle sold in one transaction (e.g. MacHeist), and thereby lower transaction costs.

All of which is why the authors think pay per use doesn't seem to be a viable way to monetise Web 2.0.

Subscription fees. A subscription route may be better (e.g. subscribing to a feed), as it would reduce the volume of transactions (one subscription per year instead of a payment per use). But it won't work for all Web 2.0 content - e.g. irregularly published content is hard to price, and there would still be lots of transactions given the large number of content producers. A subscription via an intermediary like YouTube, probably a flat fee to reduce transaction costs, might work - the intermediary could then divvy up fees amongst creators based on e.g. usage. Yet no attempts along those lines have so far worked out. [Note: I'd have liked to see examples here, and suggestions of reasons based on principles of economics as to why they've not worked out and aren't considered viable.]

In Web 2.0, participants are both consumers and providers, so exchanges between them are many and frequent (which according to transaction cost theory is why firms and corporations arose: there's a point where transaction costs become too high for exchanges between individuals in a market environment, resulting in the creation of "non-market entities" like companies). Charging for so many exchanges would be very costly, particularly in terms of time and cognitive resources, so it's not surprising that a very collaborative environment like Web 2.0 has developed into an environment which is primarily non-monetary. But will it always stay that way?

Another way to monetise Web 2.0 - a "demand-driven" Web?

The authors believe the key challenge to monetising Web 2.0 is: how to better align incentives for producers with social value (i.e. how to incentivise providers to produce higher quality content), in light of the economic characteristics of digital goods as public goods (which means piracy can't in practice be prevented, so reproduction / distribution of existing digital goods won't be very profitable), without incurring excessive transaction costs (e.g. how to reduce the volume of transactions).

They suggest one method which would address all 3 of those issues: instigate a demand-driven Web 2.0, instead of or alongside the current supply-driven Web 2.0 - i.e. publish content (whether pre-existing or created to order) only to order.

That should incentivise the production of high quality content: those demanding it would be willing to pay for it, and producers who believe their content has high value would wait for demand before publishing (while those who think their content is relatively low value would continue to publish as now).

Access to the first ever unit of any digital good can be fully controlled, so at the time of first publication it can be charged for; but after it's been published, copies will become available to lots of people over time, so there's little point in insisting that every copy be charged for forever, or in trying to restrict access or copying (and a compulsory charge only for the initial publication would also reduce the number of transactions, and therefore transaction costs, compared with pay per use). However, because it takes time for the goods to spread amongst consumers, some people may still be willing to pay for early access (as in the case of e.g. breaking news), so the producer and first buyers can charge for access in the meantime - which means initial buyers should be willing to pay even more for the first unit, as they can charge for access to their copy in the early days. A new field of research in economics has in fact shown that an efficient competitive market can be achieved with digital goods, even though they're public goods, as long as there is "finite expansibility" (i.e. they don't spread through the economy instantly).

The authors think such a demand-driven Web would involve an intermediary with whom potential providers register existing content on offer (e.g. tagged holiday pics) or their ability to produce content (e.g. coding skills), and potential consumers register their needs and demands. The intermediary's system would match suppliers and buyers, who would agree on price, etc. After supply of the digital good, although the producer retains the copyright in it everyone - buyers too - would be allowed to distribute or resell it. A reputation system could also be used (feedback, ratings etc presumably).
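As a toy illustration of what such an intermediary might do (my sketch - the paper describes the idea, not an implementation, and all the names here are invented):

    # Hypothetical sketch of the demand-driven intermediary: providers
    # register offers (existing content, or skills to produce content),
    # consumers register demands, and the system matches them by tag
    # where the asking price fits within the consumer's budget.

    class DemandDrivenMarket:
        def __init__(self):
            self.offers = []    # (provider, tags, asking_price)
            self.demands = []   # (consumer, tags, max_price)

        def register_offer(self, provider, tags, asking_price):
            self.offers.append((provider, set(tags), asking_price))

        def register_demand(self, consumer, tags, max_price):
            self.demands.append((consumer, set(tags), max_price))

        def match(self):
            """Yield (consumer, provider, price) where tags overlap and
            the asking price is within the consumer's budget."""
            for consumer, wanted, budget in self.demands:
                for provider, offered, ask in self.offers:
                    if wanted & offered and ask <= budget:
                        yield consumer, provider, ask

    market = DemandDrivenMarket()
    market.register_offer("alice", ["alps", "holiday", "photos"], 5)
    market.register_demand("bob", ["alps", "photos"], 10)
    print(list(market.match()))  # [('bob', 'alice', 5)]

In a real system the price would presumably be negotiated rather than fixed, and the reputation system would sit on top, but the matching step is the core of it.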

While a demand-based system would not guarantee an efficient market, it could work alongside and complement the existing Web 2.0 to increase the proportion of high quality content on Web 2.0 and reduce search costs (because less low value content would be published and/or it would incentivise the tagging of content to make it easier to match with demand).

[Question: this might work for some things, but how would it work for other things where there isn't an existing known demand, like new music? Fans of well known bands may be willing to pay for them to produce more songs, but what about unknowns? I suppose this is where something like Creative Commons free samples comes into it, to build up a fan base. But then persuading people to thereafter pay for new material - is that feasible? And if even the hugely popular bestselling Stephen King couldn't get fans to pay enough for him to finish The Plant serial, what hope for others? I'd love to hear the authors' thoughts on the "Give recordings away, make money on touring and merchandising" approach, or any alternatives. Songwriting to order, back to the days of commissioned composers, Mozart etc?]

Aside: a demand-driven approach for NGOs?

The "demand-driven", "first buyer pays" (or perhaps, rather, "first request is paid for") concept seems to me to very much tie in with the approach Alan Mitchell wrote about in FT 23 March 2008 of The Key, an experimental problem-solving community formed by 2 UK government-sponsored agencies (including the Training and Development Agency for Schools) and provided by lifestyle and concierge / management services company Ten UK, which "tries to combine human beings’ ability to understand significance and meaning with the efficiencies of new technologies" in order to build up a very focused and relevant knowledge base for a particular community. It started offline, but is now moving to the Web.

The idea is to provide information within a tightly focused community facing similar problems (in this case head teachers and other school leaders) by using individuals’ specific questions to define the content of what's provided (my emphases):

"In The Key’s first phase, school leaders phone or e-mail questions to researchers who find the best possible answer from official sources, experts and published res­earch. The researchers, some of whom are former school leaders, compile a full answer, with references, sources and suggestions for further reading, and tag it for future reference.

At first sight, the model looks economic nonsense. Paying for human beings to research answers to tricky questions from potentially 20,000 school leaders, one by one, would be expensive. But they are all facing similar problems; and the more times a question comes up, the lower the cost per answer. The aim is to manage the resulting information so that each answer adds to an ever-expanding knowledge base. In the first four weeks, half the questions required new research. At three months, nearly 90 per cent could be answered using existing content...

...nine out of 10 users saying it has saved an average of five hours per question. In addition, most say it has improved the way the school works because better decisions are made more quickly..."

In the second phase, previously-answered questions are being made available on the Internet for direct access to the knowledge base. Thereafter, the researchers should only need to answer new questions and update old answers. "The goal is to turn ignorance (individuals’ questions) into a valuable resource".
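The economics behind "the more times a question comes up, the lower the cost per answer" is simple amortisation (my formulation, with made-up numbers):

    \text{cost per answer} = \frac{R + n c}{n} = \frac{R}{n} + c

where R is the one-off cost of researching a question, c the small marginal cost of serving up the stored answer, and n the number of times the question is asked. As n grows, the cost per answer approaches c. E.g. if R = £200 and c = £1, a question asked 40 times costs £200/40 + £1 = £6 per answer.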

While this approach could extend to other fields in both public and private sectors, important issues need to be addressed to make it workable. The service must be sufficiently usable and user-centric, in terms of not just the information content provided but also the structure of the website (e.g. the number and type of questions asked) and the language used (the words and phraseology actually used by school leaders in asking questions, rather than by policymakers or government officials). Cost management is necessary for the community to be economically viable: relative costs could rise too high if the volume of questions falls too much (which means the knowledge base will be less useful and may not be worth the investment), or if the community is too diffuse and not narrowly-focused enough (as then the same questions won't be repeated, again making the investment not worthwhile), or if too many users use phone or e-mail rather than web self-service for their questions (which they will if the search system isn't up to scratch). User confidence and trust in the answers also need to be maintained: trust that the answers will remain unbiased (in this particular case, will politicians and civil servants stay out of the way and let policy implementers rather than policymakers set the information agenda?!) and also, I think, that the answers will be accurate (confidence that the researchers have suitable expertise in the field, both theoretical and practical).

So it seems a possible business model would be: find a narrowly-focused field or community (a narrow focus seems to be good for blog SEO too, see item 2.7 of that post), set up as an intermediary for that community (finding and paying researchers with appropriate expertise to answer questions - as employees? or, more likely, independent contractors), charge for the service (perhaps just for running it, and/or a cut for each answer), but critically make sure the service and website are sufficiently well structured and usable for users to find what they're looking for efficiently and quickly. However, I think this approach would be less sustainable in an area where new questions keep cropping up (e.g. software support where the software changes very often? Indeed, isn't this approach much the same as that already used within corporations that provide support for their products or services, to build up their FAQs / customer knowledge base?).

Like me, you may also wonder whether The Key's approach is more useful as a way for a narrowly-focused, non-profitmaking community - like NGOs operating in niche or specialist fields where things don't change too rapidly - to club together (perhaps with some government support) to build up a communal knowledge base cost-effectively, than as a way to make profits (though it does seem to be a way for the intermediary, in this case Ten UK, to make money, especially given that it's government-financed or subsidised!). Perhaps the community could then widen the user base and charge others outside it a subscription fee to access the knowledge base once it's been built up?

Back to the wider issue of the suggested demand-driven Web 2.0: I think usability, cost management and maintaining user trust will be as important for a demand-driven Web 2.0 system as for The Key. It will be interesting to see to what extent a demand-driven system arises and proves profitable. And I'm still not sure what system could be devised that would adequately reward and incentivise the creators of Web 2.0 content, rather than chiefly the intermediary / middleman - to me, as a writer and musician, a system that rewards producers more than distributors or middlemen would be best for the sake of the creative industries and innovation. I do wonder if a demand-driven web would be more profitable for the intermediary matchmaker than for the content providers...?