
Monday 4 August 2008

Diginomics: why DRM promotes piracy, & why monetizing Web 2.0 is tough, etc







By applying principles of economics and management theory to analyse the nature and characteristics of digital goods and Web 2.0, economist Thierry Rayna (Thierry Rayna's papers / articles) and management scientist Ludmila Striukova (some papers) reach some pertinent conclusions and make some interesting and thought-provoking suggestions on the economics of the digital world, specifically on:
  • Piracy and DRM - why people pirate digital goods, why current technical implementations of digital rights management don't work (in fact are counter-productive) and are bad for society and consumers, and what kind of DRM might be effective while striking a fairer balance, and

  • Monetising Web 2.0 - why it's hard for content producers / providers / publishers to make money out of Web 2.0 under traditional business models, and what sort of new business model might work to monetise Web 2.0.
I first heard their ideas at "Diginomics" (the economics of digital technologies and Web 2.0), chaired by Thierry with a panel including Ludmila, at The Wealth of Networks conference 2008 (see that post for a summary and the MP3 recording of that session). It very much typified the "eureka scenario" for me, as the economics / management perspective was totally new to me, though no doubt not to others. There were lots of "lightbulb" moments when I was going, "Aha! That makes sense! That explains it!".

I later read their detailed papers with the same sense of excitement and dawning understanding. In this post I want to share, and raise some comments and queries on, their key theses - based on their session and the papers Thierry kindly made available to attendees (referred to below as the Web 2.0, White Knight and Privacy papers).
Blogging. To whet the appetite of bloggers, a couple of points struck me from the Web 2.0 paper (pgs. 5 & 12, my emphasis):
  • "Basically, in order to create a successful blog, the time spent on marketing is expected to be, at least, the same as time creating the blog." [Well there goes ACE then...]
  • Is a successful blogger one who understands technology and keeps up with change, or one who provides quality content? "The current incentive system, even for professional bloggers, is not so much about the quality of the content, but instead about the ability of the blogger to ‘play the rules of the game’ and make their blog more known than others." [Ditto!]
[In this context, they cited a 2006 book by blogger and internet marketing consultant Chris Garrett called Killer Flagship Content. However, I'd have liked to hear more thoughts from them personally on the economics / management principles behind successfully monetising blogs, and why, in economics terms, it's more important to increase fame and popularity than quality - but perhaps that will be the subject of a future paper.]

I'm doing this post in 3 parts:
  1. A summary of the nature of digital goods

  2. What that means in relation to DRM, and

  3. The nature and features of Web 2.0 and their implications for the monetization of Web 2.0.

1. Digital goods - their nature and special features

Digital goods - e.g. music, movies, computer games, software or documents transformed into binary code, like MP3s, Flash movies, DVDs and PDFs - have three special features:
  • they're infinitely durable - they don't wear out, and perfect copies can be made easily and cheaply in all kinds of formats;
  • they're "public goods" - different copies can be used by different consumers at the same time ("non-rival"), and producers can't stop non-paying consumers from getting and using their own copies ("non-excludable"); and
  • they're "experience goods" - you can't assess their value to you until you've experienced them, e.g. heard the song or seen the movie, so you're not willing to pay before trying them via "sampling", free trials or tasters - yet suppliers rarely provide adequate samples.

As digital goods are effectively public goods, piracy is in fact economically rational behaviour on the part of consumers - with the motivation often being to sample, not just to free ride - and it reduces demand for legitimate versions; on top of that, cheap perfect replicability drives the market price of digital goods towards zero. All this hits the ability of creators / producers / providers to recover their initial production costs, let alone make profits on digital goods, so they're less willing to produce them, resulting in their "under-provision" or under-supply, which is bad for society - unless there is appropriate public intervention or an adequate protection system.

The usual solution to this kind of problem is government intervention in the form of intellectual property rights (IPRs) in order to incentivise creators. Although IPRs in the form of patents may be sufficient in the case of inventions (or maybe not?), IPRs in the form of copyright don't have much effect on digital goods in practice, because piracy is so widespread.

2. DRM - why it doesn't work and promotes piracy, & what DRM systems could work

DRM protection is the use of technical measures built into the product (e.g. a media file) from the outset, to enable content creators / publishers to control or restrict access to digital media and/or its distribution, sharing, copying or conversion into other formats, as well as to prevent its simultaneous use by more than one person. DRM is commonly implemented by encrypting the digital good and embedding in it DRM tags with information on the owner / device and their rights of usage. The good then can't be consumed until it's activated by contacting the producer (or rather its server), which identifies and authenticates the consumer (checking that the owner / device info matches its records), checks the number of times the good has been used, etc, and, if all is as required, authorises the use by providing a decryption key. (The producer obviously won't activate or authorise copies it finds are pirated.) The most widespread DRM system currently is FairPlay, used by Apple to control and restrict use of both audio and video content in its products and services: iPods, iTunes and the iTunes Store.
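To make the activation handshake concrete, here's a minimal sketch in Python of the kind of check described above - purely my illustration, with hypothetical field names and an in-memory licence table standing in for the producer's server:

```python
# Hypothetical sketch of a DRM activation handshake: the player sends the
# embedded DRM tag to the producer's server, which authenticates the
# owner / device, checks the usage count, and returns a decryption key
# only if everything matches its records.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrmTag:            # metadata embedded in the encrypted file
    unit_id: str         # identifies this copy of the good
    owner_id: str        # the purchaser the copy is licensed to
    device_id: str       # the device authorised to play it

# illustrative server-side records (in reality, a licensing database)
LICENCES = {"unit-001": {"owner_id": "alice", "device_id": "ipod-42",
                         "max_plays": 5, "plays": 0, "pirated": False}}

def activate(tag: DrmTag) -> Optional[str]:
    """Return a decryption key if the copy is legitimate, else None."""
    record = LICENCES.get(tag.unit_id)
    if record is None or record["pirated"]:
        return None      # unknown or known-pirated copy: refuse activation
    if (record["owner_id"], record["device_id"]) != (tag.owner_id, tag.device_id):
        return None      # owner / device don't match the producer's records
    if record["plays"] >= record["max_plays"]:
        return None      # usage limit exhausted
    record["plays"] += 1
    return "decryption-key-for-" + tag.unit_id   # stand-in for the real key

print(activate(DrmTag("unit-001", "alice", "ipod-42")))  # key issued
print(activate(DrmTag("unit-001", "bob", "laptop-7")))   # None: refused
```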

DRM aims to reduce piracy of digital goods by restricting who can access them and how, controlling how many times they can be used or consumed (played), and/or restricting their lifespan to a fixed period, which anyway can't extend beyond the life of the hardware device authorised to play them. (DRM could theoretically be used to provide better, more tailored sampling, but it isn't.)

In a way, DRM should be more effective than IPRs because you can only try to enforce copyright after it's been infringed (punitive) and have to bear the enforcement costs (litigation costs etc), whereas DRM aims to stop infringement from happening (preventative).

But in fact, DRM doesn't work, and is bad for consumers and society to boot.

DRM will work only if consumers can be persuaded to buy DRM-protected goods instead of getting them from pirates or filesharing sites. But people aren't switching to buying DRM'd goods. Why?

Because unprotected versions are still easily available (all it takes is one unprotected leaked copy for pirated copies to spread all over the Net), and DRM'd goods are less valuable to consumers than unprotected goods (legal or illegal): you can't lend them, back them up or resell them secondhand; transferring them to other formats, media or devices is restricted or impossible; and if bought online they're often of lower quality (compressed) compared with, say, CDs or DVDs.

Rational consumers aren't likely to buy DRM-protected goods when they can get, with wider availability:
  • (for only a slightly higher price) legal, fully-featured unrestricted non-DRM goods (e.g. CDs), or
  • (for free or negligible cost) pirated, full-featured unrestricted non-DRM goods (ripped files) obtained through illegal cracking, burning iTunes downloads to CD then ripping from that, or through the analogue hole (if you can hear or see it you can copy it).
Even those willing to buy DRM'd goods won't pay as high a price for them as for unprotected goods. And, in fact, DRM-protected goods usually are priced more cheaply than the unprotected version - but generally (and deliberately) not cheaply enough to persuade people who buy legal unprotected versions like CDs or DVDs to switch. Suppliers feature-strip DRM'd versions precisely so that the profitable class of consumer who is willing to buy more expensive unprotected versions won't want to switch to "value-subtracted" DRM versions, and will keep on buying CDs and DVDs. So in practice DRM may induce some of the people who were pirating digital goods, or not consuming them at all, to buy DRM versions - and at least suppliers will get a bit of money from that class of consumer where they weren't getting any before - but that's all it achieves.

As the authors put it (White Knight paper, pg.12):
"Thus, consumers are facing a dilemma. If they want to access digital goods online, they can either choose DRM protected files, which are legal, but have a low value due to the restrictions of DRM, and a comparatively high price; or they can download pirated digital goods, which are illegal, but have no restrictions, and are available at no cost. It can even be argued that law-abiding consumers are, in a way, “punished”: although they do pay for their digital goods, the digital goods they obtain have fewer features and involve tedious authorisation process. In contrast, consumers who decide to pirate obtain full featured digital goods, for free."

Consumers are generally more willing to risk buying "experience goods" (whose value is uncertain before they've been consumed) if the goods are durable and can be re-sold in the secondary market to recoup some of the initial purchase price should they not like them - e.g. printed books. But without the ability to on-sell purchased DRM-protected digital goods (because of the DRM), consumers will be even more reluctant to buy them without having tried them first. And, in fact, a major reason consumers download pirated digital goods from file sharing sites is in order to sample or try them. So unless producers come up with a better sampling strategy (at the moment it's very "one size fits all"), consumers will be even more likely to pirate. [Note: I'd be interested to see examples of exactly what kinds of alternative sampling strategies the authors have in mind as better options.]

The key point: consumers haven't been given enough incentives to buy DRM protected goods in preference to unprotected goods. On the contrary, paradoxically the restrictions and reduced features imposed by DRM, the increased risk to consumers due to the lack of a secondhand market, and the inadequacy of the samples or free trials currently provided, together all mean that DRM-protected goods are much less valuable to the consumer than unprotected digital goods (whether legal or illegal) - and the introduction of DRM has thereby increased piracy. DRM hasn't made it harder for consumers to access pirated digital goods; it's just made legal DRM-protected goods less attractive to consumers than pirated goods, and it's also made legitimate buyers more willing to share their purchased digital goods.

DRM is also bad for society because the above factors mean that consumers avoid buying DRM-protected digital goods, leading to their "under-utilisation" - which like "underprovision" is bad for social welfare.

Furthermore, DRM systems are usually incompatible with each other and non-interoperable, partly because of the lack of common standards for DRM, so there's a risk of anti-competitive, even monopolistic, behaviour on the part of suppliers, and consumers have to bear switching costs (which are generally more important in networked than non-networked environments) if they move systems - which adds to their reluctance to buy DRM-protected goods, as they may not want to be "locked in" to one system. So from a public policy viewpoint, as a minimum pre-requisite for DRM to be considered socially beneficial there would need to be a universal DRM technology based on open standards.

However, even if DRM systems were standardised, the benefits of DRM for society are still questionable. It aims to prevent piracy, but piracy can never be prevented while non-protected digital goods are available - even a single unprotected copy is enough to start "a stream of piracy".

The authors conclude that current DRM systems are wasteful and socially undesirable, decreasing the welfare of society as a whole, because:
  • DRM doesn't encourage consumers to buy DRM-protected goods and doesn't stop piracy, so pirating consumers are in the same position as before the introduction of DRM while law-abiding consumers are worse off due to the lower value of DRM'd goods, and
  • DRM is costly - to produce deliberately value-subtracted goods involves additional costs (it would actually be cheaper for firms to distribute full featured legal goods than to strip them down), and there are also costs to develop and continually upgrade DRM systems to counter new cracks, which together outweigh the benefits to suppliers of the extra money they might get from persuading pirating consumers or non-consumers to buy DRM'd goods (recall that they're unlikely to get buyers of more expensive unprotected goods to switch, and indeed they don't want to).
Also, DRM systems make anonymity of consumption impossible (since they work by identifying users), so they have privacy implications, which is another social concern - the collection of information about consumers and their consumption activities, often without their knowledge. These concerns may further deter consumers from buying DRM-restricted products. (See the Privacy paper for definitions of privacy; the authors settled on: "Privacy generally guarantees that personal information, which is not in the public domain, is not released without authorisation.")

As the authors say (White Knight paper, pg.17):
"Instead of stripping digital goods of their distinctive positive features, firms using DRM should instead increase the value of protected digital goods. So far, law abiding consumers are punished for their honesty: the digital goods they pay for have less features than pirated digital goods. Such consumers should, on the contrary be rewarded. It is clear when examining the current DRM policies used by the firms that they do not use DRM to its full potential, but merely as a way to capture additional surplus from honest consumers, who end up paying for pirating consumers. DRM is a very powerful tool, and it could enable firms to achieve near-first degree price discrimination [i.e. charging individual consumers differently, selling at a higher price to someone who's willing to pay more]. But this would certainly require a complete rethinking of firms marketing and pricing strategies."

Is there a form of DRM which would reduce piracy while protecting privacy?

DRM could theoretically be designed to collect enough information to allow first degree price discrimination (charging different individual customers different prices depending on how much the individual is prepared to pay), to make more money for suppliers. If DRM tracks every consumption of a particular good (e.g. each time you play a song), they can work out the value of the good (or type of good) to the individual consumer and charge them accordingly (Privacy paper pg. 6): "For example, consumers who listen to a certain group/artist on a regular basis could be charged a standard price when a new album is released, whereas those who are not familiar with this group/artist could be offered a discount to encourage the purchase."
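To illustrate the idea in that quote, here's a minimal sketch in Python - my own illustration, with invented thresholds and prices - of how tracked play counts could drive a personalised quote for a new release:

```python
# Hypothetical sketch of usage-based price discrimination: infer from past
# play counts whether the listener is a regular of the artist, and quote a
# standard price if so, or a discount to encourage an unfamiliar listener.
STANDARD_PRICE = 0.80        # assumed market price per track, in pounds
DISCOUNT_PRICE = 0.40        # assumed introductory discount
FAMILIARITY_THRESHOLD = 20   # plays needed to count as a regular listener

def quote_price(play_history, artist):
    """Return a personalised price for the artist's new release."""
    plays = play_history.get(artist, 0)
    return STANDARD_PRICE if plays >= FAMILIARITY_THRESHOLD else DISCOUNT_PRICE

history = {"Artist A": 150, "Artist B": 2}
print(quote_price(history, "Artist A"))  # 0.8 - regular listener, standard price
print(quote_price(history, "Artist B"))  # 0.4 - unfamiliar, offered a discount
```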

However, current DRM systems are designed to collect relatively little user information. First degree price discrimination isn't commonly used for digital goods because it's unprofitable - as long as there are alternative sources for digital goods, the consumer would switch to a different supplier for goods they value more, buying only goods they value less (and which are therefore priced lower, e.g. discounted) from the discriminating supplier, so the discriminating seller would end up making less and less money. Also, of course, consumers are unlikely to be willing to disclose enough information to allow firms to charge them more for the products they like! If DRM attempted to track this information, it would further reduce the demand for DRM-protected goods and increase demand for unprotected goods.

The authors suggest the possibility of what they call a "mutually advantageous disclosure" or "rewarded disclosure" DRM system: firms would pay (share with?) consumers a certain portion of their higher profits (higher due to being able to apply first degree discrimination), in return for consumers disclosing the extra information, and the result would be better for consumers, firms and society (including allowing firms to make enough money to recover their initial sunk costs i.e. fixed costs of production). For the maths behind this idea, which I won't even try to go into, see their model of the demand function on pages 7-8 of the Privacy paper and their graph - quantity on the x axis, price on the y axis.

The authors note that price discrimination would be easier for repeat consumption products (music, software, games) or products supplied in parts (TV show series / serials) than for films or books. Also, consumers who value privacy intrinsically may still not be willing to disclose more personal information unless the reward is still higher - even so, the authors think the potential gains would make it worthwhile for suppliers to offer the higher reward.

[Personally, especially as I'm no expert in economics, I would find it helpful to see concrete hypothetical examples, with figures, of precisely how all this might work. What extra personal information would be given, precisely? How would suppliers calculate exactly what amount should be "given back" to which individual? Would each person get the same proportion, or would different individuals be rewarded differently? I think a major issue will be consumer trust, and transparency on the part of providers. Can consumers trust that suppliers, having been given enough information to make even more money from them, will actually then pay the due reward over to consumers? How will an individual know if the amount of "reward" rebated to them is the correct fair amount? And how do they ensure they'll receive the right amount?]

Another interesting idea is for an alternative type of DRM. Current DRM systems work by authenticating users and controlling lifespan. The authors suggest it's possible to design what they call a "rivalness-based DRM system" that ensures each unit of the digital good (e.g. a media file) can only be used by one consumer at one time - i.e. by:
  • identifying each unit of the good (e.g. through a unique ID code or serial number for each unit), rather than identifying the individual customer, and
  • contacting a central server before each consumption to check that the unit is not already being used.
The key point: any number of copies of a unit can be made, but if someone is already consuming a copy of that unit (e.g. playing a music or movie file), the server won't let anyone else play any other copy of the same unit. Owners of other units of that product can however play their units independently.

Example: say I record a death metal version of Greensleeves. With a rivalness-based DRM system, unit 1 of my recording is given the unique serial / ID no. of 001. Unit 2 bears unique serial no. 002. And so on. My mum, who buys unit 1 (001), can make as many copies of unit 1 as she likes, and store copies on e.g. her home computer, work computer, portable MP3 player, car MP3 player etc - but only one of those copies can be played at a time. As long as it's still playing on her home computer, the copy on her iPod (or car etc) can't be played. However, playing unit 001 won't stop the owner of 002 from playing their unit, because it has a different ID number (even though it's the same recording of the same song). So my best friend, who bought unit 2 ID no. 002 of my recording (with only a little arm twisting), can play her unit 2 at the same time as my mum is playing her unit 1. Different units can be used at the same time; but different copies of the same unit can't be.
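A minimal sketch of that rivalness check, in Python, with an in-memory set standing in for the central server's state (the names are mine, purely illustrative):

```python
# Sketch of rivalness-based DRM: any number of copies of a unit may exist,
# but the server refuses playback while another copy of that unit is playing.
in_use = set()   # unit IDs currently being consumed (central server state)

def request_play(unit_id):
    """Allow playback only if no other copy of this unit is already playing."""
    if unit_id in in_use:
        return False           # another copy of this unit is playing: refuse
    in_use.add(unit_id)
    return True

def finish_play(unit_id):
    in_use.discard(unit_id)    # the unit becomes available again

print(request_play("001"))  # True  - mum starts playing her unit 001
print(request_play("001"))  # False - her iPod copy of 001 is refused meanwhile
print(request_play("002"))  # True  - my friend's unit 002 plays independently
```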

This method has advantages for both suppliers and buyers:
  • It allows copying to different devices and for backup; only one copy can be played at a time, but other copies can be made and used.
  • It identifies the unit, not the consumer, and tracks usage of the unit, not the individual consumer - thus preserving personal privacy and anonymity.
  • It reduces consumers' willingness to share copies and so reduces the dissemination of illegal copies - people are happy to share music and video files because sharing doesn't deprive them of the use of their own copy in future. But if only one person can play a copy of any one unit at a time, the original owner / buyer of that unit won't be so willing to let other people have copies, because if someone else happens to be already playing a copy of the unit at a time when the original owner wants to play it, the original owner won't be able to play it; and the more people that have a copy of a unit, the more likely it is that someone else will be already using it when the owner (or anyone else) wants to. So, legitimate owners have good reasons not to share copies of their files (just as they'd be reluctant to lend someone else their car, or garden fork, if they know they're going to need to use it themselves). This also means that there will be fewer illegal copies around, as most copies would remain in the hands of only their original legal buyers, so if a crack is discovered suppliers would have more time to update their DRM system to counter it before illegal copies became too widespread.
  • It re-aligns the interests of suppliers and consumers by transferring the burden of piracy on to consumers. Currently consumers don't suffer from piracy (except in the indirect, weak sense of fewer digital goods being created generally). In fact, they benefit from it, because piracy results in more goods they can get for free. With rivalness-based DRM, it's not just suppliers who suffer from piracy (due to decreased sales) - it's consumers too (due to inability to consume the goods they've bought if too many copies of their unit are in use).
  • It decreases the value of pirated digital goods and piracy generally - there's little incentive for consumers to acquire or disseminate pirated copies, because they can't use their copy if someone else is already using it, and the more copies that are spread around, the more likely it is that someone else is already using it, so in time it would become totally unusable. A pirating consumer currently bears a (small) risk of being sued for copyright breach; with a rivalness-based DRM system, the pirating consumer would face a (more likely, and much larger) risk of not being able to use the pirated good, a risk that would increase as more people pirated it.
There's one obvious difficulty. A rivalness-based DRM system requires collecting usage information in real time, for all copies of particular units - but how can it do that if the media player or computer isn't connected to the Net at the time? The authors suggest collecting and storing the usage information and uploading it to the server only as and when the device is connected to the internet (or to a connected computer), and if it's found that more than one copy has been used on more than one machine at the same time, it will trigger a "punishment mechanism" stopping all copies of that unit from being used for a certain period of time (or other units, in the case of goods consumed only once). [Is this a loophole? It may be possible to ensure certain machines are never connected to the Net or to a computer that's connected to the Net; if copies are played only on those machines, their use can never be detected and they can never be "punished". If the device is a multimedia phone, checking existing usage over the Net would also incur data charges for the phone owner, perhaps behind the scenes, which won't endear the system to the owner unless they have a flat rate unlimited data plan; and if it's an N95 smartphone like mine, the owner can just deny the player permission to access the Net! I suppose the system could be set up so that if a unit's usage is not checked at least once every say 3 or 5 plays on the same device then it stops working, in order to force the owner to connect it.]
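Here's a rough Python sketch of how that deferred check-and-punish step might look - my own reading of the authors' suggestion, with an arbitrary one-week punishment period and illustrative names:

```python
# Devices log plays offline; when next connected the log is uploaded and the
# server blocks a unit for a "punishment" period if it finds that copies of
# the same unit were played on different devices at overlapping times.
from datetime import datetime, timedelta

play_log = []          # (unit_id, device_id, start, end) records on the server
blocked_until = {}     # unit_id -> time until which playback is refused
PUNISHMENT = timedelta(days=7)   # assumed punishment period

def upload_log(entries):
    """Merge a device's offline play log and punish concurrently-used units."""
    for unit_id, device_id, start, end in entries:
        for u, d, s, e in play_log:
            # same unit, different device, overlapping times => concurrent use
            if u == unit_id and d != device_id and start < e and s < end:
                blocked_until[unit_id] = datetime.now() + PUNISHMENT
        play_log.append((unit_id, device_id, start, end))

def can_play(unit_id):
    return datetime.now() >= blocked_until.get(unit_id, datetime.min)
```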

Their suggested alternative system is certainly ingenious and it will be interesting to see if anyone can create such a system and get it to work; personally, being keen on my privacy, I like the sound of this option much better than the "Keep existing DRM but pay consumers to disclose more personal information" option.

Now, on to how to monetize Web 2.0.

3. Monetizing Web 2.0

The really major issues for society which the authors raise on Web 2.0 (apart from the old chestnut of copyright) are:
  • How do you fund the production of original content on Web 2.0? The most successful strategy so far seems to be to provide free content whose production is funded through selling ads.

  • How do you incentivise the production of quality content? On Web 2.0, more often than not the monetary rewards go, not to the producers or creators of that content, but to intermediaries (with the exception of ad-funded blogs and the like).
Incentives to Web 2.0 content producers are currently mainly provided not through price, as in a traditional free market, but through advertisement income. The problem this creates is that the relationship between ads income and value is loose, so market distortions and inefficiencies are likely to result. [Here I'd have liked to see reference to studies on the links between ad income and actual market / social value, or other explanations why they're not closely coupled enough. Or maybe that's a principle that's just obvious to economists? Especially as the authors acknowledged that advertising has still proved to be the most successful model so far, I'd have liked more on why they think it doesn't work, and on why and in what way advertisement income and market value aren't tightly coupled enough.]

Furthermore, most Web 2.0 content creators are mainly incentivised by non-financial considerations (reputation, altruism etc) which don't tie in with the social value of what they produce.

Web 2.0 doesn't operate like a simple traditional market, because of two key factors: the economic nature of digital goods, explained above (which means the main issue with monetising Web 2.0 is the large volume of freeriding, i.e. piracy), and transaction costs coupled with the low value of most Web 2.0 content. So, what are "transaction costs"?

Transaction costs and search costs

In economics terms, transaction costs (the costs incurred in making an economic exchange) will include search costs (the costs of checking what's the best or most suitable product, where is it available, which seller offers the lowest price etc).

There are 2 types of search costs: external search costs (the monetary cost and opportunity cost of the time taken searching), which depend on technology etc and are usually the same for everyone; and internal search costs (cognitive costs), which vary with the individual consumer (the thinking time / load needed to formulate search queries, analyse results in order to make decisions, etc).

Search engines like Google have reduced external search costs effectively to zero, but cognitive costs remain, and indeed will grow with the amount of information to be processed. Search costs are further increased by the availability of more and more content generally (because Web 2.0 has facilitated content creation by the masses), more and more of which is relatively invisible / inaccessible, i.e. "private" content which is not indexed by search engines (whose existence also increases external search costs), and more and more of which is multimedia content like videos and images - not accessible to search engines unless the content producer tags it.

Web 2.0 - the costs, incentives and inefficiencies

It costs content creators / publishers time / money to tag their content accurately, and producers by and large don't directly benefit from their contributions, so why should they expend even more time / effort on tagging? So, they don't.

Now if I upload a video without tagging it, it costs other people (lots of people, potentially the whole connected world) extra time to try to find my video via current search engines (or, perhaps more likely, to find other stuff that they're really looking for amongst the extra "noise" added by my content!). The publisher fails to tag, yet it's society, not the publisher, who has to bear the extra (retrieval) costs. It costs society more than it cost the creator. (That's what economists call a "negative externality").

The result, as with other negative externalities: too much untagged multimedia content is produced by publishers, increasing search costs more and more, which, as the authors put it, is socially inefficient. Less content, but fully tagged, would actually be more beneficial to society.

Most contributors don't benefit directly (or indeed financially) from their own produced content. So, why do people contribute to Web 2.0? Incentives are similar to those with open source software: the immediate satisfaction of bug fixing/producing content, delayed benefits (reputation, ego gratification, career improvement), altruism and community identification. These incentives, along with the desire to publish accurate facts particularly in areas of personal interest (Wikipedia contributors), and the attractions of receiving positive (or indeed any) feedback, are sufficient to motivate production of at least some content.

Also, although it's rare, some professional bloggers do make money, a few of them good money, through ads, affiliate commissions, product sales, donations etc. However, substantial investment is needed first to generate reputation (participation in forums, online communities, social networking sites) and traffic (marketing the blog: knowledge of blog publishing software, feed aggregators, blog carnivals, SEO, tagging etc). Indeed, to create a successful blog, as much time has to be spent on marketing as creating content.

The main issue is that, presumably with the exception of professional blogs (which are rare), the (often subjective) incentives for Web 2.0 contributors - the private benefits perceived by the contributors - don't match up with the social value, the actual benefit to society, of their contributions: valuable contributions benefit society more than they reward the contributor. (In economics terms, there's a "positive externality".)

Although there are some incentives to produce valuable high quality Web 2.0 content, unfortunately there's no Web 2.0 mechanism to systematically ensure that the incentives for producers / creators match up closely enough with the social value of their contributions. (Despite the long tail concept, which might suggest that all content has some social value, research shows that on e.g. YouTube, 10% of videos account for 90% of the views.)

Web 2.0 is unusual in being the source of both negative and positive externalities at the same time. The production of more Web 2.0 content costs society more than it costs the creator (the negative externality); and high-value content benefits society more than it rewards its creator (the positive externality) (Web 2.0 paper pg. 9): "Unfortunately, this means that low (social) value content is very likely to be over-produced, while high (social) value content is, probably, produced in insufficient quantity, thereby leading to an inefficient outcome".

The large supply of content reduces the market value of all online content generally, and thereby reduces the supply of good quality content. Too much free (or low cost) low quality content competing for consumers' time and attention crowds out the good quality content (in economics, the market for "lemons"). Where there are information asymmetries between buyers and sellers, e.g. regarding the quality of Web 2.0 content (producers know more about the content than consumers do; consumers won't know the quality of the content till after they've consumed it), what's known as "adverse selection" applies: "bad" products are more likely to be selected than good ones.

Where there's adverse selection, consumers' willingness to pay is usually a weighted average of the quality present in the market. So if there's proportionally a lot of low quality content, consumers' willingness to pay becomes close to zero - so much so that higher quality content may be driven out of the market altogether.
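A stylised way to see this - my own formalisation, not taken from the papers: suppose a fraction q of content is high quality, worth vH to the consumer, and the rest is low quality, worth vL (close to zero). An uninformed consumer's willingness to pay is then roughly the weighted average:

```latex
\mathrm{WTP} = q\, v_H + (1 - q)\, v_L
```

So with, say, vH = 5, vL = 0 and only 10% high quality content (q = 0.1), willingness to pay is about 0.5 - which may already be below what it costs to produce the high quality content, driving it out of the market.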

To counter adverse selection, producers normally use "signalling" to indicate the quality of their product objectively and clearly. But signalling is unlikely to be effective in Web 2.0:
  • the usual signalling strategies (guarantees, money back etc) don't generally apply to Web 2.0 content;
  • the cost (especially for bloggers) of signalling may not be thought worthwhile;
  • the value of content to the consumer is often subjective rather than objective in Web 2.0, so it's hard to arrive at universal signalling criteria;
  • the multitude of amateur producers, lacking the market knowledge / experience needed to assess the market or social value of their product, may overestimate the quality of their own content and decide to signal, while producers of higher quality content may not signal; and
  • professional bloggers need to spend as much time promoting their blogs as writing them, as the current incentive system is more about the blogger's ability to play the rules of the game and make their blog better known than others than it is about the quality of their content.
[Is the point here that signalling doesn't work with blogs because it's fame rather than quality which matters there? Here I'd have liked to see more evidence / research and economics explanations as to why that's the case.]

In summary, Web 2.0 is largely non-monetary and suffers from economic inefficiencies like search costs, crowding out and adverse selection: the costs and benefits of producing extra content, to the creator and to society, don't match up, so too much low quality content is produced, which drowns out / drives out the good content.

Possible business models for Web 2.0

The authors, as previously mentioned, feel that the "free, funded by advertisements" business model often used in Web 2.0 is inefficient, in terms of the market and society, because there isn't a close enough link between ad income and the market / social value of the Web 2.0 content produced. What alternative models might be used?

Pay per use.
Monetizing Web 2.0 by switching from a free to "pay per use" or "pay per access" model would involve large transaction costs as there would be an enormous number of transactions owing to the huge amount of content and large numbers of producers.

Micropayments are the most likely way due to the relatively low value of most content. But micropayments won't be worthwhile, and so won't be adopted, unless the transaction costs are low enough - and that's not just the monetary costs e.g. payment systems fees, but also opportunity costs (time spent on and leading up to the purchase) and cognitive costs (again on and leading up to that particular purchase).

For consumers, "pay per use" costs them in terms not only of money but also time and cognitive costs, e.g. checking out options fully to ensure they're forking out their hard-earned (even micro) cash on the best value product for them. And with experience goods, even extensive research may not help - only actual consumption enables proper evaluation. Also, there are the costs of coordinating with the supplier, entering into a contractual relationship, and the potential costs of dealing with any problems that might arise (which people would shrug off if it was free). For relatively low value goods, would the extra "hassle factors" (including search costs) be worth it to consumers, even if the actual monetary spend is very small?
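One way to put the consumer's side of this formally - my own stylisation, not the authors': a rational consumer only completes a pay-per-use purchase when the expected value v of the item covers the price p plus all the transaction costs around it:

```latex
v \;\ge\; p + c_{\mathrm{fee}} + c_{\mathrm{time}} + c_{\mathrm{cog}}
```

With micropayments p may be only a few pence, so even small payment fees, time costs or cognitive costs on the right-hand side are enough to kill the transaction.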

For producers / providers, even reducing transaction costs for pay per use might not make pay per use profitable enough - with information goods the best route to profit has normally been to bundle information goods and/or their use, so much so that different independent producers get together to package all their goods together into one bundle sold in one transaction (e.g. MacHeist), and thereby lower transaction costs.

All of which is why the authors think pay per use doesn't seem to be a viable way to monetise Web 2.0.

Subscription fees. A subscription route may be better (e.g. to a feed), as it would reduce the volume of transactions (subscription once per year instead of per use). But it won't work for all Web 2.0 content - e.g. irregularly published content is hard to price, and there would still be lots of transactions given the large number of content producers. A subscription via an intermediary like YouTube, probably flat fee to reduce transaction costs, might work - they could then further divvy up fees amongst creators based on e.g. usage. Yet no attempts along those lines have so far worked out. [Note: I'd have liked to see examples here, and suggestions of reasons based on principles of economics as to why they've not worked out and aren't considered viable.]

In Web 2.0, participants are both consumers and providers, so exchanges between them are many and frequent (which according to transaction cost theory is why firms and corporations arose: there's a point where transaction costs become too high for exchanges between individuals in a market environment, resulting in the creation of "non-market entities" like companies). Charging for so many exchanges would be very costly, particularly in terms of time and cognitive resources, so it's not surprising that a very collaborative environment like Web 2.0 has developed into an environment which is primarily non-monetary. But will it always stay that way?

Another way to monetise Web 2.0 - a "demand-driven" Web?

The authors believe the key challenge to monetising Web 2.0 is: how to better align incentives for producers with social value (i.e. how to incentivise providers to produce higher quality content), in light of the economic characteristics of digital goods as public goods (which means piracy can't in practice be prevented, so reproduction / distribution of existing digital goods won't be very profitable), without incurring excessive transaction costs (e.g. how to reduce the volume of transactions).

They suggest one method which would address all 3 of those issues: instigate a demand-driven Web 2.0, instead of or alongside the current supply-driven Web 2.0 - i.e. publish content (whether pre-existing or created to order) only to order.

That should incentivise the production of high quality content as those demanding it would be willing to pay for it, and producers who believe their content has high value would wait for demand before publishing (while those who think their content is relatively low value will still continue to publish).

Access to the first ever unit of any digital good can be fully controlled, so at the time of first publication it can be charged for, but after it's been published copies will become available to lots of people over time, so there's little point in insisting that every copy has to be charged for forever, or trying to restrict access or copying (and a compulsory charge only for the initial publication would also reduce transactions, and therefore transaction costs, compared with pay per use). However, because it will take time for the goods to spread amongst consumers, some people may still be willing to pay for early access to it (as in the case of e.g. breaking news), so it's possible for the producer and first buyers to charge for access in the meantime - which means initial buyers should be willing to pay even more for the first unit, as they can charge for access to their copy in the early days. A new field of research in economics has in fact shown that an efficient competitive market can be achieved with digital goods, even though they're public goods, as long as there is "finite expansibility" (i.e. they don't spread through the economy instantly).
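A stylised way to express that last point - again my own formalisation, not the authors': an early buyer's willingness to pay for the first unit is their own use value v plus the access revenue r_t they expect to earn in each period t of the window T before copies have spread freely, discounted at rate δ:

```latex
\mathrm{WTP}_{\mathrm{first}} = v + \sum_{t=1}^{T} \delta^{t}\, r_{t}
```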

The authors think such a demand-driven Web would involve an intermediary with whom potential providers register existing content on offer (e.g. tagged holiday pics) or their ability to produce content (e.g. coding skills), and potential consumers register their needs and demands. The intermediary's system would match suppliers and buyers, who would agree on price, etc. After supply of the digital good, although the producer retains the copyright in it everyone - buyers too - would be allowed to distribute or resell it. A reputation system could also be used (feedback, ratings etc presumably).
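As a thought experiment, here's a bare-bones Python sketch of such an intermediary - all the names and the tag-overlap matching rule are my own assumptions, not the authors' design:

```python
# Providers register existing content or skills, consumers register demands,
# the system pairs them by shared tags (price would then be agreed between the
# two sides), and a simple ratings dictionary stands in for a reputation system.
offers = []      # (provider, description, tags)
demands = []     # (consumer, wanted_tags)
reputation = {}  # provider -> list of ratings

def register_offer(provider, description, tags):
    offers.append((provider, description, set(tags)))

def register_demand(consumer, wanted_tags):
    demands.append((consumer, set(wanted_tags)))

def match():
    """Pair each registered demand with offers sharing at least one tag."""
    return [(consumer, provider, description)
            for consumer, wanted in demands
            for provider, description, tags in offers
            if wanted & tags]

def rate(provider, score):
    reputation.setdefault(provider, []).append(score)

register_offer("alice", "tagged holiday pics of Reykjavik", ["iceland", "photos"])
register_demand("bob", ["iceland"])
print(match())  # [('bob', 'alice', 'tagged holiday pics of Reykjavik')]
```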

While a demand-based system would not guarantee an efficient market, it could work alongside and complement the existing Web 2.0 to increase the proportion of high quality content on Web 2.0 and reduce search costs (because less low value content would be published and/or it would incentivise the tagging of content to make it easier to match with demand).

[Question: this might work for some things, but how would it work for other things where there isn't an existing known demand, like new music? Fans of well known bands may be willing to pay for them to produce more songs, but what about unknowns? I suppose this is where something like Creative Commons free samples comes into it, to build up a fan base. But then persuading people to thereafter pay for new material - is that feasible? And if even the hugely popular bestselling Stephen King couldn't get fans to pay enough for him to finish The Plant serial, what hope for others? I'd love to hear the authors' thoughts on the "Give recordings away, make money on touring and merchandising" approach, or any alternatives. Songwriting to order, back to the days of commissioned composers, Mozart etc?]

Aside: a demand-driven approach for NGOs?

The "demand-driven", "first buyer pays" (or perhaps, rather, "first request is paid for") concept seems to me to very much tie in with the approach Alan Mitchell wrote about in FT 23 March 2008 of The Key, an experimental problem-solving community formed by 2 UK government-sponsored agencies (including the Training and Development Agency for Schools) and provided by lifestyle and concierge / management services company Ten UK, which "tries to combine human beings’ ability to understand significance and meaning with the efficiencies of new technologies" in order to build up a very focused and relevant knowledge base for a particular community. It started offline, but is now moving to the Web.

The idea is to provide information within a tightly focused community facing similar problems (in this case head teachers and other school leaders) by using individuals’ specific questions to define the content of what's provided (my emphases):

"In The Key’s first phase, school leaders phone or e-mail questions to researchers who find the best possible answer from official sources, experts and published research. The researchers, some of whom are former school leaders, compile a full answer, with references, sources and suggestions for further reading, and tag it for future reference.

At first sight, the model looks economic nonsense. Paying for human beings to research answers to tricky questions from potentially 20,000 school leaders, one by one, would be expensive. But they are all facing similar problems; and the more times a question comes up, the lower the cost per answer. The aim is to manage the resulting information so that each answer adds to an ever-expanding knowledge base. In the first four weeks, half the questions required new research. At three months, nearly 90 per cent could be answered using existing content...

...nine out of 10 users saying it has saved an average of five hours per question. In addition, most say it has improved the way the school works because better decisions are made more quickly..."

In the second phase previously-answered questions are being made available on the Internet for direct access to the knowledge base. Thereafter, the researchers should only need to answer new questions and update old answers. "The goal is to turn ignorance (individuals’ questions) into a valuable resource".

While this approach could extend to other fields in both public and private sectors, important issues need to be addressed to make it workable. The service must be sufficiently usable and user-centric, in terms of not just the information content provided but also the structure of the website (e.g. number and type of questions asked) and the language used (the words and phraseology actually used by school leaders in asking questions, rather than by policymakers or government officials). Cost management is necessary for the community to be economically viable; relative costs could rise too high if the volume of questions falls too much (meaning the knowledge base will be less useful and may not be worth the investment), if the community is too diffuse and not narrowly-focused enough (meaning the same questions will not be repeated, again making the investment not worthwhile), or if too many users use phone or e-mail rather than web self-service for their questions (which they will if the search system isn't up to scratch). User confidence and trust in the answers need to be maintained, i.e. that the answers will remain unbiased (in this particular case, will politicians and civil servants stay out of the way and let policy implementers rather than policymakers set the information agenda?!) and also, I think, that the answers will be accurate (confidence that the researchers have suitable expertise in the field, both theoretical and practical).

So it seems a possible business model would be: find a narrowly-focused field or community (a narrow focus seems to be good for blog SEO too, see item 2.7 of that post), set up as an intermediary for that community (find and pay researchers with appropriate expertise to answer questions, of course - as employees? or more likely independent contractors), charge for the service (perhaps just for running it, and/or a cut for each answer), but critically make sure the service and website are sufficiently well structured and usable for users to find what they are looking for efficiently and quickly. However, I think this approach would be less sustainable in an area where new questions keep cropping up (e.g. software support where the software keeps changing very often? Indeed, isn't this approach much the same as that already in use within corporations that provide support for their products or services, to build up their FAQs / customer knowledge base?).

Like me you may also wonder whether The Key's approach is more useful as a way to help a narrowly-focused, non-profitmaking community, like NGOs operating in certain niche or specialist fields where things don't change too rapidly, to club together (perhaps with some government support) to help build up a communal knowledge base more cost-effectively, than as a way to make profits (though it does seem to be a way for the intermediary, in this case Ten UK, to make money, especially given that it's government-financed or subsidised!). Widen the user base and charge others outside the original community a subscription fee to access it once it's been built up?

Back to the wider issue of the suggested demand-driven Web 2.0: I think usability, cost management and maintaining user trust will be as important for a demand-driven Web 2.0 system as for The Key. It will be interesting to see to what extent a demand-driven system arises and is profitable. And I'm still not sure what system could be devised that would reward and incentivise creators of Web 2.0 content adequately and more than the intermediary / middleman - to me, as a writer and musician, for the sake of the creative industries and innovation, a system that rewards producers more than distributors or middlemen would be the best. I do wonder if a demand-driven web would be more profitable for the intermediary matchmaker than the content providers...?

5 comments:

Anonymous said...

any economics curriculum worth its salt should have a diginomics course!

thanks for this illuminating post (with multiple lightbulb experiences!).... but some of it made me sad .... the victory of marketing over content...

Anonymous said...

Great post. Thanks for sharing

Anonymous said...

In regard to your questions, here are a few quick answers:
- "economics / management principles behind successfully monetising blogs": Although I haven't spent a lot of time thinking about it, it seems to me that a blog can either be a front-end to a complementary business (say, if you are a consultant, a blog can be used to "advertise" your services and skills), in which case the blog is not directly monetized, or the blog can be a source of income in itself. In the latter case, nowadays, the only option, as far as I know, is to monetize it through advertisement. What you want then is as much traffic as possible in order to generate a lot of revenue. While in the first case what you really need is a lot of traffic from potential clients, in the second case you want a lot of traffic in general. This leads us to the second question.

- "Why, in economics terms, it's more important to increase fame and popularity than quality?": Since there is imperfect information and time is scarce (you cannot possibly spend time reading all the blogs), readers tend to focus on blogs they have heard about, i.e. popular ones. In this sense, they seldom go looking for new blogs to read. As a blog writer, time is limited as well. Spending more time on the blog increases quality, but even if your blog is of better quality than any other, nobody will read it if they don't know about it. Consequently, it is often more desirable to spend more time promoting your blog and less improving its quality. Although it should, of course, be of adequate quality, a well promoted blog will be more successful than other blogs of higher quality. That would not happen if readers had perfect information about the quality of all existing blogs. Since this is not the case, increasing popularity might be more important than increasing quality.

- "I'd be interested to see examples of exactly what kinds of alternative sampling strategies the authors have in mind as better options.": This would require knowing a bit more about the tastes of consumers, but even without that, one could imagine that songs that have been played constantly on the radio or on TV do not require a lot of sampling. Probably 10-second samples around the chorus would be enough, since most consumers have already experienced the good and only need the sample in case they don't know the title. In contrast, songs with no airplay most likely need at least one full-length sample (people can listen to the song in its entirety once), since people have no experience of them. Similarly, assuming that people are willing to give some information about their consumption habits, samples of artists they really like can be small (less than a minute), whereas samples of artists they do not know at all have to be large (the full song or album, presumably multiple times). The same goes for genres: a blues lover probably only requires short samples of blues music to decide whether or not to buy, while large samples would be needed for classical or disco music (assuming that these two genres are seldom consumed by this consumer).

-"I would find it helpful to see concrete hypothetical examples, with figures, of precisely how all this might work. What extra personal information would be given, precisely? How would suppliers calculate exactly what amount should be "given back" to which individual? Would each person get the same proportion, or would different individuals be rewarded differently? I think a major issue will be consumer trust, and transparency on the part of providers. Can consumers trust that suppliers, having been given enough information to make even more money from them, will actually then pay the due reward over to consumers? How will an individual know if the amount of "reward" rebated to them is the correct fair amount? And how do they ensure they'll receive the right amount?"
Basically, such a system requires that you completely give up your privacy (as far as your consumption of digital goods is concerned, of course). Let me give you an example related to music. The market price for a downloaded song is around £0.80. You pay this price regardless of the value the good has for you. Sometimes songs are worth more than £0.80, in which case you buy them; sometimes they are not, in which case you don't buy them (and quite likely pirate them).

The deal offered by a company could be: you sign up and they start monitoring your consumption of music, say for a month, which allows them to build a profile of your tastes. Then each time you download a song there is a trial period during which you do not pay anything (say, a week, the first 10 plays, etc.), after which, based on your profile (which is updated each time you consume a song) and on your consumption of the song, the company will charge you.

That is where the reward comes into play. Let's assume that since you downloaded the song you have been listening to it almost 24/7. Based on your profile, this means that the actual value of the song for you is £200. That is the price that a first-degree price discriminating monopoly would charge you. You would pay such a price since it corresponds to the value of this song you really like. Of course there is no monopoly in the music industry, so why would you agree to pay £200 when you can get the song for £0.80 from a competitor? Furthermore, even if you did not have a choice, why would you agree to reveal information to the company, which otherwise would not know how much you like the good?

In our system, whenever the value of a song for you is higher than the market price, you get refunded the difference. Here you would pay £200 and be immediately reimbursed £199.20.

In case you did not really like the song (you listened to it only episodically), the company will charge you accordingly. For example, based on your profile and consumption, the song has a value of £0.10, which is what you are billed.

That is the reason to give up your privacy: you never actually pay more than the market price for songs you really like. For the ones you don't really like, you may end up paying much less than the market price. For example, you download Christmas songs: you pay little because you use them once a year. You download some songs for your parents in case they stay at your place: you pay little because they don't visit that often.

This is good for artists too. The last two examples are typical motives for people to pirate. With this pricing system you still pay something to the artists (as opposed to nothing when you pirate).

Of course, at any time you can refuse the deal. If the company wants to charge you £200 and you consider this is not worth it, you just have to remove the song from your library and you won't be charged. This forces the company to be relatively honest.

Of course, you could be dishonest and download a song and, during the trial period, not listen to it, hoping that the company will think that you don't like it at all and charge you £0.0001. The system is such that, since your consumption is constantly monitored, it detects when you've changed your mind. If you have paid £200 but never listen to the song again afterwards, the value of the song will be updated and you will get a refund according to the newly calculated value. Likewise, if you pretend you don't like something during the trial period and then consume it intensively, you will be charged accordingly when the system picks up the change in value.

I hope this example answers all your questions; please let me know if it doesn't. Also, I'd be keen to have your opinion on such a pricing system. Would you adopt it?
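[To make those figures concrete, here's a minimal Python sketch of the charge-and-refund rule described in this comment - my own illustration, not the authors' implementation, assuming a market price of £0.80:]

```python
# The consumer is billed the estimated personal value of the song, then
# immediately refunded anything above the market price - so they never pay
# more than the market price for songs they love, and may pay far less for
# songs they barely use.
MARKET_PRICE = 0.80   # assumed market price of a downloaded song, in pounds

def settle(estimated_value):
    """Return (charge, refund, net_paid) for a song whose monitored usage
    implies the given personal value."""
    charge = estimated_value
    refund = round(max(0.0, estimated_value - MARKET_PRICE), 2)
    return charge, refund, round(charge - refund, 2)

print(settle(200.0))  # heavy listener: charged £200, refunded £199.20, nets £0.80
print(settle(0.10))   # episodic listener: no refund, nets just £0.10
```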

- "It may be possible to ensure certain machines are never connected to the Net or to a computer that's connected to the Net; if copies are played only on those machines, their use can never be detected and they can never be "punished"": This is a problem for all DRM systems. That is why traditional "excludable" DRM systems only allow music to be burned onto CD a limited number of times. So in this sense, these "rival" DRMs are as weak as the other DRM systems. The interest of this anonymous DRM scheme is that there are fewer and fewer devices that are not connected, so it is most of the time possible to monitor and punish. To improve the system, one could require users to pay a small fee if they want to copy music onto non-connected devices (such as CDs).

- "Here I'd have liked to see reference to studies on the links between ad income and actual market / social value, or other explanations why they're not closely coupled enough": For a relation to exist between ads revenues and social value, the more a site, blog entry, picture, etc. is socially valuable, the more ads money should go to its creator. This basically implies that the creator hosts its own website, that the good is only available through this channel and that the creator receives all the money generated through advertising. You can see why this is usually not the case. Creators may run their own website, but it is seldom the only way to access to their creation and they extremely rarely deal with advertising companies directly. This has been worsen in Web 2.0, since the content is hosted on a third party site (YouTube, mySpace, WordPress, etc.). Ads money is given to these websites and there is no reason if what is redistributed to the creators (if anything is) is related to the popularity of social value of the content.

- "Is the point here that signalling doesn't work with blogs because it's fame rather than quality which matters there? Here I'd have liked to see more evidence / research and economics explanations as to why that's the case": Signaling only works if 1) it is more costly to signal when quality is lower 2) only good quality signals. The typical example is offering a guarantee on the second hand cars. The worse the quality of the car, the more guarantee is going to be costly so it may be the case that only owner of good cars offer a guarantee. The problem with blogs is that the cost of publishing is the same regardless of the quality and there is no system that would enable good blogs to signal their quality while being too costly for bad blogs. Furthermore, the quality of a blog is very subjective and it is likely that bad quality blogs still think their blog is good. This means that even if such a mechanism existed it would not work. Think about what happened in the example of secondhand cars if owner of bad cars thought their car were good and owner of good cars could think their car is bad. In this case the fact that a car would be offered with a guarantee would only reflect what their owner believe is the quality of the car and not the actual quality of the car.

- "I'd have liked to see examples here, and suggestions of reasons based on principles of economics as to why they've not worked out and aren't considered viable": a very simple reason why it does not work is that there is always a competitor that offers the same thing for free but founded with ads. Back to square one!

- "Question: this might work for some things, but how would it work for other things where there isn't an existing known demand, like new music? Fans of well known bands may be willing to pay for them to produce more songs, but what about unknowns? I suppose this is where something like Creative Commons free samples comes into it, to build up a fan base. But then persuading people to thereafter pay for new material - is that feasible? And if even the hugely popular bestselling Stephen King couldn't get fans to pay enough for him to finish The Plant serial, what hope for others? I'd love to hear the authors' thoughts on the "Give recordings away, make money on touring and merchandising" approach, or any alternatives. Songwriting to order, back to the days of commissioned composers, Mozart etc?": Digital goods are fully excludable as long as the first unit has not been released. Of course, that means that when the demand does not exist you have to create it somehow. This is true for every product though. That is why, even for a new brand of crisps, marketing, ads and samples are needed. This is no different for digital goods. Musicians have always been aware of that. Although it may have been the case in the past that artistes have been signed my majors without touring before, most bands obtained a contract after intensive touring and promoting. Not much change in the digital age then.

As to "The Plant", I would not call it a failure. If the data given on Wikipedia is correct, Stephen King obtained, before he decided to shelve the project:
200,000 * 75% * 1$ + 200,000 * 70% * 1$ + 200,000 * 75% * 1% + 200,000 * .48 * 2$ = £632,000 (assuming a constant demand). This seems quite a success to me! If King had done that without going through its publisher, virtually all of this money would have ended up in his pocket.
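[Just to check the arithmetic, here is a quick Python sum of the figures quoted above; the download count and payment rates are the ones cited from Wikipedia in the comment, not independent data.]

```python
# Quick check of the figure above, using the quoted download count and
# payment rates (amounts in US dollars).
downloads = 200000
revenue = (
    downloads * 0.75 * 1    # an instalment at $1 with a 75% payment rate
    + downloads * 0.70 * 1  # an instalment at $1 with a 70% payment rate
    + downloads * 0.75 * 1  # an instalment at $1 with a 75% payment rate
    + downloads * 0.48 * 2  # a later instalment at $2 with a 48% payment rate
)
print(revenue)  # 632000.0
```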

I heard recently about a jazz singer in the U.S. (sorry, I can't find the reference) who sold her album even before it was recorded and collected enough money upfront to record and distribute it. So, as you mention, commissioning may be a solution. After all, this is how things were done in times when Intellectual Property Rights did not exist. Now that the illusion that IPRs can work is disappearing, maybe people will end up recognising the true nature of cultural products and go back to commissioning.

Ffflaneur said...

Re the graph – ah, a good old "demand & supply" graph!! I don't know which elements were unclear, so I'll just launch into a general descriptive rant, to be sure to get you well and truly confused and disgusted with micro-economics for years to come.

So: it's all about "which Quantity at which Price". The producer is willing to supply which Q at which P? That's the production function. The consumer is willing to buy which Q at which P? That's the consumption function. And producer and consumer will effectively trade at the market price, where the two lines (the consumption function and the production function) intersect.

There’s the downward sloping line (going through A, C, F) of the consumption function: the lower the Price the more Quantity people are willing to buy.

And there's the production function, normally upward sloping: the higher the price, the higher the quantity the producer is willing to produce.

See: http://tutor2u.net/economics/revision-notes/as-markets-consumer-surplus.html



But suppose the marginal cost (= the cost of producing an extra unit) is not rising but constant. Then, in a situation of perfect competition, the producer would be willing to offer as much Q as possible at a price covering this marginal cost: hence, in this case, a horizontal line going through D & F.

But of course there's not only the marginal cost of churning out an extra CD (for instance); there's also the whole initial investment going into the production of a CD (the scouting, the singing, the recording, the initial marketing, etc.). Just selling all those CDs at price D (marginal cost) will never pay back the initial investment costs. So a producer will only launch into the whole enterprise if he's assured he can ask a higher price than just the marginal cost of making an extra CD.

In this example that is the price B: the market price in a "monopolistic" situation. I call it monopolistic because, to be able to ask price B for the CD, the producer must be protected against pirates offering that same CD at a lower price (e.g. at a price just covering the copying costs). Hence Intellectual Property rights, patents, etc. (The same argument goes for medical drugs: they're cheap to copy but very expensive to develop initially, so society has to offer some protection against copies, or else no one would have enough of a commercial stimulus to do any R&D.)
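[A crude back-of-the-envelope illustration of that break-even point, in Python; all the numbers are invented purely for the sake of the example.]

```python
# Crude break-even illustration: with a constant marginal cost, the price must
# also recoup the up-front investment, so it has to sit above marginal cost.
fixed_cost = 100000.0    # scouting, recording, initial marketing, ... (invented)
marginal_cost = 1.0      # cost of pressing one extra CD (price D in the graph)
expected_sales = 50000   # units the producer expects to sell (invented)

def minimum_viable_price(fixed_cost, marginal_cost, expected_sales):
    """Lowest price at which the whole enterprise pays back the initial investment."""
    return marginal_cost + fixed_cost / expected_sales

print(minimum_viable_price(fixed_cost, marginal_cost, expected_sales))  # 3.0, well above the 1.0 marginal cost
```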

Now about the consumer surplus: the point C is where the consumption function and the production function intersect: at price B an amount Z will be sold (they should have indicated "Z" as the point projected by C onto the Q-axis). But in fact some consumers would have bought the product at a higher price too (they're said to have a low price-elasticity): take a price between A and B on the P-axis and project it onto the demand function: there is indeed still a corresponding Q of consumer demand.

Let me take myself as an example: a 3-CD box of Bach cantatas recorded by Koopman costs 50 Euro. If it cost 75 Euro, I would still buy it … so the 50 Euro price represents a "gift" of 25 EUR for me, my "surplus". In the graph context: at market price B, the "consumer surplus" is the area ABC: there are some consumers who would have been willing to buy at prices above B too, but now get the goodies at B.

A producer could capture that "consumer surplus" if he could better discriminate amongst consumers: e.g. offer a cheap Best-of-Bach box to the masses at 49.99 EUR, an Eternal-Bach-Selection box at 55 EUR for price-conscious Bach-lovers, and a Super-Bonus-Bach-Box at 75 EUR for woefully un-price-conscious Bach-devotees like me.
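[For anyone who prefers numbers to triangles, here is a tiny Python version of the same argument; the linear demand curve and all the figures are made up purely for illustration.]

```python
# A numerical version of the triangle ABC: a linear demand curve, a market
# price B, and the surplus enjoyed by consumers who would have paid more.

def quantity_demanded(price):
    """Downward-sloping demand line: Q = 1000 - 10 * P (zero at and above P = 100)."""
    return max(0.0, 1000 - 10 * price)

def consumer_surplus(price):
    """Area of the triangle between the demand line and the market price."""
    choke_price = 100.0  # price at which demand falls to zero (point A)
    return 0.5 * (choke_price - price) * quantity_demanded(price)

market_price = 50.0                      # price B
print(quantity_demanded(market_price))   # 500.0 units sold (the quantity Z)
print(consumer_surplus(market_price))    # 12500.0: value consumers get "for free" (area ABC)

# First-degree price discrimination would let the producer capture that 12500
# by charging each buyer close to their individual willingness to pay
# (the Super-Bonus-Bach-Box strategy, pushed to its limit).
```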

CrisisMaven said...

A great, very comprehensive post! I will reference it in an upcoming post, thanks. In a similar vein: for all students (at least high-schoolers and undergraduates) and others with an interest in, or enrolled in, economics, I have started a blog which will comprise study literature in a more entertaining form than standard textbooks; see CrisisMaven's Economics Study Guide. While still in its initial stages it will be added to constantly, and it covers general economics subjects while at the same time dealing with current topics from the news that provide a welcome backdrop to an elucidating chapter in economics as well, e.g. Of Mortgage Brokers, ARMs, Attrition and Marathons.
CrisisMaven’s Economics Study Guide. While still in its initial stages it will be added to constantly and covers general ecenomics subjects while at the same time dealing with current topics from the news that provide a welcome backdrop to an elucidating chapter in economics as well, e.g. Of Mortgage Brokers, ARMs, Attrition and Marathons .